Monday, October 24, 2011

Making a TV for Steve

The TV space is ripe for disruption, and I keep seeing speculation about an Apple Television. In interviews for his biography, Steve Jobs claimed to have "finally cracked" the TV user experience.

Last week I presented my start-up ZigFu at the Microsoft Venture Capital Summit and said that motion-control technology like Kinect will be as disruptive to the TV as the remote control, or possibly even color (video at the end of this post). We want to make a TV with a motion-controlled user experience that Steve Jobs would have been proud to demo. A TV designed for Steve would let you watch a movie, or search for a video or song, in no more than a few gestures. You'd never be far from Netflix, Amazon, eBay, or even ordering a pizza with gestures.

In addition to a world-class TV user experience using gestures, the other major differentiating factor for success in the gesture-controlled smart TV race is an app ecosystem. It's forming now. You might have noticed the storm of Kinect hacks coming out of academia and indie studios. You might also have read in my other posts that I'm running a fresh-out-of-YCombinator startup supporting developers with UI libraries, tools, and an app store for motion-controlled apps.

I may be drinking my own Kool-Aid, but I feel like I ought to put a stake in the ground now, especially while I'm busy hustling to find investors (contact amir at zigfu.com). At any given time there are only a few next-big-things in technology, and right now motion control is one of them. Kinect was huge, and from an insider's perspective it looks like there's a tsunami coming that will transform the way we consume and interact with media.

The next wave of computer-user interaction to come after touch will be motion. We will take for granted that computers can see us and react to our movements. Motion will be a defining feature of the smart TV category in much the same way touch was to smart phones. Motion, combined with voice and smartphone control, will be as ubiquitous as the remote control.

The next evolution of TV will bring apps like Facebook, Skype, and YouTube, and of course features like video sharing. These applications will make TV a more social experience (this post is fully buzzword-compliant). These apps would kinda suck with a remote, but motion control can make them, along with all the cool features we already have on TV, much better. Football fans know what I mean when I say I wish I could control replays with something better than the remote; we want to reimagine the DVR experience with motion control. Many totally new applications will emerge in this ecosystem as well. We are already seeing several groups build virtual fitting rooms for trying on clothes and motion games that will be bought through an app store.
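To make the replay idea concrete, here's a minimal sketch of what motion-controlled DVR scrubbing could look like: a horizontal hand swipe maps to a seek offset. The function names and the meters-to-seconds scale factor are purely illustrative assumptions, not anything we've shipped.

```python
# Hypothetical sketch of motion-controlled replay scrubbing: map a
# horizontal hand swipe to a DVR seek offset, so a football fan can
# shuttle through a replay without hunting for the remote.
def scrub_offset(hand_dx, seconds_per_meter=30.0):
    """Translate horizontal hand displacement (meters) into a
    seek offset in seconds; swipe right to go forward."""
    return hand_dx * seconds_per_meter

def seek(position, hand_dx, duration):
    """Apply the swipe and clamp the new playback position to the
    length of the recording."""
    new_pos = position + scrub_offset(hand_dx)
    return min(max(new_pos, 0.0), duration)
```

A half-meter swipe to the right would jump the replay forward fifteen seconds; swipes past the start or end of the recording just pin to the boundary.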

The network is mature enough for these applications to emerge. The technology to build all of this exists today, but it is not yet accessible to developers. We're still at the beginning of this control paradigm, and we haven't settled on best practices for interacting with devices through gestures. Kinect needs something like Cocoa to provide lists and menus. That's what ZigFu is building.
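To give a flavor of what a gesture UI toolkit has to handle, here's a toy hover-to-select menu: the user holds a hand over an item for a moment to pick it. Every name here is an illustrative assumption, not the actual ZigFu API.

```python
import time

# Toy gesture-menu widget: dwell over an item long enough and it fires.
class HoverMenu:
    def __init__(self, items, dwell_seconds=1.0):
        self.items = items          # labels laid out left to right
        self.dwell = dwell_seconds  # hover time needed to select
        self._hovered = None        # index currently under the hand
        self._since = None          # when the hand arrived there

    def _item_at(self, x):
        # Map a normalized hand x position (0..1) to an item index.
        idx = int(x * len(self.items))
        return min(max(idx, 0), len(self.items) - 1)

    def update(self, hand_x, now=None):
        """Feed one tracked hand position per frame; returns the
        selected label once the user has dwelled long enough,
        otherwise None."""
        now = time.monotonic() if now is None else now
        idx = self._item_at(hand_x)
        if idx != self._hovered:
            self._hovered, self._since = idx, now
            return None
        if now - self._since >= self.dwell:
            self._since = now  # re-arm so we don't fire every frame
            return self.items[idx]
        return None
```

The point of a toolkit is that app developers never write this timing logic themselves; they get lists and menus that already behave consistently, the way Cocoa developers do.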

Tracking the nerd news on the inter-tubes, I've noticed an onslaught of press releases about Xbox Live with TV channels and Kinect control, and several other companies shipping Wii-like controllers. Speculation abounds about an Apple Television product that may eventually be announced with voice-control features like Siri. Maybe Google will play in this space and add a motion layer to their Google TV offering: with Motorola they acquired a major player in the set-top-box space. We think this ecosystem is new and different enough from anything that's come before it that perhaps our start-up can carry the flag and beat the big guys to market.

Once motion control is obviously a differentiating factor in TV sales, all the vendors will embrace the category of motion-controlled smart TVs with apps and games. Some combination of a camera and a microphone array will be integrated into every television, and your smartphone or tablet touchscreen will be able to send control signals to the TV as well.

After I presented at YCombinator's demo day, someone approached me and said, "Your demo has Apple written all over it." And that's pretty much what we set out to do: we're making a TV for Steve.

Here's a video of my presentation about the work we're doing at ZigFu:



My demo isn't totally polished yet, but you can see where we're going.

Wednesday, August 24, 2011

ZigFu - Motion Apps

Well, the cat's out of the bag. We've been doing YCombinator this summer developing a motion apps company we're calling ZigFu. Something's gotta pay for all the lasers...

Today was the YC demo day, and we got some nice press in GigaOM and Forbes, which put us on their short-lists.

And that's one demo day down... still another one tomorrow, but now they'll already be anticipating something awesome, so I'm probably going to have to turn it up to 11. Maybe I'll do the Hokey Pokey for them (we had Paul Graham do the Hokey Pokey during our YC interview ;)

So what's ZigFu? (For great justice.) This blog has a bunch of posts showing the Unity bindings for OpenNI. Over the summer we've built a set of UI components for developing apps using motion control, and the plan is to launch a portal for motion apps next.

The goal here is to integrate a ton of existing computer vision work into the OpenNI framework so we can track hands, skeletons, and faces. We think motion-sensing technology will create a whole new market for smart TV apps, and we're forming the platform-independent vendor required to build the application layer that sits above the hardware and the computer-vision middleware.

We want to make the remote control obsolete.

Friday, March 11, 2011

The Persistence of Data: Why the future is write-once

I went to an interesting discussion called "Big IT meets Big Web" today, hosted by Battery Ventures, which brought together a bunch of nerds to talk about the harmonic convergence of the IT and Web worlds and deploying services in the cloud. One idea that struck me was that many of these companies have huge data stores that are never erased. This means the magnetic disks and solid-state drives we use today have too many features: they do not need to be erasable/reprogrammable. The vast majority of the storage these companies need is "Write Once, Read Many," or WORM.

Crystal ball: in the near future, reprogrammable flash will be replaced with write-once PROM for this kind of storage. PROMs will be cheaper if designed as write-once fuse-array structures. A crossbar-and-fuse array can be higher density than a floating-gate flash design. It may also require simpler processing steps, though the row decoders and sense amplifiers required by both types of storage may be the determining constraint in that regard.
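The semantics of such a device are easy to pin down in software. Here's a toy model of write-once storage: each cell starts unprogrammed and can be written exactly once, like a blown fuse. This is purely illustrative of the WORM contract, not any real PROM interface.

```python
# Toy model of write-once (WORM) storage. A cell holding None
# represents an intact fuse; programming it is irreversible.
class WormStore:
    def __init__(self, size):
        self._cells = [None] * size  # None = fuse intact / unwritten

    def write(self, addr, value):
        # A programmed cell can never be rewritten or erased.
        if self._cells[addr] is not None:
            raise ValueError("cell %d already programmed" % addr)
        self._cells[addr] = value

    def read(self, addr):
        return self._cells[addr]
```

An append-only log layered on cells like these never needs an erase cycle at all, which is exactly the property that makes the fuse array attractive.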

Wednesday, March 02, 2011

Cutting with Kinect





( tinkerheavy.com ) Using Kinect to make Little Red Riding Hood cut up stuff. Fruit Ninjas and Veggie Samurais better get ready for the Sushi Wars.

I'm using the Object Slicer package and my Unity wrapper for OpenNI.NET / Kinect.


Friday, January 21, 2011

More Kinect Hacking

(tinkerheavy.com) This is a demonstration of the Unity wrapper for OpenNI. My skeleton is tracked as I move in front of the Kinect. My feet are rigid bodies that can kick the boxes. The boxes can also be carried if two hands interact with one simultaneously.
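The core trick behind the kicking is simple: estimate a tracked joint's velocity between frames so the physics engine has a plausible impulse to apply when the foot's collider hits a box. A minimal sketch of that idea, with illustrative names and a made-up foot mass rather than the actual OpenNI or Unity API:

```python
# Sketch: finite-difference velocity of a tracked joint, and the
# impulse a physics engine could apply on contact. Illustrative only.
def joint_velocity(prev_pos, curr_pos, dt):
    """Velocity of a joint between two frames.
    Positions are (x, y, z) tuples in meters; dt is in seconds."""
    return tuple((c - p) / dt for p, c in zip(prev_pos, curr_pos))

def kick_impulse(velocity, foot_mass=1.0):
    """Impulse (mass * velocity) to hand to the physics engine
    when the foot's collider contacts a box."""
    return tuple(foot_mass * v for v in velocity)
```

In the actual demo the engine's rigid-body contact resolution does this work for you once the feet are rigid bodies driven by the tracked skeleton.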

If you already have OpenNI+NITE working with your Kinect, the binary for this project (PC and Mac) is here: http://tinkerheavy.com/unityskeleton.zip
To get it working yourself, you'll need OpenNI and NITE from OpenNI.org (to use the Kinect, get SensorKinect from https://github.com/avin2/SensorKinect instead of the OpenNI.org sensor module).

The code and Unity project are available on my git:
https://github.com/tinkerer/UnityWrapper