SeeMove is an object, pose and gesture recognition technology aimed at end users and developers. The launch demo pays homage to the interfaces used by Tony Stark in Iron Man and Tom Cruise in Minority Report. A video showing SeeMove in action is on
Unlike its competitors, SeeMove can see intricate detail from a distance, enabling it to track objects as well as people, and it is not limited to the desktop, a specific camera, operating system or device. Founder Evan Grant says: “SeeMove is a system that can learn and track anything that moves!”
The team behind SeeMove have succeeded in tracking greater complexity with the Microsoft Kinect camera than Microsoft itself has demonstrated.
SeeMove can learn to recognise anything, enabling users to train their own objects and design their own poses and gestures, turning them into actions on a device: for example, playing a video on a tablet by making a triangle hand shape, or browsing the contents of a phone by holding it in front of a camera. Grant continues: “Existing systems are concerned with trying to understand what they are seeing, for example is it a hand, an arm and so on. SeeMove doesn’t care what it is, it can learn anything, so it becomes more intelligent the more information it’s given.”
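SeeMove’s SDK has not yet been published, so the following is a purely hypothetical sketch of the train-your-own-gesture idea the paragraph above describes, not SeeMove’s actual API. It uses a toy nearest-centroid classifier in Python: poses are encoded as flat vectors of landmark coordinates (the encoding, class names and labels are all illustrative assumptions), labels are learned from examples, and recognised labels trigger device actions.

```python
import math

class GestureRecognizer:
    """Toy nearest-centroid recogniser: learns labelled pose vectors
    and maps recognised labels to device actions (callbacks)."""

    def __init__(self):
        self.centroids = {}  # label -> (running sum vector, sample count)
        self.actions = {}    # label -> callable to run on recognition

    def train(self, label, pose):
        # Accumulate examples so the centroid is the mean of all training poses.
        total, n = self.centroids.get(label, ([0.0] * len(pose), 0))
        self.centroids[label] = ([a + b for a, b in zip(total, pose)], n + 1)

    def on(self, label, action):
        # Bind a gesture label to a device action.
        self.actions[label] = action

    def recognize(self, pose):
        # Return the label whose centroid is closest to the observed pose.
        best, best_d = None, float("inf")
        for label, (total, n) in self.centroids.items():
            centroid = [v / n for v in total]
            d = math.dist(centroid, pose)
            if d < best_d:
                best, best_d = label, d
        return best

    def handle(self, pose):
        # Recognise a pose and fire its bound action, if any.
        label = self.recognize(pose)
        if label in self.actions:
            self.actions[label]()
        return label

# Hypothetical usage: map a "triangle" hand shape to playing a video.
rec = GestureRecognizer()
rec.train("triangle", [0.0, 1.0, 1.0, 0.0, 0.5, 0.9])
rec.train("open_hand", [0.0, 0.0, 1.0, 1.0, 0.5, 0.1])
rec.on("triangle", lambda: print("play video"))
rec.handle([0.1, 0.9, 0.9, 0.1, 0.5, 0.8])  # closest to the "triangle" centroid
```

The point of the sketch is the one Grant makes: the recogniser has no notion of hands or arms, only labelled vectors, so it can be trained on any moving thing the camera can encode.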
In the launch demo you can browse videos, view pictures and draw 3D models on your devices. Grant continues: “Rather than a vision of a new interface, this is a demo of SeeMove’s underlying technology and what it can do. We’re keen to get it into the hands of other developers to see how they’d like to use it!”
SeeMove will be released as a middleware SDK, allowing developers to create their own experiments and applications. “We’d love to evolve this technology for different uses, such as sign language tracking,” commented Grant, citing just one example from a wide range of uses being explored, including games, retail, advertising, health care, education, manufacturing, engineering, design and much more.