EE Times | The News Source for the Creators of Technology
Electronic Engineering Times, January 17, 2011 | www.eetimes.com

FEATURE

3-D gesture control breaks out of the game box
By R. Colin Johnson

THIS COULD BE THE YEAR 3-D gesture recognition proves it’s not just child’s play. Several years after its first consumer market appearance in the wireless gaming interface for Nintendo’s Wii, MEMS sensor-based gesture recognition is extending its reach to smartphones and is set to take hold of that most iconic of consumer interfaces: the TV remote.

Since the Wii’s 2006 release, Nintendo’s competitors have spun their own versions of 3-D gesture recognition and processing. Sony tuned the PlayStation Move controller for hardcore gamers seeking pinpoint accuracy; Microsoft took the gaming interface hands-free with the Xbox Kinect.

Apple was the first to pick up on microelectromechanical sensors’ potential for building more intuitive smartphone interfaces; it added MEMS accelerometers to the iPhone in 2007 and a MEMS gyroscope in 2010. Its competitors have followed suit, and soon 3-D commands such as shake-to-undo, lift-to-answer and face-down-to-disconnect will be standard smartphone fare.

Today, consumer OEMs are adding 3-D gesture recognition across their product lines. Some are using camera-based techniques licensed from GestureTek Inc. (Sunnyvale, Calif.); others have licensed MEMS approaches from Hillcrest Laboratories Inc. (Rockville, Md.) or Movea Inc. (Grenoble, France). Movea holds more than 250 related patents, covering such techniques as the use of a gyroscope to control cursors; Hillcrest holds more than 100, including a patent on the use of an accelerometer with a gyroscope for tracking motion. Both companies also offer value-added software development tools for 3-D gesture designers (Movea’s Gesture Builder and Hillcrest’s Freespace MotionStudio).
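To make the idea concrete, here is a minimal sketch of how two of the smartphone gestures mentioned above might be recognized from raw accelerometer samples: a face-down flip detected from the gravity vector, and a shake detected from short-term variance of the acceleration magnitude. This is not Movea's or Hillcrest's actual algorithm; the class name, thresholds and window size are illustrative assumptions.

```python
import math
from collections import deque

G = 9.81  # gravity, m/s^2


class GestureDetector:
    """Toy recognizer for two accelerometer gestures: 'face_down' and 'shake'.

    Thresholds and window sizes are illustrative guesses, not values from
    any commercial motion-processing library.
    """

    def __init__(self, sample_rate_hz=50, window_s=0.5):
        self.window = deque(maxlen=int(sample_rate_hz * window_s))

    def update(self, ax, ay, az):
        """Feed one accelerometer sample (m/s^2); returns a gesture name or None."""
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        self.window.append(mag)

        # Face-down: gravity points along -z of the device and the phone is
        # roughly still (magnitude close to 1 g).
        if az < -0.9 * G and abs(mag - G) < 0.15 * G:
            return "face_down"

        # Shake: large spread of |a| over the recent window.
        if len(self.window) == self.window.maxlen:
            mean = sum(self.window) / len(self.window)
            var = sum((m - mean) ** 2 for m in self.window) / len(self.window)
            if var > (0.8 * G) ** 2:
                return "shake"
        return None
```

A production motion-processing stack would run this kind of logic on fused, calibrated sensor data and debounce the output so a single physical shake is not reported dozens of times per second.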

In 2009, the Massachusetts Institute of Technology demonstrated how the addition of a layer of photodiodes to the backside of a liquid-crystal display lets the LCD recognize hand gestures made in front of the screen. SOURCE: MIT, EE Times


An actor (center) wearing Xsens’ MVN Motion Capture lycra suit—which is studded with MEMS inertial sensors from Analog Devices—mimics the pose of a character in a Marvel Comics “Iron Man” illustration (l.) as the basis for creating an animation for Paramount Pictures’ “Iron Man 2” (r.). SOURCE (l.-r.): Marvel, Xsens, Paramount

Google, for its part, has added MEMS-based gesture recognition application programming interfaces to the Gingerbread release of the Android OS, which recognizes such gestures as tilt, spin, thrust and slice.

“Motion processing has finally been accepted by the mainstream,” said Steve Nasiri, founder of InvenSense Inc. (Sunnyvale), the first MEMS chip maker to combine an accelerometer and gyroscope on one die. “We predict that the hardware for motion processing and gesture recognition will become as ubiquitous in smartphones as the camera module.”

InvenSense’s gyroscopes and accelerometer/gyroscope combo chips embed a motion processor to execute the complex sensor fusion algorithms necessary to recognize a user’s gestures, offloading the task from the application processor. The company plans to combine an accelerometer, gyroscope and magnetometer (e-compass) on a single die by next year.

The InvenSense Motion Processing Library turned up at the International Consumer Electronics Show in both the first television remote control to harness 3-D gesture recognition and the first smartphone to apply it to primary phone functions (such as answering the phone merely by lifting it to the ear). Both are LG Electronics products. The Magic Motion remote is used with LG’s Infinia line of 3-D TVs. LG’s 9.2-millimeter-thin Optimus Black smartphone, touted as the slimmest smartphone available, recognizes several unique gesture-based commands.

Other MEMS chip makers have likewise incorporated gesture recognition algorithms into their accelerometers and gyroscopes. Kionix Inc. (Ithaca, N.Y.), for example, offers dozens of models with built-in gesture-recognition algorithms, and its Gesture Designer software development suite lets OEMs design their own gesture-based controls.

“The TV remote-control guys are making a big push to bring gesture recognition into its own, requiring very sophisticated use of motion,” said Kionix CEO Greg Galvin. “The holy grail here is the convergence of audiovisual input into the TV, allowing you to change channels, download music, look at your library of photos, do texting or surf the Internet, all with a single controller.”

Running apps and navigating Web content on an IPTV require a remote control that is capable of mouse-like accuracy for both point-and-click and gesture-based control. “For these apps, you need MEMS,” Galvin said.

Hillcrest founder and CEO Dan Simpkins proclaimed 2011 “the year of the smart TV,” adding, “For the first time in more than 50 years, a new input technology has come to the market for television.” Hillcrest’s Loop pointer, an in-air mouse designed for consumers who connect their computers to a television, uses its Kylo Browser for IPTV. The Magic Motion remote that LG showed at CES uses Hillcrest’s Freespace gesture recognition technology to let users navigate complex point-and-click on-screen interfaces for Web-based and conventional TV content.

Competing approaches include Philips’ uWand. For its handheld controller, Philips opted for an integrated infrared camera that senses IR beacons in the TV to enable accurate motion tracking without requiring a gyroscope. Most other IPTV remotes, however, use MEMS gyros. Movea, for instance, announced at CES that laptop keyboard vendor Sunrex Corp. (Taiwan) will ship controllers later this year that use Movea’s MEMS-based MotionIC platform and SmartMotion technology for 3-D gesture recognition.

“The next generation of motion remotes will recognize all sorts of new gestures,” said Dave Rothenberg, worldwide marketing manager for Movea. “For example, Mom and Dad will be able to unlock adult content on the TV by waving their signatures in the air, whereas when the kids come in the room and do the same thing, the parental controls will be activated.”
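The mouse-like pointing described above can be reduced, in its simplest form, to integrating a remote's gyroscope rates into cursor motion. The sketch below is an illustrative approximation only, not Hillcrest's Freespace or Movea's SmartMotion; the gain, dead band and screen size are assumed values.

```python
class AirPointer:
    """Map gyroscope angular rates (rad/s) to on-screen cursor deltas.

    A deliberately simplified model of an in-air pointing remote: yaw rate
    moves the cursor horizontally, pitch rate vertically. Gain and dead-band
    values are made-up illustrative numbers.
    """

    def __init__(self, screen_w=1920, screen_h=1080, gain=900.0, dead_band=0.01):
        self.x, self.y = screen_w / 2, screen_h / 2
        self.screen_w, self.screen_h = screen_w, screen_h
        self.gain = gain            # pixels per radian of rotation
        self.dead_band = dead_band  # rad/s; ignore tiny rates (tremor, gyro bias)

    def update(self, yaw_rate, pitch_rate, dt):
        """Integrate one gyro sample taken dt seconds after the previous one."""
        if abs(yaw_rate) > self.dead_band:
            self.x += yaw_rate * dt * self.gain
        if abs(pitch_rate) > self.dead_band:
            self.y -= pitch_rate * dt * self.gain  # pitching up moves the cursor up
        # Clamp to the screen.
        self.x = min(max(self.x, 0), self.screen_w - 1)
        self.y = min(max(self.y, 0), self.screen_h - 1)
        return self.x, self.y
```

The dead band crudely hides gyro bias drift; real products estimate and subtract the bias continuously, often with help from the accelerometer, instead of simply ignoring small rates.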


A CHANGE BELOW THE SURFACE

Microsoft brought the next generation of its Surface multitouch platform to CES, ditching the five-camera setup used in the first generation to yield a thinner model that can be mounted vertically. The Surface lets users directly manipulate objects displayed on the screen and has built-in algorithms for interpreting gestures for selecting, dragging, dropping, pinch-to-zoom and other touchscreen-like commands (OEMs can also develop their own algorithms). The new version “is just 4 inches thick, allowing it to be mounted horizontally, vertically or at any other angle,” said Brad Carpenter, general manager of Microsoft’s Surface team.

The key is Microsoft’s PixelSense technology, created in collaboration with Samsung’s LCD Group (which will market the resultant panel as the SUR40). PixelSense adds a light sensor to each LCD pixel for either visible or infrared light in an alternating, checkerboard pattern. Visible-light and infrared emitters in the backlight allow each corresponding pixel to sense the light reflected from users’ hands or other objects. By sampling the sensors at 60 frames/second, the technology can simultaneously track multiple users’ motions, with the number limited only by the available surface area on the screen.

A built-in FPGA supports location tracking and can read application-specific tags on objects placed on the panel’s surface, as well as transfer data via infrared transmission to camera-equipped Windows 7 smartphones. For application programs, the Surface houses Advanced Micro Devices’ Athlon X2 245e 2.9-GHz dual-core processor running Windows 7 and a companion AMD Radeon HD 6750 graphics processor.

“The 40-inch screen will be marketed to businesses worldwide by Samsung’s LCD Group at a price of $7,600,” Microsoft’s Carpenter said. — R. Colin Johnson
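As a rough illustration of the per-frame processing the sidebar describes, the sketch below takes one frame from a grid of per-pixel light sensors and groups bright (reflecting) pixels into separate touch blobs. It is a generic connected-component pass, not Microsoft's FPGA implementation, and the threshold is an assumed value.

```python
def find_touch_blobs(frame, threshold=128):
    """Group bright pixels of a 2-D sensor frame into blobs (candidate touches).

    frame: list of rows of integer sensor readings (0-255).
    Returns a list of blobs, each a list of (row, col) pixel coordinates.
    Plain 4-connected flood fill; real hardware repeats this at 60 frames/s.
    """
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or frame[r][c] < threshold:
                continue
            # Flood-fill one connected bright region.
            stack, blob = [(r, c)], []
            seen[r][c] = True
            while stack:
                y, x = stack.pop()
                blob.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and frame[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            blobs.append(blob)
    return blobs


# Each blob's centroid can then be tracked from frame to frame to follow
# individual fingers or tagged objects.
```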


Philips’ uWand technology aims for the same applications as MEMS-equipped controllers but does not use MEMS devices. Instead, it integrates an infrared camera to sense IR beacons from the TV, thus enabling motion tracking without a gyroscope. SOURCE: Philips
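A hedged sketch of the general idea behind beacon-based pointing, not Philips' actual uWand algorithm: if the handheld's IR camera sees two beacons mounted on the TV, the midpoint of their image positions indicates where the remote is aimed, and their apparent separation gives a rough range estimate. The camera resolution, beacon spacing and focal length below are illustrative assumptions.

```python
def estimate_pointing(beacon_a, beacon_b, img_w=640, img_h=480,
                      beacon_spacing_m=0.20, focal_px=500.0):
    """Estimate aim point (normalized 0..1 screen coords) and distance to the TV.

    beacon_a, beacon_b: (x, y) pixel positions of the two IR beacons in the
    remote's camera image.
    """
    mx = (beacon_a[0] + beacon_b[0]) / 2.0
    my = (beacon_a[1] + beacon_b[1]) / 2.0

    # The beacons drift toward one side of the image as the remote is aimed
    # toward the opposite side of the screen, so invert the offset.
    aim_x = 1.0 - mx / img_w
    aim_y = 1.0 - my / img_h

    # Pinhole-camera range estimate from the beacons' apparent separation.
    sep_px = ((beacon_a[0] - beacon_b[0]) ** 2 +
              (beacon_a[1] - beacon_b[1]) ** 2) ** 0.5
    distance_m = focal_px * beacon_spacing_m / sep_px if sep_px else float("inf")

    return (aim_x, aim_y), distance_m
```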

Microsoft’s Kinect, meanwhile, challenges the conventional wisdom by moving the gesture-sensing hardware, which includes a MEMS accelerometer, out of users’ hands and into the head unit. Microsoft developed its own 3-D recognition algorithms for Kinect based on optical recognition technology licensed from GestureTek. Kinect classifies gestures within the strict confines of actions in a particular game, such as virtual volleyball.

The technology segments images by projecting a regular array of infrared dots onto the player with a laser, then measuring the reflected intensity of each dot. Less intensely reflected dots are assumed to be reflected from the background; more intense dots are assumed to come from the user in the foreground. Kinect then animates an avatar with its best guess of the user’s actions. A MEMS accelerometer from Kionix helps aim the cameras at the user more accurately. The technique sacrifices some accuracy in exchange for the user’s mobility, according to analysts.
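Below is a simplified sketch of the segmentation step just described: given the measured return intensity of each projected IR dot, dots brighter than a threshold are attributed to the player in the foreground. It is a toy illustration rather than Microsoft's production pipeline, and the threshold and data layout are assumptions.

```python
def segment_player(dot_intensities, threshold=0.6):
    """Split projected IR dots into foreground (player) and background sets.

    dot_intensities: dict mapping (row, col) dot grid positions to normalized
    reflected intensity in [0, 1]. Brighter returns are assumed to come from
    the nearer surface, i.e. the player.
    """
    foreground, background = [], []
    for pos, intensity in dot_intensities.items():
        (foreground if intensity >= threshold else background).append(pos)
    return foreground, background


# Example: a 3-dot strip where the middle dot reflects strongly (the player).
dots = {(0, 0): 0.2, (0, 1): 0.8, (0, 2): 0.3}
fg, bg = segment_player(dots)
print(fg)  # [(0, 1)]
```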

“I do not believe that the camera-based recognition system from Microsoft is accurate enough to satisfy many gamers, who will probably want to continue holding the controller, making the Sony Move a better candidate for hard-core gamers,” said iSuppli senior analyst Jérémie Bouchaud.

“Microsoft’s solution works for the audience it is targeting: families that want to jump in and out of a game quickly and want an easy and immediate experience,” said Piers Harding-Rolls, head of games at IHS Screen Digest. “Sony’s Move, on the other hand, is a hybrid solution, using sensors to track motion and a camera to track position. At this stage, Sony’s theory is to target enthusiast gamers with more accurate sensor technology.”

Camera-based technology like Kinect’s, said Gartner analyst Jim Tully, “is not the endgame for gesture recognition. It has a place, but accelerometer/gyro combos also have a place. For instance, the Kinect camera can’t detect complex movements in a multiuser situation when one user is blocked by another user. It is also not so good when a user turns his back on the camera . . . These situations would need multiple cameras, [which would not be] very feasible in most situations.”
But GestureTek, whose technology already tracks the 3-D motion of millions of cell phones by observing a changing camera image, claims that optical gesture recognition will eventually outperform MEMS-based devices. “Today, the resolution and accuracy of optical gesture recognition are not as good as when using MEMS inertial sensors, but they’re good enough for most games,” said Vincent John Vincent, co-founder and president of GestureTek. “And as camera resolution gets better, we believe optical gesture recognition will eventually surpass MEMS by enabling devices to track the movement of every part of your body, with pixel-level accuracy.”

With OEM algorithm support, Microsoft’s Surface can read “hovering” gestures. The redesigned panel can hang on a wall (see “A Change Below the Surface”).

As much as hard-core gamers might value precision, it is less important for gaming itself than for game development or special-effects film animation. For animation pros, the Cadillac of gesture recognition is Xsens Technologies’ MEMS-studded bodysuit. The accelerometers and gyros on the suit enable the previsualization of animation sequences in real time.

Xsens uses Analog Devices’ high-precision three-axis accelerometers, gyroscopes and magnetometers for detailed motion tracking. Used by the pros who created the effects for the movie “Iron Man 2” and the PS3 game “KillZone 2,” for example, the Xsens technology offers a motion capture solution that can be used anywhere, without the need for a complex infrastructure. Eventually, Xsens predicts, the technology will be cost-reduced for consumer applications, enabling a Kinect-like experience but with far higher fidelity and with no limitation on the number of players.

“Microsoft’s Kinect is an elegant solution, since it does not require any sensors on the body, but as a result it is slower and sometimes sluggish in tracking human gestures,” said Casper Peeters, CEO of Xsens (Los Angeles). “Our technology is much more flexible in terms of where you can use it, and it achieves higher fidelity in tracking the wearer’s precise movements. But motion-based game controllers and phone interfaces are just beginning to emerge. Xsens operates at the other end of the spectrum, enabling high-end motion capture for precise character animation, with many more interesting applications emerging in the future.”

Microsoft’s Xbox Kinect is based on a PrimeSense reference design that uses two CMOS imagers (one for infrared and one for visible light) to sense 3-D depth, so the system can easily distinguish between players and background objects in the room.

Microsoft, also with an eye on the future, plans to harness the 3-D tracking technology it obtained when it acquired 3DV and Canesta in 2009 and 2010, respectively. Those companies have virtually cornered the market in time-of-flight gesture-recognition patents, especially for mobile devices. Time-of-flight sensors measure the time it takes an infrared beam to bounce off objects and return to a special CMOS sensor, yielding a highly accurate 3-D depth map of any scene at any distance and in any lighting. Time-of-flight depth map technology also dovetails nicely with the 3-D camera-based gesture recognition algorithms Microsoft developed through its GestureTek license.

TriDiCam GmbH (Duisburg, Germany) and a few others claim to have time-of-flight sensor capability. But thus far only Canesta has proved the concept, using a CMOS image sensor to create a precise 3-D image map of hands hovering just inches above a mobile device, even outside in bright sunlight.

Companies such as Silicon Labs Inc. (Austin, Texas), meanwhile, have inexpensive infrared and ambient-light sensors for recognizing application-specific gestures, such as turning on a display or adjusting a volume level by drawing a line in the air with a finger.
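The time-of-flight principle described above comes down to one line of arithmetic per pixel: depth is half of the round-trip distance traveled by the light. The sketch below applies that to a grid of per-pixel round-trip times; the function name and data format are assumptions for illustration, not any vendor's API.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s


def depth_map_from_round_trips(round_trip_times_s):
    """Convert per-pixel round-trip times (seconds) into a depth map (meters).

    round_trip_times_s: 2-D list of times for an emitted IR pulse to reach the
    scene and return to the sensor. Depth is half the round-trip path length.
    """
    return [[SPEED_OF_LIGHT * t / 2.0 for t in row] for row in round_trip_times_s]


# Example: a hand 0.5 m away returns light after roughly 3.34 nanoseconds.
frame = [[3.336e-9, 6.671e-9],
         [3.336e-9, 6.671e-9]]
for row in depth_map_from_round_trips(frame):
    print([round(d, 3) for d in row])  # ~[0.5, 1.0] meters per row
```

Practical sensors usually measure the phase shift of a modulated IR signal rather than timing individual pulses, but the depth calculation reduces to the same half-round-trip idea.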
