In the beginning of 2009 I was investigating a couple of different business ideas. I read a lot about wearable computer systems and had some serious thoughts about building and selling such systems, but I instead ended up with some home automation ideas that I’m going to work on. Below is a draft article with my notes from this period. A lot of the text is copied from the Internet; I used it as notes with the intention of rewriting it.
This document describes a vision of a future wearable computer system. The system will enable a range of new software and new ways of using software. Over time such a system can replace mobile phones, handheld, home and office computers, and other client devices. The most basic functionality is that the user will always have access to all her normal applications, such as spreadsheets, email, presentations and an Internet browser. Among the more advanced software are Virtual Reality and Augmented Reality.
Virtual reality is a totally computer-generated environment, like those seen in video games. Augmented reality is much closer to the real world: the basic layout of an augmented reality system is the real world, to which a computer adds graphics, sounds, feelings and smells. The easiest way to understand augmented reality might be to picture yourself walking down a street. You are wearing something that looks like a normal pair of glasses but works as a computer screen. In your pocket is a small computer, the size of a normal mobile phone. The street you follow is lit up by a green field, and an animated arrow points in the direction you should go. Every five meters has its own arrow, and floating in the air above is the estimated number of meters until you reach your goal.
If you turn your head down to the right, you can see a couple of computer windows floating in the air. The one currently most interesting to you contains a map, indicating where you are now, where you came from, and where you are going. The path you should take is marked by a green line.
You start to feel the need for coffee and look at another window down to the right. It displays your calendar, which shows that the meeting starts in 15 minutes. That gives you time to pick up something to drink on the way. You make a couple of choices in the map window menu with your right hand, and the map shows where the four nearest cafés can be found. You are lucky: one of them is just two blocks down the road, so you don’t even need to change the path to your meeting. You open the café’s web page and order a double espresso to be ready in four minutes, and while you are walking the payment is automatically made with your credit card.
You reach the café a couple of minutes later and see four coffees on the bar. Your name floats above one of them.
The hardware vision contains three different options: what is available today, tomorrow and in the future. Tomorrow is defined as 1-10 years from now; this kind of technology is probably already in labs all over the world and is very likely possible to build. The future hardware system is beyond 10 years, and something the author would like the future to make available for us.
This chapter covers the hardware that is available today, either off the shelf or by building it yourself.
Just as monitors allow us to see text and graphics generated by computers, head-mounted displays (HMDs) will enable us to view graphics and text created by augmented-reality systems. So far, there haven’t been many HMDs created specifically with augmented reality in mind. Most of the displays, which resemble some type of skiing goggles, were originally created for virtual reality. There are two basic types of HMDs:

* video see-through
* optical see-through
Video see-through displays block out the wearer’s surrounding environment, using small video cameras attached to the outside of the goggles to capture images. On the inside of the display, the video image is played in real-time and the graphics are superimposed on the video. One problem with the use of video cameras is that there is more lag, meaning that there is a delay in image-adjustment when the viewer moves his or her head.
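The superimposing step in a video see-through display is essentially alpha blending: each overlay pixel is mixed with the camera pixel underneath it. A minimal sketch in plain Python, with a pixel format and function names of my own invention rather than from any particular display:

```python
def blend_pixel(camera, graphic, alpha):
    """Alpha-blend one graphics pixel over one camera pixel.
    Each pixel is an (r, g, b) tuple; alpha runs from 0.0
    (fully transparent) to 1.0 (fully opaque)."""
    return tuple(round(alpha * g + (1 - alpha) * c)
                 for g, c in zip(graphic, camera))

def composite(frame, overlay):
    """Superimpose an overlay on one video frame.
    frame: rows of (r, g, b) camera pixels.
    overlay: same shape, rows of (r, g, b, a) graphics pixels."""
    return [[blend_pixel(c, o[:3], o[3]) for c, o in zip(frow, orow)]
            for frow, orow in zip(frame, overlay)]
```

A real system would run this per frame on full images with a library such as OpenCV or numpy; the per-pixel version is only meant to show the arithmetic behind the superimposed graphics.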
Most companies who have made optical see-through displays have gone out of business. Sony makes a see-through display that some researchers use, called the Glasstron. Blair MacIntyre, director of the Augmented Environments Lab at Georgia Tech, believes that Microvision’s Virtual Retinal Display holds the most promise for an augmented-reality system. This device actually uses light to paint images onto the retina by rapidly moving the light source across and down the retina. The problem with the Microvision display is that it currently costs about $10,000. MacIntyre says that the retinal-scanning display is promising because it has the potential to be small. He imagines an ordinary-looking pair of glasses that will have a light source on the side to project images onto the retina.
Copied from howstuffworks
The basic idea of augmented reality is to superimpose graphics, audio and other sense enhancements over a real-world environment in real-time. Sounds pretty simple. Besides, haven’t television networks been doing that with graphics for decades? Well, sure — but all television networks do is display a static graphic that does not adjust with camera movement. Augmented reality is far more advanced than any technology you’ve seen in television broadcasts, although early versions of augmented reality are starting to appear in televised races and football games, such as Racef/x and the superimposed first down line, both created by Sportvision. These systems display graphics for only one point of view. Next-generation augmented-reality systems will display graphics for each viewer’s perspective.
Augmented reality is still in an early stage of research and development at various universities and high-tech companies. Eventually, possibly by the end of this decade, we will see the first mass-marketed augmented-reality system, which one researcher calls “the Walkman of the 21st century.” What augmented reality attempts to do is not only superimpose graphics over a real environment in real-time, but also change those graphics to accommodate a user’s head and eye movements, so that the graphics always fit the perspective.
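Making the graphics “always fit the perspective” comes down to re-projecting each world-anchored 3D point to screen coordinates every time the head pose changes. A pinhole-camera sketch, with a made-up focal length and screen center, and only the head-turn (yaw) part of the rotation handled:

```python
import math

def project(point, cam_pos, yaw, f=500.0, cx=320.0, cy=240.0):
    """Project a 3D world point to 2D screen coordinates for a viewer
    at cam_pos whose heading is yaw (radians). f, cx and cy are
    placeholder intrinsics, not values from any real display.
    Camera axes: x right, y up, z forward."""
    # Translate the point into the viewer's coordinate frame
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    # Rotate around the vertical axis to undo the viewer's heading
    xc = math.cos(yaw) * dx - math.sin(yaw) * dz
    zc = math.sin(yaw) * dx + math.cos(yaw) * dz
    yc = dy
    if zc <= 0:
        return None  # point is behind the viewer, nothing to draw
    # Pinhole projection: the graphic slides across the screen
    # as the head turns, keeping it pinned to the world point
    return (cx + f * xc / zc, cy - f * yc / zc)
```

A full system would also handle pitch, roll and lens distortion, and would feed `cam_pos` and `yaw` from the head tracker on every frame; this only shows why tracked head pose is the crucial input.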
Instant information – Tourists and students could use these systems to learn more about a certain historical event. Imagine walking onto a Civil War battlefield and seeing a re-creation of historical events on a head-mounted, augmented-reality display. It would immerse you in the event, and the view would be panoramic.
Send email to these companies to get more information.
http://www.lumusvision.com/ (Eyeglasses, good looking)
http://www.microvision.com/wearable.html (laser glasses)
http://www.kopin.com/low-volume-pricing/ (Creator of eyeglass lenses)
Devices that can replace the keyboard and mouse.
I used OpenCV to program this, with the CAMSHIFT algorithm to track my hand. To segment the hand I depended on skin color segmentation, with online training: before the program starts, the user places the hand in a predefined location and the program picks up the skin color. Alternatively, you can use OpenCV’s face detection to detect the face and pick the skin color from it.
To identify the pointed finger and the mouse click (a thumb movement), I extracted the blob of my hand and checked its shape. It was a crude algorithm; a neural network or similar would work better.
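A stdlib-only sketch of the two steps described above: thresholding the image against the sampled skin color, then a crude shape check on the resulting blob. The tolerance value and the aspect-ratio heuristic are placeholders of mine; the actual program ran OpenCV’s CAMSHIFT on top of the segmentation:

```python
def skin_mask(image, sample, tol=40):
    """Mark pixels whose color is close to the sampled skin color.
    image: rows of (r, g, b) pixels; sample: the (r, g, b) skin color
    picked up during the online-training step. tol is a made-up
    per-channel tolerance, not a tuned value."""
    return [[all(abs(p - s) <= tol for p, s in zip(px, sample))
             for px in row] for row in image]

def blob_aspect(mask):
    """Crude shape check: height/width of the bounding box of the
    skin pixels. A tall, narrow box could indicate a pointed finger."""
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not xs:
        return None  # no skin pixels found
    h = max(ys) - min(ys) + 1
    w = max(xs) - min(xs) + 1
    return h / w
```

Real skin segmentation usually works in HSV or YCrCb rather than raw RGB, and CAMSHIFT then tracks the mask’s color histogram from frame to frame; this only illustrates the idea of the crude blob-shape test.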