Google is working on Google Visual Search, a mobile application that lets users take a picture of a location from their Android-powered smartphone and trigger a Google search that pulls up information associated with the image.
"Imagine you're a tourist and you arrive at this place and you would like to know more about it, all you will have to do is take a shot of the [Santa Monica pier] sign and you see we recognized this as the Santa Monica pier," [Google Product Manager Hartmut] Neven said.
However, the technology, known internally as Google Goggles, didn't pass muster when Google tested it with a focus group in August. The company's engineers are working out the bugs and building out the immense database required to propel the technology.
Neven Vision, a company acquired by Google in 2006, had several patents on mobile visual search and object recognition, including a patent for an "image-based search engine for mobile phones with camera":

"The present invention may be embodied in an image-based information retrieval system that includes a mobile telephone and a remote server. The mobile telephone has a built-in camera, a recognition engine for recognizing an object or feature in an image from the built-in camera, and a communication link for requesting information from the remote server related to a recognized object or feature."

"In more detailed features of the invention, the object may be an advertising billboard and the related information may be a web page address. Alternatively, the object may be a car and the related information may be a car manual. Also, the object may be a product and the related information may be a payment confirmation. Further, the object may be a book and the related information may be an audio stream."

Last year, Google launched an iPhone app that allowed you to do a Google search using your voice. Obtaining some search results just by uploading a picture brings Google even closer to the real world.
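To make the patent's architecture concrete, here is a minimal sketch in Python of the flow it describes: a recognition engine on the phone labels the object in a photo, and a communication link asks a remote server for the related information. Everything here is hypothetical, not Google's implementation: the names recognize_object and fetch_related_info, the RELATED_INFO table, and the recognition result are stand-ins invented for illustration.

# Hypothetical sketch of the patent's image-based retrieval flow:
# a client-side recognition engine labels the object in a photo,
# then the client asks a remote server for related information.

from dataclasses import dataclass

# Example mapping taken from the patent's "detailed features": each
# recognized object class maps to a kind of related information.
RELATED_INFO = {
    "advertising_billboard": "web page address",
    "car": "car manual",
    "product": "payment confirmation",
    "book": "audio stream",
}

@dataclass
class RecognitionResult:
    label: str         # e.g. "book"
    confidence: float  # 0.0 - 1.0

def recognize_object(image_bytes: bytes) -> RecognitionResult:
    """Stand-in for the phone's built-in recognition engine.

    A real engine would match image features against known objects;
    here we return a fixed answer so the flow is runnable.
    """
    return RecognitionResult(label="book", confidence=0.92)

def fetch_related_info(label: str) -> str:
    """Stand-in for the communication link to the remote server.

    A real client would send a network request with the recognized
    label and parse the response; we consult a local table instead.
    """
    return RELATED_INFO.get(label, "no related information found")

if __name__ == "__main__":
    photo = b"\x89fake-image-bytes"   # pretend camera output
    result = recognize_object(photo)
    if result.confidence > 0.5:       # only act on confident matches
        print(result.label, "->", fetch_related_info(result.label))

Under these assumptions, pointing the camera at a book would print "book -> audio stream"; the patent's point is that the object class, not the raw image, is what the server needs to pick the right kind of answer.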
Image licensed as Creative Commons Attribution by Mac Funamizu.
Update: Google Goggles is now available in Google Labs. If you have an Android phone, go to the Android Market app and search for "Google Goggles".
Personally, I think it would be incredible if it could identify scientific classifications, such as a tree genus, a plant by its leaf, an animal by shape and colors, or a rock by its patterns. I realize that might be somewhat far-fetched, but imagine what it'd do for your iPhone (etc.).

It'd just need enough photos, a library, etc. It'd be an incredible aid.
Another step closer to the tricorder
There's a Google BlackBerry app that lets you search with your voice too.
I have seen this iPhone application in use and it works very well!
Check out apps like PlinkArt, SnapTell, and others. They already do this for some types of objects: PlinkArt does it for paintings, for example, and SnapTell does it for books.
It is a real implementation of spatial data mining. It is a tough task for an organisation trying to develop such a database. It is only possible with real navigation data from satellites. Because the world is dynamic and real scenes change, continuous updating is costly and nearly impossible throughout the world.
The concept is good, but it will take time to develop and implement for the whole globe. It may be possible with cluster analysis.
Take a picture of a painting, see the resulting information, and have it play a recording by the artist or some expert. Take a picture of a machine part, get the measurements, and have a supplier send you a price and delivery time, great for mechanics.
Is this US only? I can't find it in the UK Market.
Yet another app that has similar features: "eyeBuy Visual Search" for the iPhone (Goggles is not available on the iPhone). It covers more than SnapTell and others like it.