"Superior performance in the detection challenge requires pushing beyond annotating an image with a 'bag of labels' - a model must be able to describe a complex scene by accurately locating and identifying many objects in it," explains Google. Here's are some examples of object detection:
"This effort was accomplished by using the DistBelief infrastructure, which makes it possible to train neural networks in a distributed manner and rapidly iterate. At the core of the approach is a radically redesigned convolutional network architecture," mentions Google. The goal is to train large models for deep neural networks.
Last year, Google used the DistBelief infrastructure to improve some of the models used by the winning team at ImageNet, and implemented the algorithms in Google+ Photos Search and later in Google Drive's search engine. Google automatically annotates images, letting you search for things like "car" or "laptop" and find the images that include them.
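The search feature described above amounts to an inverted index from model-predicted labels to images. This is a minimal sketch of that idea, assuming a hypothetical annotation dictionary; it is not Google's actual implementation.

```python
# Hypothetical sketch of label-based photo search: each image carries
# model-predicted labels, and a query matches any image whose label
# set contains the query term.

from collections import defaultdict

def build_index(annotations):
    # annotations: {image_name: [predicted labels]} (assumed format)
    index = defaultdict(set)
    for image, labels in annotations.items():
        for label in labels:
            index[label.lower()].add(image)
    return index

annotations = {
    "img_001.jpg": ["car", "street"],
    "img_002.jpg": ["laptop", "desk"],
    "img_003.jpg": ["car", "laptop"],
}
index = build_index(annotations)
print(sorted(index["car"]))  # ['img_001.jpg', 'img_003.jpg']
```

Detection goes a step further: instead of one label set per image, each label comes with a bounding box saying where in the image the object is.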
Google promises to use the latest achievements to improve "Google products such as photo search, image search, YouTube, self-driving cars, and any place where it is useful to understand what is in an image as well as where things are".