From Circle to Commerce
When Google introduced Circle to Search in January 2024, it debuted as an elegant solution to a persistent problem: the friction involved in searching for something you see on your phone screen. Rather than taking a screenshot, switching to a browser, opening Google Lens, and uploading the image, Circle to Search allowed users to simply long-press the home button and draw a circle around whatever they wanted to search — a piece of text, an image, a product, a face — without leaving the app they were in. The feature has since expanded to tens of millions of Android devices, and Google has cited it as one of the most successful AI-powered features it has shipped in the past two years.
Now Google is deepening Circle to Search's capabilities in a direction that will significantly expand its utility and, not coincidentally, its commercial potential. A set of new updates adds what Google is calling visual intelligence features: the ability to recognize and search for specific fashion items, home decor products, and consumer goods from within images — and to surface shoppable results that allow users to find where those items are sold, at what price, and in what configurations. Combined with a new ability to see the whole picture in visual search — understanding the spatial and contextual relationships between objects in a scene — the updates represent a significant expansion of what Circle to Search can do.
Fashion Search: The Lead Use Case
The fashion identification capability is the most immediately user-facing of the new features. A user can circle a piece of clothing in an Instagram post, a Pinterest pin, a website image, or even a photo taken with their own camera, and receive results that identify the specific item (when it is a recognizable product), show visually similar items from multiple retailers, and surface current pricing and availability. The system uses Google's visual embedding models — the same technology that underlies Google Lens's product search — now integrated natively into the Circle to Search interface and expanded to handle partial views, varying lighting conditions, and items that are partially obscured.
The practical use case is one that fashion-conscious consumers will recognize immediately: seeing something someone is wearing, wanting to find it or something similar, and facing the tedious process of trying to describe it in text search terms. Circle to Search for fashion collapses that friction entirely. The accuracy of identification varies with how distinctive the item is — a specific designer piece with recognizable branding or details is more easily identified than a generic solid-color t-shirt — but Google's extensive training data across billions of product images gives the system a broad recognition base.
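The embedding-based matching described above can be illustrated with a minimal sketch: encode the circled region and a product catalog into vectors, then rank catalog items by cosine similarity. This is not Google's implementation — the `embed` function here is a stand-in random projection, and the 512-dimensional size and toy catalog are invented for illustration.

```python
import numpy as np

def embed(image_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a learned visual encoder: a fixed random projection
    followed by L2 normalization. A real system would use a trained model."""
    rng = np.random.default_rng(0)
    projection = rng.standard_normal((image_pixels.size, 512))
    vec = image_pixels.ravel() @ projection
    return vec / np.linalg.norm(vec)

def top_matches(query_vec: np.ndarray, catalog_vecs: np.ndarray, k: int = 3):
    """Rank catalog items by cosine similarity to the circled region.
    Embeddings are unit-norm, so the dot product equals cosine similarity."""
    sims = catalog_vecs @ query_vec
    ranked = np.argsort(-sims)[:k]
    return [(int(i), float(sims[i])) for i in ranked]

# Toy data: one "circled region" and a small catalog of 100 product images.
rng = np.random.default_rng(1)
query = embed(rng.random((8, 8)))
catalog = np.stack([embed(rng.random((8, 8))) for _ in range(100)])
results = top_matches(query, catalog)
print(results)
```

In practice the catalog side would be precomputed and served from an approximate nearest-neighbor index rather than scanned with a dense matrix product, but the ranking principle is the same.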
Home Decor and Product Recognition
The same visual recognition capabilities extend to home decor and consumer electronics, categories in which users frequently encounter items in photographs — editorial content, social media posts, real estate listings — and want to find them for purchase. Identifying a specific lamp, a particular rug pattern, or a model of television from a room photograph has historically been a difficult problem for image search systems because these items often appear at angles, under varied lighting, and in partial views that make exact identification challenging.
Google's updated models handle these scenarios more gracefully by reasoning about the object within its scene context rather than trying to match it as an isolated product image. The system understands that an object in the background of a room photograph is likely furniture or decor, brings that prior into the recognition process, and surfaces results that account for the viewing angle and lighting conditions rather than requiring a clean catalog-style image for accurate identification.
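One simple way to think about bringing a scene prior into recognition, as described above, is to reweight raw visual-similarity scores by how likely each product category is given the scene. The sketch below is a conceptual illustration only — the category names, prior weights, and scores are invented, and Google's actual models fold context in during inference rather than as a post-hoc multiplication.

```python
def rerank_with_context(candidates, scene_prior):
    """Combine visual similarity with a scene-context prior.

    candidates: list of (label, category, visual_score) tuples,
                with visual_score in (0, 1).
    scene_prior: assumed P(category | scene), e.g. a living-room photo
                 favors furniture and decor over apparel.
    Returns (label, combined_score) pairs, best first.
    """
    scored = [
        (label, visual * scene_prior.get(category, 0.01))
        for label, category, visual in candidates
    ]
    return sorted(scored, key=lambda item: -item[1])

# Hypothetical prior for a room photograph.
living_room_prior = {"furniture": 0.5, "decor": 0.35, "apparel": 0.05}

candidates = [
    ("arc floor lamp", "decor", 0.62),
    ("grey hoodie", "apparel", 0.70),   # visually strong match, unlikely in context
    ("mid-century sofa", "furniture", 0.55),
]

ranked = rerank_with_context(candidates, living_room_prior)
print(ranked[0][0])
```

Note how the hoodie, despite the highest raw visual score, drops to the bottom once the scene prior is applied — the same effect the article describes when an object in the background of a room photo is assumed to be furniture or decor.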
The Commercial Dimension
It would be naive to analyze these updates without acknowledging their commercial dimension. Google's core advertising business depends on connecting user intent with commercial opportunities, and visual search represents an enormous untapped surface area for that connection. When a user circles a product in an image, that is an expression of purchase intent that is more specific and actionable than most text searches. The ability to immediately surface shoppable results from that intent — and to do so inside apps where users are already engaged rather than requiring them to navigate to Google — is enormously valuable from an advertising and commerce perspective.
Google Shopping has been a significant revenue contributor for years, and the integration of Circle to Search with shopping results essentially turns any image on an Android device into a potential commerce touchpoint. The company is careful to present this as a user benefit — finding what you want easily — and for most use cases, that framing is accurate. But the alignment between user convenience and Google's commercial interests is not coincidental, and it is worth noting that the visual AI improvements that most directly enable commerce are the ones receiving the most prominent placement in Google's product announcements.
Looking Forward
The Circle to Search updates are part of a broader evolution of Google's on-device AI capabilities. As Gemini Nano and related models become capable of running increasingly sophisticated tasks directly on mobile hardware, features that previously required sending data to Google's servers can be executed locally, with implications for both latency and privacy. Google has indicated that some Circle to Search visual processing will move toward on-device execution as model efficiency improves, which would allow the feature to work offline and would reduce the data transfer associated with visual searches. For now, the combination of cloud intelligence and on-device execution gives Circle to Search a capability profile that is difficult for competitors to match without access to Google's scale of training data and infrastructure.
This article is based on reporting by the Google AI Blog.

