
Google said in its blog post that "this is made possible by our latest advancements in artificial intelligence, which is making it easier to understand the world around you in more natural and intuitive ways."


Google multisearch is Google's latest search feature that lets you search by image and then add text to that specific image search. Multisearch lets you use your phone's camera to search by an image, powered by Google Lens, and then add a text query on top of that image search. Google says this lets searchers "go beyond the search box and ask questions about what you see."

To try it, open the Google app on Android or iOS and tap the Google Lens camera icon on the right side of the search box. Point the camera at something nearby, use a photo from your camera roll, or even take a picture of something on your screen. Then swipe up on the results to bring them up and tap the "+ Add to your search" button. In that box you can add text to your photo query. Google will then use both the image and the text query to show you visual search results. You should be able to try it yourself in English, in the United States.

Here is a static image of the flow of how this works:

Here is a GIF of this in action:
