Multisearch could make Google Lens your search sensei

Google searches are about to get even more precise with the introduction of multisearch, which combines text and image queries through Google Lens.

After performing an image search through Lens, you'll be able to ask additional questions or add parameters to narrow the results. Google's example use cases include finding clothes with a particular pattern in different colors, or pointing your camera at a bike wheel and typing "how to fix" to surface guides and bike repair videos. According to Google, multisearch currently works best for shopping results.
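To picture how a combined image-and-text query can work under the hood, here's a minimal sketch using the open-source CLIP model via the sentence-transformers library. This is not Google's MUM or its actual multisearch pipeline, just a naive baseline for the same idea; the image file name and candidate captions are made-up placeholders.

```python
# Minimal sketch of multisearch-style retrieval: embed an image and a text
# refinement into the same vector space, fuse them, and rank candidates.
# NOT Google's implementation; "bike_wheel.jpg" and the captions are stand-ins.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # joint image/text embedding space

# Embed the photo (e.g., a snapshot of a bike wheel) and the text refinement.
image_emb = model.encode(Image.open("bike_wheel.jpg"))  # hypothetical file
text_emb = model.encode("how to fix")

# Naive fusion: average the two embeddings into one combined query vector.
query = (image_emb + text_emb) / 2

# Rank stand-in search results by cosine similarity to the fused query.
candidates = [
    "Guide: repairing a buckled bicycle wheel",
    "Road bike wheelset, 700c, carbon rims",
    "How to true a bike wheel at home (video)",
]
scores = util.cos_sim(query, model.encode(candidates))[0]
for caption, score in sorted(zip(candidates, scores), key=lambda p: -float(p[1])):
    print(f"{float(score):.3f}  {caption}")
```

In this toy version, the repair guides should outrank the wheelset listing because the text refinement pulls the fused query toward "fixing" content, which is roughly the behavior Google describes for the bike-wheel example.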

The company is rolling out the beta version of this feature to US users of the Google app on Android and iOS on Thursday. Simply tap the camera icon next to the microphone icon or open a photo from your gallery, select what you want to search for, and swipe up on your results to find an "Add to Search" button, where you can enter additional text.

This announcement is the public trial of a feature the search giant has been teasing for nearly a year: Google first discussed it when introducing MUM at Google I/O 2021, then provided more details in September 2021. MUM, or Multitask Unified Model, is Google's new multimodal AI model for search.

MUM succeeds Google's previous search AI model, BERT (Bidirectional Encoder Representations from Transformers). According to Google, MUM is roughly a thousand times more powerful than BERT.


Google Lens multisearch (Image credit: Google)

Analysis: Will it be worth anything?

It's still in beta, but Google certainly hyped MUM when it was announced. From what we've seen, Lens is already pretty good at identifying objects and translating text. The AI improvements should add another dimension, though, and might make it a more useful tool for finding information about the exact thing you're looking at, rather than general information about something like it.

However, this raises questions about how precisely you'll be able to specify what you want. For example, if you see a sofa with a striking pattern but would rather have that pattern on a chair, will you reasonably be able to find it? Will results come from a physical store or an online storefront like Wayfair? Google searches often surface inaccurate inventory for nearby physical stores; will that improve too?

We have a lot of questions, but they'll probably only be answered once people start using multisearch. The nature of AI is to improve with use, after all.
