Monday, November 25, 2024
Technology

Google’s A.I.-powered ‘multisearch,’ which combines text and images in a single query, goes global

Amid other A.I.-focused announcements, Google today shared that its newer “multisearch” feature would now be available to global users on mobile devices, anywhere that Google Lens is already available. The search feature, which allows users to search using both text and images at the same time, was first introduced last April as a way to modernize Google search to take better advantage of the smartphone’s capabilities. A variation on this, “multisearch near me,” which targets searches to local businesses, will also become globally available over the next few months, as will multisearch for the web and a new Lens feature for Android users.

As Google previously explained, multisearch is powered by A.I. technology called Multitask Unified Model, or MUM, which can understand information across a variety of formats, including text, photos, and videos, and then draw insights and connections between topics, concepts, and ideas. Google put MUM to work within its Google Lens visual search features, where it allows users to add text to a visual search query.

“We redefined what we mean to search by introducing Lens. We’ve since brought Lens directly to the search bar and we continue to bring new capabilities like shopping and step-by-step homework help,” Prabhakar Raghavan, Google’s SVP in charge of Search, Assistant, Geo, Ads, Commerce and Payments products, said at a press event in Paris.

For example, a user could pull up a photo of a shirt they liked in Google Search, then ask Lens where they could find the same pattern on a different type of apparel, like a skirt or socks. Or they could point their phone at a broken part on their bike and type a query like “how to fix” into Google Search. This combination of words and images can help Google process and understand search queries it couldn’t have previously handled, or that would have been more difficult to input using text alone.

The technique is most helpful with shopping searches, where you can find clothing you like, but in a different color or style. Or you can take a photo of a piece of furniture, like a dining set, to find items that match, like a coffee table. In multisearch, users can also narrow and refine their results by brand, color, and visual attributes, Google said.

The feature was made available to U.S. users last October, then expanded to India in December. As of today, Google says multisearch is available to all users globally on mobile, in all languages and countries where Lens is available.

The “multisearch near me” variation will also soon expand, Google said today.

Google announced last May that it would be able to direct multisearch queries to local businesses (aka “multisearch near me”), returning search results of the items users were looking for that matched inventory at local retailers or other businesses. For instance, in the case of the bike with the broken part, you could add the text “near me” to a search query with a photo to find a local bike shop or hardware store that had the replacement part you needed.

This feature will roll out to all languages and countries where Lens is available over the next few months, Google said. Multisearch will also expand beyond mobile devices, with support for multisearch on the web arriving in the coming months.

In terms of new search products, the search giant teased an upcoming Google Lens feature, noting that Android users would soon be able to search what they see in photos and videos across apps and websites on their phone, while still remaining in the app or on the website. Google is calling this “search your screen,” and said it will also be available wherever Lens is offered.

Google shared a new milestone for Google Lens, too, noting that people now use the technology more than 10 billion times per month.
