Google is adding live video and screen-sharing features to its Gemini AI assistant, allowing users to interact with the assistant in real time.

With this update, users can use live video to show Gemini their surroundings and get real-time responses. They can also share their screen with the AI assistant to ask questions about what they’re viewing or browsing using visual context.

The company announced the update at Mobile World Congress (MWC) in Barcelona, having first hinted at Gemini's ability to "see" in August 2024. Google says the features will start rolling out later this month to Gemini Advanced subscribers on Android devices as part of the Google One AI Premium plan.

At MWC, Google demonstrated the new features. The video showed a user shopping for baggy jeans and asking Gemini what clothing would pair well with them. Instead of relying on text or voice descriptions, the AI analyzed the product directly from the screen and provided style suggestions.

The updates align with Google's Project Astra, an effort focused on making AI assistants more interactive and multimodal. While Gemini already supports text and voice inputs, these new features add a visual element.
