
AI assistants are evolving with new sensory capabilities. On Monday, OpenAI unveiled an upgraded ChatGPT model that, among other enhancements, can see, hear, and speak through a smartphone. Not to be outdone, Google is rolling out a competing assistant with similar capabilities.

At the I/O developer conference on Tuesday, DeepMind CEO Demis Hassabis showcased a prototype of Google’s advanced AI assistant. The assistant can see not only through a user’s phone camera but also through other devices, such as smart glasses. Google said the new assistant builds on Gemini, its current chatbot, and that its features will be integrated into the Gemini app and web platforms later this year.

This innovation is part of Google DeepMind’s Project Astra, which aims to develop “a universal AI agent” for everyday tasks. “Imagine a future where you have an expert assistant available through your phone or innovative devices like smart glasses,” Hassabis told an audience of thousands in Mountain View, California.

A demonstration video showed a person conversing with an AI agent via their phone while strolling through an office. They pointed the camera at a container of crayons and requested a “creative alliteration.”

“Creative crayons color cheerfully,” the AI replied. “They certainly craft colorful creations.” The user continued chatting with the AI, then realized they had misplaced their glasses and sought assistance. “They’re on the desk near a red apple,” the bot responded.

When the user donned those glasses, the AI assistant could see through them too, identifying an illustration of Schrödinger’s cat on a whiteboard.

It remains unclear whether the glasses are a new product Google plans to launch. The augmented reality glasses shown in the demo resembled neither Google Glass nor the typical, bulkier headsets on the market.

“An assistant of this kind must understand and interact with our complex and ever-changing world just as we do,” Hassabis said at the conference. “It must absorb and retain what it sees to understand context and act accordingly. It must also be proactive, teachable, and personalized, enabling natural and seamless conversations.”

That’s the objective of Project Astra, he added, noting that it is making “significant progress.”

While Google’s prototype AI assistant is available for demonstration at the I/O conference, it will likely take some time before this technology becomes accessible to the general public.