
Google I/O 2025: Live updates on Gemini, Android XR, Android 16 updates and more


Ian Carlos Campbell

Google I/O has kicked off in Mountain View, California, and the opening keynote delivered a laundry list of big AI announcements. A parade of Google execs, starting with CEO Sundar Pichai, took the stage to detail how the search giant is weaving its Gemini artificial intelligence model throughout its entire catalog of online services, as the company continues to vie for supremacy in the AI space with rivals like OpenAI, Microsoft and a host of others. Among the big announcements today: a new AI-powered movie creator called Flow; virtual clothing try-ons enabled by photo uploads; real-time AI-powered translation coming to Google Meet; considerably more capable computer vision features for Project Astra; a big upgrade to Google's Veo video generation models; and a live demo of real-time translation via Google's XR glasses. See everything announced at Google I/O for a full recap.


The initial keynote has ended, but a developer-centric keynote will follow at 4:30PM ET / 1:30PM PT. If you want a recap of the event as it happened, scroll down to read our full liveblog, anchored by our on-the-ground reporter Karissa Bell and backed up by off-site members of the Engadget staff.

You can also rewatch Google's keynote on the company's YouTube channel. Note that the company plans to hold breakout sessions through May 21 on a variety of topics relevant to developers.

Live updates

  • Stick around for Karissa's hands-on impressions of some of the things Google announced today, including those new XR glasses!

  • Thanks to everyone who tuned in for this wild AI ride with us.

    We have more stories about everything Google announced today on Engadget, so please check us out there.

  • Google’s AI Mode lets you virtually try clothes on by uploading a single photo

    A screenshot of the Shopping interface in AI Mode in Search, showing the results in response to a prompt

    Google

    As part of its announcements for I/O 2025 today, Google shared details on some new features that would make shopping in AI Mode more novel. It’s describing the three new tools as being part of its new shopping experience in AI Mode, and they cover the discovery, trying on and checkout parts of the process. These will be available “in the coming months” for online shoppers in the US.

    The first update helps when you're looking for something specific to buy. The examples Google shared were searches for travel bags or a rug that matches the other furniture in a room. By combining Gemini's reasoning capabilities with its Shopping Graph database of products, Google AI will determine from your query that you'd like lots of pictures to look at and pull up a new image-laden panel.

    Read more: Google’s AI Mode lets you virtually try clothes on by uploading a single photo

  • OK, now that the AI deluge has slowed down, I can see why people are excited about this technology. But at the same time, it almost feels like Google is throwing features against a wall to see what sticks. There is A LOT going on, and trying to cover a million different applications in a two-hour presentation gets messy fast.

  • Aaaaaand it looks like that’s it. Whew!

  • Wi-Fi has been slipping here since that AR glasses demo, so I'm a bit late here, but it's worth lingering on that for a moment. It's been more than a decade since Google first tried augmented reality glasses with Google Glass. The tech (and our perception of it) has advanced so much since then, and it's great to see Google trying something so ambitious. So far this demo feels on par with what Meta showed with its Orion prototype last year (with a few bumps).

  • Now Pichai is recounting a story about riding in a driverless Waymo with his father and reflecting on how powerful this technology is.

  • With the show wrapping, what did we think of the announcements Google made today?

  • FireSat just announced, which can detect fires as small as 270 square feet.


  • Pichai says Google is building something new called FireSat. It's designed to more closely monitor wildfire outbreaks, which is especially important to folks in places like California.

  • Google’s Veo 3 AI model can generate videos with sound

    Google


    As part of this year's announcements at its I/O developer conference, Google has revealed its latest media generation models. Most notable, perhaps, is Veo 3, the first iteration of the model that can generate videos with sound. It can, for instance, create a video of birds with audio of them singing, or a city street with the sounds of traffic in the background. Google says Veo 3 also excels in real-world physics and in lip syncing. At the moment, the model is primarily available to Gemini Ultra subscribers in the US within the Gemini app and to enterprise users on Vertex AI. (It's also available in Flow, Google's new AI filmmaking tool.)

    Read more: Google’s Veo 3 AI model can generate videos with sound

  • Everything announced at Google I/O today.


  • Sundar Pichai is back, ostensibly to wrap up the show.

    Apparently Gemini got mentioned more times than AI itself during the keynote.

  • Google says it's working hard to create a platform for these types of glasses, with retail devices available as early as later this year.

    Gentle Monster and Warby Parker are slated to be the first glasses makers to build something on the Android XR platform.


  • It seems Google has tempted the live demo gods one too many times, because one pair of the glasses got confused and thought Izadi was speaking Hindi.

  • Live chat translation using the glasses.


  • Google Search Live will let you ask questions about what your camera sees

    Google I/O 2025

    One of the new AI features that Google has announced for Search at I/O 2025 will let you have a real-time, back-and-forth conversation with Search about what your camera sees. Google says more than 1.5 billion people use visual search on Google Lens, and it's now taking the next step in multimodality by bringing Project Astra's live capabilities into Search. With the new feature, called Search Live, you can simply point your camera at a difficult math problem and ask it to help you solve it, or to explain a concept you're having difficulty grasping.

    Read more: Google Search Live will let you ask questions about what your camera sees

  • Now Izadi is doing what he calls a risky demo by attempting to speak in Farsi to another presenter in real time with translation.

  • Of course, the glasses can take photos too.

Update, May 19 2025, 1:01PM ET: This story has been updated to include details on the developer keynote taking place later in the day, as well as tweak wording throughout for accuracy with the new timestamp.

Update, May 20 2025, 9:45AM ET: This story has been updated to include a liveblog of the event.

Update, May 20 2025, 2:08PM ET: This story has been updated to include an initial set of headlines coming out of the I/O keynote.

Update, May 20 2025, 4:20PM ET: Added additional headlines and links based on the flow of news from the initial keynote.


