At the end of Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company said “AI” 121 times. That was essentially the gist of Google’s two-hour keynote — putting AI into every Google app and service used by more than two billion people around the world. Here are all the major updates Google announced at the event.
Gemini 1.5 Flash and Gemini 1.5 Pro updates
Google announced a new AI model called Gemini 1.5 Flash, which is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, the company's smallest model that runs natively on devices. Google says it created Flash because developers wanted a lighter, cheaper model than Gemini Pro for building AI-powered apps and services, one that still retains some of the things that set Gemini Pro apart from competing models, such as a context window of one million tokens. Later this year, Google will increase Gemini's context window to two million tokens, meaning it will be able to simultaneously process two hours of video, 22 hours of audio, more than 60,000 lines of code, or more than 1.4 million words.
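For a sense of what that means for developers in practice, here is a minimal sketch of calling Gemini 1.5 Flash through Google's `google-generativeai` Python SDK; the API key placeholder and the prompt are illustrative assumptions, not details from the keynote.

```python
# Minimal sketch: calling Gemini 1.5 Flash with the google-generativeai
# Python SDK (pip install google-generativeai). The API key and prompt
# below are illustrative placeholders, not details from the keynote.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder

# Flash trades some capability for lower latency and cost; swapping in
# "gemini-1.5-pro" here would select the larger model instead.
model = genai.GenerativeModel("gemini-1.5-flash")

# The long context window means a single prompt can carry very large
# inputs, e.g. a long transcript or an entire codebase.
response = model.generate_content(
    "Summarize the key decisions in this meeting transcript: ..."
)
print(response.text)
```

The call shape is the same for both models, which is why Google frames the choice as a speed-and-cost-versus-capability tradeoff rather than a different API.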
Project Astra
Google demonstrated Project Astra, an early version of a universal AI-powered assistant that Google DeepMind CEO Demis Hassabis described as Google's version of an AI agent that "can be helpful in everyday life."
In the video, which Google says was shot in a single take, an Astra user moves around Google's London office with their phone held up, pointing the camera at various things (a speaker, some code on a whiteboard, the view out a window) and having a natural conversation with the app about what it sees. In one of the most impressive moments of the video, the assistant correctly tells the user where they left their glasses earlier, even though the glasses had never come up in the conversation.
The video ends with a twist: when the user finds and puts on the missing glasses, we learn that they have an onboard camera system and can use Project Astra to carry on a seamless conversation with the user, perhaps hinting that Google is working on a competitor to Meta's Ray-Ban smart glasses.
Ask Photos
Google Photos was already smart when it came to searching for specific photos or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, when the feature rolls out in the next few months, you’ll be able to ask Google Photos a complex question like “show me the best photo from every national park I’ve visited.” Google Photos will use GPS data as well as its own judgment of what is “best” to provide you with options.
Veo and Imagen 3
Google's new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google's answer to OpenAI's Sora. Google said it can produce "high-quality" 1080p videos that "can last more than a minute" and can understand cinematic concepts like a timelapse.

Meanwhile, Imagen 3 is a text-to-image generator that Google claims handles text better than its predecessor, Imagen 2. The result is the company's highest-quality text-to-image model, with an "incredible level of detail" that produces "photorealistic, lifelike images" and fewer artifacts, essentially pitting it against OpenAI's DALL-E 3.
Big updates to Google Search
Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions ("Find the best yoga or pilates studios in Boston and show details about their promotions and walking time from Beacon Hill.") and using Search to plan meals and vacations, won't be available unless you opt in to Search Labs, the company's platform that lets people try out experimental features.
But a big new feature Google calls AI Overviews, which the company has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now serve up AI-generated answers on top of results by default, and the company says it will bring the feature to more than a billion users worldwide by the end of the year.
Gemini on Android
Google is integrating Gemini directly into Android. When Android 15 is released later this year, Gemini will be aware of the app, photo, or video you have on screen, and you'll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn't bring it up at all in today's keynote.
There were a number of other updates as well. Google said it would add digital watermarks to AI-generated video and text, make Gemini available in the side panel in Gmail and Docs, power a virtual AI teammate in Workspace, listen in on phone calls and detect in real time if you're being scammed, and more.
Stay up-to-date with all the news from Google I/O 2024 here!