Google just announced some nifty improvements to its Gemini AI chatbot for Android devices as part of the company’s I/O 2024 event. The assistant is now built into the Android operating system itself, allowing for much deeper integration.
The coolest new feature wouldn’t be possible without that integration with the underlying OS. Gemini is now much better at understanding context as you control apps on the smartphone. What does this mean exactly? Once the tool officially launches as part of Android 15, you’ll be able to bring up a Gemini overlay that rests on top of the app you’re using. This will allow for context-specific actions and queries.
Google gives the example of quickly dropping generated images into Gmail and Google Messages, though you may want to steer clear of historical images for now. The company also teased a feature called “Ask This Video” that lets users pose questions about a particular YouTube video, which the chatbot should be able to answer. Google says this should work with “billions” of videos. There’s a similar tool coming for PDFs.
It’s easy to see where this tech is going. Once Gemini has access to the lion’s share of your app library, it should be able to actually deliver on some of those lofty promises made by rival AI companies like Humane and Rabbit. Google says it’s “just getting started with how on-device AI can change what your phone can do,” so we imagine future integration with apps like Uber and DoorDash, at the very least.
Circle to Search is also getting a boost thanks to on-board AI. Users will be able to circle just about anything on their phone and receive relevant information, and Google says they’ll be able to do this without switching apps. This even extends to math and physics problems: just circle one for the answer, which is likely to please students and frustrate teachers.