Google Trends Netnews360

Google Gemini 2.0 announced with multimodal image and audio output, agentic AI features

Google has unveiled Gemini 2.0 – the latest generation of its AI model, which now supports image and audio output as well as tool integration for the ‘agentic age’. Agentic AI models are AI systems that can independently perform tasks with adaptive decision making – think of automating tasks such as shopping or booking an appointment from a single prompt.

Gemini 2.0 will include multiple agents that can help with everything from giving you real-time suggestions in games like Clash of Clans to choosing a gift and adding it to your cart based on a prompt.

Like other AI agents, those in Gemini 2.0 exhibit goal-oriented behavior: they can create a task-based list of steps and complete them independently. Agents in Gemini 2.0 include Project Astra, designed as a universal AI assistant for Android phones, with multimodal support and integration with Google Search, Lens and Maps.

Project Mariner is another experimental AI agent that can navigate independently in a web browser. Mariner is now available in early preview form for “trusted testers” as a Chrome extension.

Beyond the AI agents, Gemini 2.0 Flash is the first model in the new Gemini 2.0 family. It is an experimental (beta) release for now, with lower latency, better benchmark performance, and improved reasoning and understanding in math and coding compared to the Gemini 1.0 and 1.5 models. It can also generate images natively, powered by Google DeepMind’s Imagen 3 text-to-image model.

Gemini 2.0 Flash Experimental is available on the web for all users and is coming soon to the Gemini mobile app. Users who want to test it should select “Gemini 2.0 Flash Experimental” from the model drop-down menu.

Gemini 2.0 Flash Experimental on the Internet

Developers can also access the new model through Google AI Studio and Vertex AI. Google has confirmed that it will announce more Gemini 2.0 model sizes in January.
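As a rough sketch of what developer access looks like, the snippet below assembles a request body for the Gemini API’s `generateContent` REST endpoint. The model identifier `gemini-2.0-flash-exp` and the exact endpoint path are assumptions based on the announcement; the official SDKs available through Google AI Studio and Vertex AI wrap this kind of call for you.

```python
import json

# Assumed experimental model identifier from the Gemini 2.0 announcement.
MODEL = "gemini-2.0-flash-exp"
# Assumed REST endpoint shape; verify against the current Gemini API docs.
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent"
)

def build_request(prompt: str) -> dict:
    """Assemble the JSON body for a single-turn text prompt."""
    return {
        "contents": [
            {"role": "user", "parts": [{"text": prompt}]}
        ]
    }

# Build (but do not send) a sample request body.
payload = build_request("Suggest a birthday gift under $30.")
print(json.dumps(payload, indent=2))
```

Sending the request additionally requires an API key from Google AI Studio, passed as a query parameter or header alongside an HTTP POST of this JSON body.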

