Google I/O 2025: 100 things Google announced

69. Over 7 million developers are building with Gemini, five times more than this time last year.

70. Gemini usage on Vertex AI is up 40 times compared to this time last year.

71. We’re releasing new text-to-speech previews for 2.5 Pro and 2.5 Flash. These have first-of-its-kind support for multiple speakers, enabling text-to-speech with two distinct voices via native audio output. Like native audio dialogue, the text-to-speech output is expressive and can capture subtle nuances, such as whispers. It works in over 24 languages and seamlessly switches between them.
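To make the multi-speaker idea concrete, here is a rough sketch of what a request body for such a TTS call could look like. This is an illustration only: the field names (`responseModalities`, `speechConfig`, `multiSpeakerVoiceConfig`) follow the Gemini API's general JSON conventions, and the speaker labels and voice names are hypothetical placeholders.

```python
import json

# Hypothetical request body asking a preview TTS model to read a
# two-speaker dialogue. Field and voice names are assumptions, not
# confirmed API surface.
payload = {
    "contents": [{
        "parts": [{
            "text": "Read this conversation aloud:\n"
                    "Host: Welcome back to the show.\n"
                    "Guest: (whispering) Thanks for having me."
        }]
    }],
    "generationConfig": {
        "responseModalities": ["AUDIO"],
        "speechConfig": {
            "multiSpeakerVoiceConfig": {
                "speakerVoiceConfigs": [
                    # Map each speaker label in the prompt to a voice.
                    {"speaker": "Host",
                     "voiceConfig": {"prebuiltVoiceConfig": {"voiceName": "Kore"}}},
                    {"speaker": "Guest",
                     "voiceConfig": {"prebuiltVoiceConfig": {"voiceName": "Puck"}}},
                ]
            }
        }
    },
}

# Serialized body, ready to POST to a generateContent-style endpoint.
body = json.dumps(payload)
```

The key design point is that the prompt text carries speaker labels ("Host:", "Guest:") and the config maps each label to a voice, which is how two voices can share one synthesized audio stream.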

72. The Live API is introducing a preview version of audio-visual input and native audio output dialogue, so you can build conversational experiences directly.
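A Live API conversation is session-based: the client opens a streaming connection, sends a setup message declaring the model and the response modalities it wants, and then streams user turns while the server streams audio back. The sketch below shows what those first two messages might look like as JSON; the message shapes, field names, and model identifier are assumptions for illustration, not a documented wire format.

```python
import json

# Hypothetical first message on a Live API streaming session: declare
# the model and request native audio responses. Names are assumptions.
setup_message = {
    "setup": {
        "model": "models/gemini-live-preview",  # placeholder model name
        "generationConfig": {"responseModalities": ["AUDIO"]},
    }
}

# A hypothetical follow-up message carrying one complete user turn.
# Later messages could stream audio chunks or video frames instead.
first_turn = {
    "clientContent": {
        "turns": [{"role": "user", "parts": [{"text": "What do you see?"}]}],
        "turnComplete": True,
    }
}

# Messages as they would be serialized onto the streaming connection.
wire = [json.dumps(setup_message), json.dumps(first_turn)]
```

The setup-then-stream pattern is what makes the API "live": the session holds state across turns, so the server can respond incrementally instead of waiting for a full request.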

73. Try it now! Jules is a parallel, asynchronous agent for your GitHub repositories to help you improve and understand your codebase. It is now open to all developers in beta. With Jules you can delegate multiple backlog items and coding tasks at the same time, and even get an audio overview of all the recent updates to your codebase.

74. Gemma 3n is our latest fast and efficient open multimodal model that’s engineered to run smoothly on your phones, laptops, and tablets. It handles audio, text, image, and video. The initial rollout is underway on Google AI Studio and Google Cloud with plans to expand to open-source tools in the coming weeks.

75. Try it now! Google AI Studio now has a cleaner UI, integrated documentation, usage dashboards, new apps, and a new Generate Media tab to explore and experiment with our cutting-edge generative models, including Imagen, Veo and native image generation.

76. Colab will soon be a new, fully agentic experience. Simply tell Colab what you want to achieve, and watch as it takes action in your notebook, fixing errors and transforming code to help you solve hard problems faster.

77. SignGemma is an upcoming open model that translates sign language into spoken-language text (it performs best at American Sign Language to English), enabling developers to create new apps and integrations for Deaf and Hard of Hearing users.

78. MedGemma is our most capable open model for multimodal medical text and image comprehension, designed for developers to adapt and build into their health applications, like analyzing medical images. MedGemma is available now as part of Health AI Developer Foundations.

79. Stitch is a new AI-powered tool to generate high-quality UI designs and corresponding frontend code for desktop and mobile by using natural language descriptions or image prompts.

80. Try it now! We announced Journeys in Android Studio, which lets developers test critical user journeys using Gemini by describing test steps in natural language.

81. Version Upgrade Agent in Android Studio is coming soon to automatically update dependencies to the latest compatible versions, parsing release notes, building the project, and fixing any errors.

82. We introduced new updates across the Google Pay API designed to help developers create smoother, safer, and more successful checkout experiences, including Google Pay in Android WebViews.

83. Flutter 3.32 has new features designed to accelerate development and enhance apps.

84. And we shared updates for our Agent Development Kit (ADK), the Vertex AI Agent Engine, and our Agent2Agent (A2A) protocol, which enables interactions between multiple agents.

85. Try it now! Developer Preview for Wear OS 6 introduces Material 3 Expressive and updated developer tools for Watch Faces, richer media controls and the Credential Manager for authentication.

86. Try it now! We announced that Gemini Code Assist for individuals and Gemini Code Assist for GitHub are generally available, and developers can get started in less than a minute. Gemini 2.5 now powers both the free and paid versions of Gemini Code Assist, delivering advanced coding performance and helping developers excel at tasks like creating visually compelling web apps, along with code transformation and editing.

87. Here’s an example of a recent update you can explore in Gemini Code Assist: quickly resume where you left off and branch into new directions with chat history and threads.

88. Firebase announced new features and tools to help developers build AI-powered apps more easily, including updates to the recently launched Firebase Studio and Firebase AI Logic, which enables developers to integrate AI into their apps faster.

89. We also introduced a new Google Cloud and NVIDIA developer community, a dedicated forum to connect with experts from both companies.

Work smarter with AI enhancements


