Google I/O 2025 Recap: All the Major Announcements and Highlights
By Hossamudin Hassan (@ePreneurs)

TL;DR
- AI takes center stage: Google unveiled major advances in its Gemini AI models and Google DeepMind research, embedding powerful generative AI across Search, Workspace, Android and more. A new Google AI Ultra subscription offers top-tier AI access, while an experimental “Agent Mode” hints at a future universal AI assistant.
- Android ecosystem evolves: Android is expanding beyond phones – Google introduced an Android XR platform for AR/VR experiences, previewed Wear OS 6 for smartwatches, and baked generative AI into app development tools. Meanwhile, Google Play is improving app quality and engagement with new developer tools and storefront features.
- New hardware & platforms: Google teased futuristic devices and platforms. Pixel 9a launched as an affordable AI-packed phone, and Google gave a sneak peek of Samsung’s Project Moohan VR headset and prototype Android XR smart glasses for AR. Project Starline evolved into Google Beam, an AI-driven 3D telepresence platform for ultra-realistic video calls.
- Tools for developers: From Firebase to Flutter, Google rolled out updates to empower developers. Flutter 3.32 adds faster UI development features, and Firebase’s new AI extensions make it easier to build intelligent apps. Google also open-sourced Gemma AI models for mobile, medical, and accessibility use cases, and launched coding assistants like Gemini Code Assist and Jules to automate software tasks.
- Core apps get smarter: Google Search is becoming conversational and “agentic” with AI Mode and interactive features in Search Labs. Google’s core products are infused with AI: think smarter Google Maps directions in AR glasses, Photos that edit themselves, Gmail that drafts replies in your personal style, and even near real-time translation in Google Meet calls. In short, Google I/O 2025 showcased an AI-everywhere future.
Stylized Google I/O 2025 logo imagery, reflecting Google’s all-in embrace of AI and innovation this year.
Artificial Intelligence: Gemini, Generative AI and DeepMind
- Gemini 2.5 gets smarter and more secure: Google announced that its latest flagship AI model Gemini 2.5 Pro now leads the WebDev Arena and LMArena leaderboards. A new experimental “Deep Think” mode can be enabled for Gemini 2.5 Pro to tackle highly complex math and coding problems with improved reasoning. Under the hood, Google DeepMind also strengthened Gemini’s safety – a new security approach significantly improved the model’s defenses against indirect prompt injection attacks during tool use, making Gemini 2.5 Google’s most secure model family to date. In practical terms, this means developers and users can trust Gemini with more sensitive tasks as it gets both smarter and safer.
- Gemini app becomes more personal and agentic: The Gemini AI app (Google’s consumer AI assistant app) is getting major upgrades to act more like an intelligent personal tutor and helper. It can now generate interactive practice quizzes on any topic to help you study. Coming soon, Gemini will connect with your favorite Google apps – Calendar, Maps, Tasks, Keep – so it can take actions during a chat (e.g. add an event to your calendar or give you info about a location) without you lifting a finger. The app is also rolling out camera and screen-sharing abilities, meaning you can show Gemini what you see through your phone camera or share your screen and have a conversation about it. Perhaps the most futuristic feature is an upcoming “Agent Mode,” where you simply describe a goal and Gemini will execute tasks on your behalf autonomously. (Early demos showed it making appointments and bookings via text.) This could turn Gemini into a true universal AI assistant – a vision Google’s CEO Sundar Pichai and DeepMind’s Demis Hassabis emphasized on stage. While still experimental, these updates indicate Gemini is evolving to be more proactive, contextual, and capable – a potential answer to rival AI assistants like ChatGPT.
- A “world model” AI and Project Astra: Google hinted at the next phase of AI assistants with Project Astra, a research prototype of a “world model” AI that can plan and take actions in the real world. By combining Gemini’s multimodal understanding (vision, audio, text) with memory and tool use, Astra can observe your environment and carry out multi-step tasks. A demo showed Astra’s capabilities as a conversational tutor that could walk a student through homework problems step-by-step, even drawing diagrams to explain concepts. Another demo assisted a visually impaired user by describing their surroundings and helping with errands. These prototypes – essentially early universal AI assistants – will inform future Gemini upgrades. Google says some Astra innovations (like more natural voice output and improved computer control) will make their way to Gemini Live for users and developers later this year. They even teased integration with upcoming Android XR glasses, showing Gemini handling real-world tasks like translating a conversation in real time via AR subtitles. It’s clear Google’s goal is an AI that is always available, understands your context, and can act across both the digital and physical world – though significant work remains to reach that vision.
- New generative AI tools for creativity: Google I/O 2025 brought a creative flair with the debut of advanced generative media models. Veo 3, Google’s latest text-to-video model, can now generate short videos with audio from a simple prompt and is available to try in the Gemini app for Google AI Ultra subscribers in the U.S. An updated Imagen 4 model for image generation produces remarkably detailed images (even handling tricky elements like human skin or text in images) at up to 2K resolution. Imagen 4 is already live in the Gemini app and will soon offer a Fast mode that’s up to 10× quicker than Imagen 3. To help people harness these models, Google introduced Flow, a new AI-powered filmmaking tool that lets creators “weave” together cinematic videos by describing characters, scenes and styles. With Flow, even those without editing skills can produce visually striking movies by tapping Google DeepMind’s best generative models for video and imagery. (Flow is launching first to Google AI Pro/Ultra subscribers in the U.S.) These tools underscore Google’s push to empower creativity through AI – imagine designing videos, art, and music with just your imagination and an AI collaborator. In fact, Google highlighted a partnership with filmmaker Darren Aronofsky’s startup Primordial Soup, which is using these generative models (like Veo) to produce three short films. One of the first AI-assisted films from this effort will premiere at the Tribeca Festival, signaling that AI-generated media has arrived on the creative stage.
- AI for everyone – new subscription plans: To get these cutting-edge AI features to users (and generate revenue), Google announced new Google AI subscription tiers. Google AI Ultra is a premium plan that grants the highest access limits to Google’s most advanced models (like Gemini 2.5 Pro) and features such as Veo 3 and Flow. It also bundles 30 TB of cloud storage and YouTube Premium in one package. Ultra isn’t cheap at $249.99/month (with a 50% off intro offer) – clearly targeting businesses or power-users – but it essentially gives subscribers the full power of Google’s AI across products. For consumers, there’s also Google AI Pro at $19.99/month, offering a suite of AI tools with more generous limits than the free tier. Notably, Google is also giving college students free upgrades to standard Gemini features for a school year in select countries. These plans show Google’s confidence that some users will pay for premium AI access. The move also raises questions for the ecosystem: Will advanced AI become a paid luxury, or gradually trickle down to free services? Regardless, the subscription model indicates Google’s AI is moving out of the lab and into a product phase, where usage can be monetized (similar to how OpenAI offers paid tiers for ChatGPT).
Android and Mobile Ecosystem
- Generative AI comes to Android apps: Google is making it easier for Android developers to build intelligent, personalized apps using generative AI. At I/O, they unveiled new ML Kit GenAI APIs that let apps run Gemini Nano models on-device for common use cases. For example, Google demonstrated an AI-powered sample app called Androidify that creates a custom Android robot avatar of you from a selfie. Developers can plug into these APIs to add capabilities like image editing, content generation, or AI responses right into their Android apps without needing huge cloud models (a rough Kotlin sketch of this kind of integration appears after this list). Google also introduced Gemini-powered assistants in Android Studio to help developers during coding (more on that in the Developer Tools section). Overall, Android is being positioned as an AI-native platform – so expect your future apps and mobile games to feel a lot smarter and more context-aware. For users, this means more Android apps that can generate content, personalize to your style, and even act autonomously (with your permission). It’s a big step toward Google’s vision of phones that don’t just run apps, but also have an “always-on” AI to help you within those apps.
- Android XR: the next frontier of AR/VR: Perhaps the most exciting Android news wasn’t about phones at all – it was Google’s push into immersive Augmented and Virtual Reality via Android XR. Android XR is a new platform (built on Android) tailored for XR devices like VR headsets and AR glasses. At I/O 2025, Google gave us our first real look: They showcased Samsung’s Project Moohan, an upcoming XR headset that runs Android XR and offers “infinite screen” virtual experiences. The headset, launching later this year, is standalone (no PC required) and is seen as a potential Android answer to Apple’s Vision Pro. More impressively, Google demoed prototype Android XR smart glasses – lightweight glasses equipped with a camera, mics, speakers, and an optional in-lens display. Paired with your phone and with Gemini providing the intelligence, these glasses can act as a context-aware wearable assistant. In real-world demos, Google showed the glasses translating a bilingual conversation live (displaying subtitles in your field of view) and giving turn-by-turn walking directions overlaid on the streets in front of you. They also let you capture photos or send messages hands-free with just voice commands. This sneak peek illustrated how an “Android + Gemini” combo on glasses could let you see information about everything you’re looking at and communicate seamlessly, all without pulling out your phone. Google announced that these AR glasses are in the hands of trusted testers now to refine the experience and privacy safeguards. To make sure the glasses are stylish and wearable, Google is partnering with trendy eyewear brands like Gentle Monster and Warby Parker for future frames. They’re also working with Samsung on a reference design for glasses, expanding their partnership beyond the initial headset. All of this signals that Google is serious about getting AR glasses (not just VR headsets) right this time – learning from the old Google Glass missteps. It will take time to build out the hardware and app ecosystem (developers can start later this year on the XR platform), but Android XR could unify the currently fragmented AR/VR landscape under a common, open ecosystem. For consumers, it means in a couple of years we might be choosing between Android and Apple when it comes to the smart glasses on our face.
- Wear OS 6 and mobile updates: On the wearable front, Google announced a Developer Preview of Wear OS 6, the next-gen operating system for Android smartwatches. Wear OS 6 adopts the new Material 3 “Expressive” design language tailored for round watch screens, bringing updated UI components that look more at home on a watch. It also introduces better watch face development tools and a Credential Manager API for seamless authentication on watches. For users, this likely means upcoming Wear OS watches (think the next Pixel Watch and others in late 2025) will have a fresh look and smoother, more integrated apps – all while maintaining compatibility thanks to these developer previews. Google also used I/O to tout the continuing expansion of Android across device types: they highlighted Android’s reach on tablets, foldables, Auto, TV, and now XR, emphasizing new tools to help developers build apps that adapt across these 500M+ devices in the Android ecosystem. In short, Android isn’t just a phone OS anymore – it’s an all-devices OS – and Google is polishing each form factor with modern design and AI smarts.
- Google Play enhancements: For app developers and entrepreneurs, Google Play received some welcome updates aimed at improving app quality and monetization. A redesigned Play Console focuses on key developer objectives and adds dedicated overview dashboards, starting with “Test and release” and “Monitor and improve” pages (with Grow and Monetize overviews coming soon). These dashboards pull together relevant metrics and even give “Take action” suggestions to help developers fix issues or boost engagement. Notably, Google Play will soon allow developers to halt a full rollout even after an app update reaches 100% of users. This ability to halt a fully rolled-out release is a big deal – it means if a bad bug slips through, you’re no longer completely stuck waiting to push a new patch. On the business side, Google is rolling out new subscription capabilities that make checkout easier and reduce subscriber churn (likely things like streamlined free trials, upgrade/downgrade flows, etc.). And for users, the Play Store is becoming more “content-rich” to foster repeat engagement – think more editorial content, stories, and videos to help you discover apps you’ll love. All these changes show Google Play maturing into a platform that not only delivers apps, but also helps developers continually improve their apps and build sustainable businesses.
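The exact surface of the new ML Kit GenAI APIs may still change before general availability, so as a rough flavor of what “calling a Gemini model from Kotlin” looks like today, here is a minimal sketch using the existing Google AI client SDK for Android (a cloud-hosted call rather than on-device Gemini Nano). The class name, prompt, and API-key wiring are illustrative assumptions, not Google’s sample code.

```kotlin
// Minimal sketch: prompt a Gemini model from an Android app using the
// Google AI client SDK for Kotlin. The new on-device ML Kit GenAI APIs follow
// a similar prompt-in / text-out pattern, but their exact surface may differ.
import com.google.ai.client.generativeai.GenerativeModel

class AvatarDescriber(apiKey: String) {
    // A lightweight Gemini model; swap in whichever model your project uses.
    private val model = GenerativeModel(
        modelName = "gemini-1.5-flash",
        apiKey = apiKey
    )

    // Suspend function: call it from a coroutine, e.g. viewModelScope.launch { ... }.
    suspend fun describeAvatar(selfieNotes: String): String {
        val prompt = "Describe a playful Android robot avatar based on: $selfieNotes"
        val response = model.generateContent(prompt)
        return response.text ?: "(no text returned)"
    }
}
```

In a real Androidify-style app the input would be the selfie image itself (the SDK also accepts image content alongside text), but the text-only call above keeps the example self-contained.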
Hardware Highlights: Pixel, Wearables and More
- Pixel 9a: Affordable phone with serious AI smarts – While the I/O keynote focused on software, Google quietly expanded its hardware lineup just before the event with the launch of the Pixel 9a. Priced at just $499, Pixel 9a brings many of the Pixel 9 series’ premium features down to the midrange. It sports a fresh design with a bright 6.3-inch display and the best camera system you’ll find under $500 – including a 48MP main camera and 13MP ultrawide that produce stunning photos. Google didn’t skimp on the software tricks either: Pixel 9a includes all the AI-powered camera features Pixels are known for, like Magic Eraser for removing unwanted objects, Photo Unblur, Best Take for group photos, and even the new Magic Editor that can reframe and edit photos with generative AI. Under the hood, it ships with Gemini Nano, Google’s on-device AI model, powering on-device assistant features. This means Pixel 9a users get the benefits of Gemini (like smarter recommendations and the new Gemini Live voice assistant mode) without needing a flagship phone. Google is promising an impressive 7 years of OS and security updates for the 9a, making it a longevity champion in its class. The Pixel 9a may not have been a headline announcement at I/O, but it embodies Google’s theme of “AI everywhere” – bringing advanced AI capabilities to a budget-friendly device that more people can access. For anyone who wants Pixel’s AI camera and assistant features without breaking the bank, the 9a is a big win.
- Sneak peek at AR/VR devices: Google’s hardware future is firmly entwined with Android XR. As described above, I/O 2025 offered glimpses of two notable upcoming devices: Samsung’s Project Moohan VR headset and Google’s own prototype AR glasses. While not available yet, these represent Google’s vision for post-phone computing. The Samsung headset (expected late 2025) is like a Quest-style device but supercharged with Google’s software – a sign of the Android+Samsung partnership extending to XR. The AR glasses, still in testing, show Google applying its Pixel prowess to wearables – potentially these could become the “Pixel Glasses” down the line. Google also noted it’s partnering with other manufacturers (like optical companies and even Chinese AR startup Xreal) to seed an ecosystem of devices on Android XR. The key takeaway: unlike the original Google Glass which was a lone experiment, this time Google is fostering a whole family of AR/VR hardware built on its platform. If successful, Android XR hardware from multiple brands could compete collectively against Apple’s singular approach in AR/VR.
- Google Beam: 3D video calls become reality – One of the coolest new “products” announced wasn’t a traditional gadget, but a next-gen communication platform. Project Starline, Google’s long-running research into lifelike 3D telepresence, is graduating into a real product called Google Beam. Beam uses AI and specialized display tech to turn standard video calls into hologram-like 3D conversations, making it feel like you’re sitting in the same room as the other person. At I/O, Google revealed that later this year the first Beam devices will ship, built in partnership with hardware maker HP and integrated with Zoom for enterprise customers. The demos showed how Beam can capture your likeness in 3D and project it to someone else’s Beam booth, complete with spatial audio and depth – so eye contact and body language are preserved much better than on a flat screen. This has huge implications for remote work, education, telemedicine, and staying in touch with family afar. Google is initially positioning Beam for businesses (where the cost of a dedicated booth can be justified), but as the technology matures and potentially shrinks, we could see it in homes down the line. By leveraging AI to reconstruct 3D scenes from regular cameras, Google Beam aims to make “virtual presence” feel real. It’s the kind of moonshot hardware project (akin to Microsoft’s Hololens or Meta’s VR spaces) that could redefine how we communicate – and it’s impressive to see Google bringing it to market after years in the lab. Keep an eye out for the first public demos of Beam at trade shows like InfoComm in a few weeks, where HP will showcase the tech.
- Other hardware notes: No new Pixel flagships or tablets were introduced at I/O 2025 (those typically come in the fall), but Google’s device lineup is steadily marching forward. The Pixel 9 series from last year will soon get the latest Android updates and feature drops (influenced by I/O’s announcements). We may also see the next Pixel Watch later in the year incorporating the Wear OS 6 improvements and perhaps more Fitbit integration. And while not explicitly announced, Google’s continued work on custom AI chips (TPUs) and mobile SoC improvements suggests future Pixels will only get more AI-capable. In summary, I/O 2025’s hardware story was about the road ahead – laying the groundwork for new kinds of devices (AR, 3D calling) – while the immediate hardware focus is on making sure current Google devices, like the Pixel 9a and Wear OS watches, seamlessly tap into the burgeoning AI ecosystem.
Developer Tools and Platforms
- Gemini Code Assist & Jules – AI pair programmers: To boost developer productivity, Google announced the general availability of its AI coding assistants. Gemini Code Assist (both the individual version and the GitHub integration) is now out of preview and free for all developers to use. It’s powered by the latest Gemini 2.5 models, offering much improved code generation and transformation capabilities. Essentially, Code Assist acts like a ChatGPT/Copilot-style helper right in your IDE – you can ask it to generate a function, debug an error, or even build a visually compelling web app UI from a description. For example, Google says Code Assist excels at tasks like creating responsive layouts and editing code to optimize performance. In tandem, Google introduced Jules, an “autonomous coding agent” that handles larger-scale codebase tasks asynchronously. Now in open beta for all developers, Jules can be assigned multiple to-dos (like upgrading library versions, refactoring code, or adding a feature) and will spin up its own cloud VM to work through your repository. It even runs tests and can open a pull request with its changes when done. This is like having a junior developer robot that works in parallel to you. The implications for developers are significant: routine or boilerplate tasks can be offloaded to AI, allowing human devs to focus on creative and complex parts of software building. Some attendees dubbed it “CI/CD meets AI” – we are likely to see faster development cycles and maybe even one-person teams accomplishing what used to take many, thanks to these tools.
- Firebase gets AI-savvy: Google’s popular mobile/backend platform Firebase introduced a slew of new features to help developers build AI-powered apps more easily. One highlight is updates to Firebase Extensions, which now include pre-built integrations for generative AI – think extensions that can, say, summarize text or analyze sentiment using Gemini behind the scenes. Firebase also showcased updates to Firebase Studio (launched earlier this year), its cloud-based workspace for prototyping and building full-stack AI apps, alongside Firebase AI Logic, which helps developers integrate AI into their apps faster. The goal is to let developers add AI-driven functionality (recommendations, chat responses, image transformations, etc.) without needing to become ML experts. With a few clicks, a Firebase app can call on a Google-hosted model or API (a minimal Kotlin sketch of this appears after this list). Additionally, Firebase announced better support for local-first machine learning (on-device AI), tying in with the ML Kit GenAI on Android mentioned earlier. This end-to-end focus – from cloud to device – positions Firebase as an AI-centric app platform, which is great news for the huge community of web and app developers that rely on it. We can expect a wave of Firebase apps that offer smarter user experiences by leveraging these services (for instance, a chat app that auto-moderates content or a shopping app that offers AI styling advice, all thanks to a few Firebase extensions).
- Flutter 3.32 and Dart 3.8: For the cross-platform developers, Google’s Flutter UI toolkit got a fresh update to version 3.32 (with Dart language v3.8). The focus was on developer productivity and app performance enhancements. Flutter 3.32 introduces new widgets and improvements that make it faster to build adaptive layouts – so your UI can smoothly scale between phone, tablet, desktop, and embedded screens. It also has better integration with Material 3 design out of the box, and an improved Element/Render object model under the hood that boosts rendering speed for complex interfaces. Additionally, Dart 3.8 brings some language refinements and performance tuning (like faster memory allocation and new lint rules for safer code). Google boasted that several teams (including Google’s own Cloud team) have dramatically accelerated their development by switching to Flutter for multi-platform apps. With this update, Flutter continues its march toward being a one-stop framework for apps on Android, iOS, web and beyond – now with the tooling maturity and stability that enterprises need. For developers, the message is clear: Flutter is production-ready at scale, and it’s getting even more efficient so you can code once and deploy everywhere faster than before.
- Android Studio gets AI upgrades: Google is also baking AI into the tools Android developers use. In Android Studio, they previewed “Gemini in Android Studio” features – notably, Journeys and the Version Upgrade Agent. Journeys is an AI agent that helps write and execute end-to-end test scenarios: you can describe a user flow in natural language, and it will generate the test code and even run it for you. This can save countless hours writing tedious UI tests. The Version Upgrade Agent does exactly what it sounds like – it uses AI to automatically update your app’s dependencies to the latest versions, including reading through release notes and making necessary code changes. Anyone who has maintained an app knows how time-consuming upgrades can be, so this agent could be a lifesaver for keeping projects fresh and secure. These are early previews, but they highlight a theme: Google is infusing AI into developer workflows, not just into user-facing products. The result could be faster development cycles and fewer grunt-work tasks for humans. Google also announced general availability of the Live API for Gemini (allowing apps to accept audio-visual input and provide native audio output) and new text-to-speech capabilities in Gemini 2.5 with support for multiple voices in 24+ languages – all tools that developers can use to make their apps more interactive and multimodal.
- Open-source multimodal models (Gemma family): In a notable shift, Google is embracing the open-source AI community with new releases in the Gemma model family (not to be confused with Gemini – Gemma is separate). They announced Gemma 3n Preview, which is an efficient, mobile-first multimodal model that can handle text, images, audio, and even video inputs. Gemma 3n is designed to run on phones and laptops with limited resources, offering flexibility and privacy for on-device AI tasks (see the on-device sketch after this list). Google is rolling it out on Google AI Studio and Google Cloud, with plans to integrate it with open-source tools in the coming weeks. For specialized domains, Google also introduced MedGemma, an open model tailored for medical applications that can analyze medical texts and images (think assisting in diagnostics or research). And to help with accessibility, they previewed SignGemma, a model that can interpret sign language gestures from video and translate them into text in real time. This could empower developers to build apps for the Deaf and hard-of-hearing community (for instance, an app that captions sign language for non-signers). By releasing these models (or at least offering previews with intention to open-source), Google is encouraging developers to innovate on top of them. It’s a recognition that not every use-case will be served by Google’s closed APIs – sometimes the community will take a base model and fine-tune it for niche scenarios. In summary, Google is extending an olive branch to open-source AI developers, providing powerful base models that can run locally and be adapted widely. This could spur a wave of independent AI applications far beyond Google’s own imagination, from healthcare to education to assistive tech.
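To make the Firebase point above concrete, here is a minimal Kotlin sketch of calling a Gemini model through the Vertex AI in Firebase SDK (the surface that Firebase AI Logic builds on). The helper function and prompt are illustrative assumptions, and exact package names may shift as the AI Logic rebrand rolls out.

```kotlin
// Minimal sketch: a Firebase-backed Android app asking Gemini to summarize text,
// using the Vertex AI in Firebase Kotlin SDK (no API key in the client; the
// Firebase project configuration handles auth). Names below are illustrative.
import com.google.firebase.Firebase
import com.google.firebase.vertexai.vertexAI

// Hypothetical helper: summarize a support ticket before showing it in the UI.
suspend fun summarizeTicket(ticketText: String): String {
    val model = Firebase.vertexAI.generativeModel("gemini-1.5-flash")
    val response = model.generateContent(
        "Summarize this support ticket in two sentences: $ticketText"
    )
    return response.text ?: ticketText
}
```

For the on-device Gemma story, running an open model locally on Android typically goes through the MediaPipe LLM Inference API today. The sketch below assumes that path and a Gemma model bundle already downloaded to the device; option names may vary by release, and support for Gemma 3n specifically through this API is an assumption.

```kotlin
// Minimal sketch: on-device text generation with a locally stored Gemma bundle
// via MediaPipe's LLM Inference task. Model path and option values are placeholders.
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun runLocalGemma(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma_model.task") // pre-downloaded model bundle
        .setMaxTokens(256)
        .build()
    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse(prompt) // blocking call; run it off the main thread
}
```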
Search and Core Products: AI-Infused Search, Maps, and More
- Google Search becomes an AI-powered assistant: The traditional Google Search is transforming dramatically with the rollout of AI Mode (which grew out of the Search Generative Experience work in Labs). At I/O, Google announced that AI Mode is now starting to roll out to all users in the U.S. – no waitlist needed. With AI Mode enabled, Search will not just show links, but also generate conversational answers at the top of your results for broad queries (“Tell me about the Cairo food scene”). It’s like having Gemini integrated into Search, grounded with real-time web information. For deeper inquiries, Google is introducing a Labs experiment called “Deep Search,” which will use Gemini to provide more thorough, analytical responses for those who want a deep-dive explanation. For example, you could ask something complex like “compare the economic policies of two countries” and get a detailed, sourced analysis rather than a brief snippet. Perhaps the most futuristic feature showcased: Search Live. Using technology from DeepMind’s Project Astra, Search Live will allow you to have a back-and-forth conversation with Search about what you’re seeing through your camera. Imagine pointing your phone at a landmark or a product – you can then ask questions orally (“What is this building’s history?” or “Does this appliance come in other colors?”) and get answers in real time, with the AI understanding the context of the camera view. This effectively turns Search into a vision AI assistant for the real world. Google said Search Live is coming this summer for certain use cases. They’re also testing agentic abilities in Search via Project Mariner, letting the AI not just tell you information but help you accomplish tasks like booking event tickets or making restaurant reservations right from the Search interface. All told, Search is evolving from a static query-response engine into a dynamic conversational helper. For users, this means more direct answers and help from Google (fewer clicks to third-party sites for simple needs), but Google emphasized it’s continuing to send traffic to publishers for detailed content. Early data shows these AI results have increased user engagement with search by over 10% in key markets. The implications are huge: SEO as we know it will shift (as AI-synthesized answers become common), and users will come to expect natural language interactions and even proactive help when using Google.
- Shop with AI: virtual try-on and more: In the realm of e-commerce, Google is leveraging AI to make shopping more personalized. One headline feature is the new virtual try-on for apparel integrated into Google Search. Now, when you’re browsing clothing on Search (such as jackets or dresses), you can tap a “try it on” button and upload a photo of yourself, and Google will virtually fit the garment onto your photo. Thanks to diffusion models trained on tons of product images and a diversity of body types, the result is a realistic visualization of how that piece of clothing would look on you – size, drape, and even lighting considered. This virtual fitting room initially works with women’s tops from major brands and is expanding to other categories. It’s rolling out to U.S. users via Search Labs (you can opt in to test it now). Additionally, Google showcased an AI-powered shopping guide in Search’s new AI Mode. If you ask something like “I need a new stroller for jogging,” the AI will have a nuanced conversation with you about considerations (terrain, child age, budget) and present options from Google’s Shopping Graph that fit your needs. It’s a far cry from just ten blue links – it’s more like a personal shopper assistant. They also introduced an “agentic checkout” feature: you can tell the AI your target price for a product and it will proactively track and alert you if the price drops to your range. The bottom line: Google is using AI to reduce the friction in online shopping, from discovery to decision to purchase. Competing e-commerce platforms will have to keep up with these rich, AI-driven experiences. And for users, buying stuff online might become much more interactive and confidence-inducing (no more guessing how that shirt might fit – you can see it).
- Smarter Google Maps and contextual help: While Maps didn’t have a specific new feature announcement, it played a supporting role in many demos – from Gemini Live being able to add locations to your Maps app, to Android XR glasses showing Maps directions in your view. One subtle but important update: Google is now connecting more of its apps to AI assistants. For instance, Gemini Live (the Gemini app’s real-time voice conversation mode) will soon be able to interface with Google Maps, Calendar, Tasks, and Keep to perform actions and fetch info mid-conversation. This means you could be chatting with the AI and say “Remind me to visit this museum” and it will quietly add that to your Calendar or Tasks. Or ask, “How far is the airport from my hotel?” and it can pull the distance from Maps. This kind of integration turns Google’s services into a mesh of interconnected helpers rather than siloed apps. We can expect Google Maps specifically to keep getting more AI features – perhaps predictive navigation (anticipating where you want to go based on context) or conversational queries (“Find scenic routes near me that avoid tolls”). At I/O they also mentioned immersive maps experiences (like the Immersive View introduced earlier for cities) will continue to expand. All in all, Maps is steadily becoming more than a navigation app – it’s part of Google’s ambient computing fabric where information flows freely to where it’s needed, when it’s needed.
- Gmail and Workspace enhancements: Google’s core productivity apps are also getting AI makeovers. In Gmail, they announced “Help me reply” is evolving into truly personalized smart replies that match your own writing tone and context. Later this year, Gmail will be able to draft reply suggestions by pulling in relevant details from your past emails or Drive files, and crucially, it will adjust the style to sound like you. No more oddly generic AI replies – if you tend to be brief and friendly, the AI will compose something similar; if your emails are lengthy and formal, it’ll do that. Google is very aware that email is personal, so this feature could be a game-changer in making AI assistance feel seamless rather than awkward. Similarly, Google Docs and the rest of Workspace will keep refining their Gemini features (formerly branded Duet AI) – e.g. auto-generating documents, formulas in Sheets, speaker notes in Slides, etc., with more user control and context. Google shared that these enhancements aim to save office workers time on rote work (like summarizing long threads or drafting routine docs) so they can focus on high-level thinking. One impressive addition is in Google Meet: they turned on live speech translation in Meet calls, where the translated speech retains the speaker’s voice and tone thanks to advanced speech synthesis. So if someone speaks Spanish and you hear English, it will still sound a bit like their voice, just translated – making multilingual meetings feel far more natural. This is enabled by Google’s multi-speaker text-to-speech models (part of Gemini) that can mimic voices. It’s currently in early access for select languages. Combined with live captions and summaries, Meet is becoming an AI-powered communication hub that breaks language barriers in real time.
- NotebookLM and the future of note-taking: Google previously introduced NotebookLM through Labs – an AI tool to help organize and summarize information (originally codenamed “Project Tailwind”). At I/O 2025, they announced the new NotebookLM mobile app is launching on Android and iOS. This app acts like your personal research assistant: you can feed it PDFs, Google Docs, images, or notes, and it will generate Audio Overviews and summaries that you can listen to on the go. You can also now choose how long or detailed you want these AI-generated summaries to be. A forthcoming update will even offer Video Overviews, turning dense documents into annotated video summaries. Google demonstrated an example where NotebookLM ingested a lengthy report and produced a short narrated video with key points and visuals. This could be transformative for students and analysts – imagine digesting a 50-page research paper in a 2-minute AI video. NotebookLM basically applies Gemini’s generative abilities to the problem of information overload, helping users learn quicker. By releasing a dedicated app, Google is signaling this is an area of focus. It ties in with their education initiatives too – alongside things like Learn About (a conversational learning experiment) and new features in Google Classroom, all aimed at making learning more interactive with AI. If you’re someone who deals with a lot of reading material (knowledge workers, students, etc.), NotebookLM and its upcoming features might become your favorite productivity hack.
- SynthID and AI content responsibility: One more core update worth noting is Google’s efforts in AI responsibility and content authenticity. They announced an expansion of SynthID, a tool that embeds invisible watermarks into AI-generated content to identify it later. At I/O, Google opened up the SynthID Detector – a portal where journalists, creators, and eventually anyone can upload an image to see if it was AI-generated by a Google model. Since launch, SynthID has watermarked over 10 billion pieces of AI-generated content, and this detector is a response to the increasing concern about deepfakes and misinformation. It’s not a flashy consumer feature, but it’s very much part of Google’s “core product” responsibilities – ensuring that as they flood their apps with AI content, they also provide ways to verify and filter AI vs. human-produced media. Google has also discussed tagging AI-generated text or images with metadata in apps like Gmail and Docs, and in Search they’re working on labeling AI-created images in results. This is an ongoing challenge industry-wide, but Google is trying to set a standard for transparency.
New Products and Platforms Announced
- Google AI Ultra & Pro – new AI subscriptions: As detailed earlier, Google introduced Google AI Ultra and Google AI Pro subscription plans. These essentially turn Google’s suite of AI software into a premium product offering for the first time. Ultra, at $250/month, is like the “all you can eat” plan for AI – unlimited (or very high) usage of Gemini’s most powerful models, priority access to new features (like the latest Imagen and Veo models in the Gemini app), and extras like cloud storage. It’s aimed at enthusiasts, researchers, and enterprises that rely heavily on AI. The Pro plan at $20/month is more consumer-friendly, bundling things like the new Flow tool for video creation, the NotebookLM app with advanced features, and higher limits for everyday AI use. By launching these, Google is effectively creating a new consumer product category: AI-as-a-service. It will be interesting to see how many people subscribe (especially to Ultra). The existence of a paid tier also implies that the free experiences might remain somewhat limited or have upsells (“This task requires Ultra”). From a business perspective, Google is diversifying beyond ad revenue – if AI Ultra takes off, it’s a subscription revenue stream similar to what Microsoft is doing with Copilot for Office. For users, if you really love playing with AI or need it for work (e.g. a writer, developer, or designer leveraging these tools), the Pro tier might be compelling. Google also indicated they will expand these plans to more countries over time. One clever inclusion: Ultra comes with YouTube Premium – likely to sweeten the deal and possibly to justify the high cost with additional value. In summary, Google AI is now a branded product, not just a background capability.
- Google Beam – the 3D calling platform: Google Beam, covered above, is essentially a new platform for immersive communication. It’s both hardware (the specialized booth/display) and software (AI for image reconstruction, compression, etc.). At I/O, Google framed Beam as part of its mission to “communicate better, in near real time” no matter the distance. Launching with enterprise partners puts it in the same realm as something like Cisco Telepresence or Zoom Rooms – but with a big leap in realism. Beam could be considered a new Google product line (likely under Google Cloud or Enterprise offerings). We might even see Google branding around it if it expands – e.g. Beam-enabled conference rooms, or a “Powered by Google Beam” certification for hardware. The key point is that Beam is moving out of R&D and becoming something real customers can buy/use in 2025. That’s an announcement many have been waiting for since Starline was first demoed in 2021. If all goes well, Google Beam might trickle down to more accessible formats (perhaps smaller devices or even VR-based implementations) and could become a staple in remote collaboration. This aligns with Google’s overarching theme this I/O of bringing forward-looking tech (AI, AR, 3D) into practical reality.
- Flow – AI filmmaking as a platform: While Flow was mentioned as a creative tool, it’s worth noting it represents a new creative platform for Google. They even launched a site called Flow TV to showcase AI-made short films and content as inspiration. Flow isn’t just an app; it’s an ecosystem where Google may collaborate with artists, integrate with YouTube (imagine one-click sharing of AI films), and continuously improve generative models for video. With generative AI, Google essentially has a new “content platform” where the users create the content with AI. It’s not exactly a social network, but it could drive engagement (people spending time crafting and watching AI-generated stories). Especially with partnerships like the Primordial Soup film project, Google is positioning itself at the intersection of AI and Hollywood. If Flow gains traction, we may see an influx of AI-assisted indie films, YouTube videos, even ads – all made more accessible by this platform. Google likely will iterate on Flow with feedback from the creative community. In a few years, it might be as normal to use “Google Flow” to storyboard and produce a video as it is to use Google Docs to write a report.
- Other new launches: A few other noteworthy new products/platforms: Google launched a new Google Cloud and NVIDIA developer community, signaling close collaboration on AI infrastructure and giving devs a forum to share GPU and AI knowledge (essentially supporting the AI developer ecosystem beyond Google’s own models). They also talked about the Vertex AI Agent Engine and Agent2Agent Protocol – new cloud tools that let developers orchestrate multiple AI agents that can talk to each other. This is a new paradigm for building software – instead of one AI doing everything, you might have a set of specialized agents (one for logic, one for UI, one for external API calls) collaborating; a toy sketch of that pattern follows below. Google providing the backbone for that (the Agent Development Kit, or ADK, and the A2A protocol) is a big platform play, likely to compete with startups in the “AI agents” space. Finally, in the consumer space, Google’s Sparkify experiment was effectively a mini product announcement: it’s a fun web tool that turns your question into a short animated video (using Gemini and Veo). It’s in waitlist mode, but it hints at where things like Google’s Arts & Culture experiments might go – turning text into multimedia experiences. These smaller launches show Google’s willingness to spin up new apps and platforms to explore what resonates with users in the age of AI.
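The real A2A protocol and Agent Development Kit define their own message formats, discovery, and transports, so don’t read the following as Google’s API. Purely as a toy illustration of the “specialized agents collaborating” idea, here is a self-contained Kotlin sketch in which an orchestrator routes structured messages between two invented agents until one of them answers the user.

```kotlin
// Toy illustration of the multi-agent pattern, NOT the actual A2A protocol
// or Vertex AI Agent Engine API. All names and message fields are invented.
data class AgentMessage(val from: String, val to: String, val task: String, val payload: String)

interface Agent {
    val name: String
    fun handle(msg: AgentMessage): AgentMessage
}

class ResearchAgent : Agent {
    override val name = "research"
    // Pretend to gather facts, then hand off to the writer agent.
    override fun handle(msg: AgentMessage) =
        AgentMessage(name, "writer", "draft", "Key facts about: ${msg.payload}")
}

class WriterAgent : Agent {
    override val name = "writer"
    // Turn the research payload into a user-facing answer.
    override fun handle(msg: AgentMessage) =
        AgentMessage(name, "user", "done", "Summary based on -> ${msg.payload}")
}

// The orchestrator plays the role an agent framework would: it keeps routing
// structured messages between agents until one of them addresses the user.
fun orchestrate(userTask: String): String {
    val agents = listOf(ResearchAgent(), WriterAgent()).associateBy { it.name }
    var msg = AgentMessage("user", "research", "research", userTask)
    while (msg.to != "user") {
        msg = agents.getValue(msg.to).handle(msg)
    }
    return msg.payload
}

fun main() {
    println(orchestrate("Android XR developer ecosystem"))
}
```

Real agent frameworks add the hard parts this toy skips – agent discovery, streaming, tool calls, and error handling – but the message-routing loop is the essential shape.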
So, what’s next?
Google I/O 2025 painted a clear picture: Google is “all-in” on AI and is actively weaving it into every product and platform it operates. Coming out of the conference, the tech world is buzzing with excitement – and a healthy dose of questions. Here are a few parting thoughts on what comes after this AI-packed I/O:
- AI ubiquity vs. usefulness: Google showed AI doing everything from writing emails to providing AR overlays on city streets. The next challenge is making sure these features genuinely help users day-to-day and aren’t just flashy demos. Real-world testing will be key – for instance, can the new AI Mode in Search consistently provide accurate answers that users trust? Will people find having a continuous conversation with Search or Gemini more convenient than traditional app workflows? Over the coming months, as these features hit beta and general availability, we’ll see which ones stick. Google’s sheer scale (billions of users across Search, Android, etc.) means even a subtle AI tweak can impact daily life – so expect a period of adjustment as users learn how to best use these tools. Google will have to iterate rapidly based on feedback, refining the AI’s quality and tone. If done right, in a year’s time using Google’s services might feel like interacting with a smart friend who knows you well – a far cry from the static software of the past.
- Developer and ecosystem adoption: Google dumped a treasure trove of new tech into developers’ laps – but will devs jump on board? The enthusiasm at I/O was high, especially for things like Gemini’s APIs, the new Android AI features, and open-source models. However, developers will weigh factors like costs (e.g. using Gemini via Google Cloud likely incurs fees), complexity, and reliability. Google’s competitors (OpenAI, Azure, AWS, Meta’s open-source AI, etc.) are also vying for developers’ hearts and mindshare. Over the next year, we’ll see a race in the dev community to experiment with these tools. If Google’s offerings prove developer-friendly (e.g. easy SDKs, generous free tiers, good documentation), it could solidify Google’s position as the default platform for building AI-enhanced apps. The launch of AI Ultra and Pro also hints at monetization of APIs – some advanced features might sit behind paywalls for devs and users. Google will need to balance making the platform widely accessible with justifying the value of paid plans. The fact that 7 million developers are already building with Gemini (5× more than last year) is promising, and that number is likely to surge. Key ecosystem players – like big app companies, enterprise IT departments, and hardware manufacturers (Samsung, OEMs for XR) – have decisions to make too, about integrating Google’s tech. I/O 2025 planted seeds; by I/O 2026 we’ll know which of those seeds are flourishing in the wild.
- Hardware execution and competition: Google’s vision of ambient AI won’t materialize without hardware to host it. The previews of AR glasses and Beam were tantalizing, but bringing those to market successfully is a huge undertaking. Apple, Meta, Microsoft, and others are all vying for the AR/VR space. Google’s advantage is the Android ecosystem and its prowess in AI; its challenge is hardware focus and marketing (Pixel devices have grown slowly but remain niche, and past AR projects were shelved). The next year will be telling – can Google and Samsung deliver a compelling XR headset that gains developer traction before Apple’s ecosystem locks in AR developers? Will the trusted tester program for AR glasses yield a product that’s socially acceptable and useful, or will it remain a prototype? The partnerships with eyewear brands show Google learned that tech alone isn’t enough – fashion and comfort matter. Meanwhile, in the phone world, Google will likely integrate more AI on-device (the mention of Gemini Nano in Pixel 9a is a hint of more to come). We might see Tensor chips with beefier AI cores to run these models locally. Competitors like Apple are rumored to be working on their own AI assistants and custom silicon – the AI platform battle is in full swing. Google’s I/O announcements may spur others to accelerate (we might hear about an “Apple GPT” or new AI features in iOS). Ultimately, consumers benefit from this competition, but Google wants to set the narrative that its AI is the most capable and most responsible.
- Ethics, privacy and trust: With great power comes great responsibility – and Google, to its credit, frequently nodded to AI ethics during I/O. Yet, concerns remain. Users will have to trust Google’s AI with more data (e.g. letting Gemini read your Gmail/Drive to personalize answers). Google will need to maintain rigorous privacy controls and transparency so that this doesn’t feel creepy. They reiterated that on-device processing will be used wherever possible (for speed and privacy) and that security is a priority (as seen in Gemini’s improved safeguards). Still, the more human-like these AIs become, the more people might anthropomorphize them or over-rely on them. Google, like others, faces the task of educating users on AI’s limits – e.g. that it can still make mistakes (hallucinations) or that it doesn’t truly “understand” like a person. SynthID and watermarking are a good step toward combating deepfakes, but detection will be an arms race. Policymakers are closely watching these developments too. We might see regulatory guidelines on AI deployments in the EU and elsewhere that Google will have to navigate. In short, Google’s slogan of building “AI that is helpful, not harmful” will be constantly tested in the real world.
Google I/O 2025 will be remembered as the moment Google pivoted from an information company to an AI company. The announcements weren’t just about new gadgets or incremental updates – they outlined a future where AI is the interface for everything. From how we search, to how we communicate, create, and get things done, Google is infusing its products with intelligence that adapts to us. The coming year will show us how well Google can execute on this vast vision. Will Gemini truly deliver next-level assistance that changes how we use our devices? Come next I/O, will those AR glasses be on someone’s face during the keynote? One thing’s for sure: Google is not slowing down. As Sundar Pichai noted, the usage of their AI models has exploded 50× over the last year. The momentum is there, and Google intends to ride this wave so that its AI becomes an indispensable part of our everyday lives. The journey from research to reality is underway – and if I/O 2025 is any indicator, the next few years are going to be a thrilling ride for tech enthusiasts and users alike.
Learn more at my Arabic YouTube channel: https://www.youtube.com/@ePreneurs — and my English channel: https://www.youtube.com/@hossamudinai
Citations
Google I/O 2025: 100 things Google announced
Yesterday at Google I/O, we shared how we’re taking the progress we’re making in AI and applying it across our products. Major upgrades are coming to our Gemini app, our generative AI tools and everything in between — including some truly incredible progress we’re making with our AI models (and new ways you can access them).blog.googleGoogle I/O 2025: 100 things Google announced99. As Sundar shared in his opening keynote, people are adopting AI more than ever before. As one example: This time last year, we were processing 9.7 trillion tokens a month across our products and APIs. Now, we’re processing over 480 trillion — 50 times more.developers.googleblog.comWhat you should know from the Google I/O 2025 Developer keynote – Google Developers BlogBuilding experiences with generative AI: Generative AI enhances apps by making them intelligent, personalized, and agentic. We announced new ML Kit GenAI APIs using Gemini Nano for common on-device tasks. We showcased an AI sample app, Androidify, which lets you create an Android robot of yourself using a selfie. Discover how Androidify is built, and read the developer documentation to get started.blog.googleGoogle I/O 2025: 100 things Google announcedbetween multiple agents.android-developers.googleblog.comAndroid Developers Blog: I/O 2025: What’s new in Google PlayAt this year’s Google I/O, we unveiled the latest ways we’re empowering your success with new tools that provide robust testing and actionable insights. We also showcased how we’re continuing to build a content-rich Play Store that fosters repeat engagement alongside new subscription capabilities that streamline checkout and reduce churn.blog.googleGoogle I/O 2025: 100 things Google announced59. We took a look at the first Android XR device coming later this year: Samsung’s Project Moohan. This headset will offer immersive experiences on an infinite screen.blog.googleGoogle I/O 2025: 100 things Google announced65. A few years ago, we introduced Project Starline, a research project that enabled remote conversations that used 3D video technology to make it feel like two people were in the same room. Now, it’s evolving into a new platform called Google Beam.blog.googleGoogle I/O 2025: 100 things Google announced83. Flutter 3.32 has new features designed to accelerate development and enhance apps.blog.googleGoogle I/O 2025: 100 things Google announced88. Firebase announced new features and tools to help developers build AI-powered apps more easily, including updates to the recently launched Firebase Studio and Firebase AI Logic, which enables developers to integrate AI into their apps faster.blog.googleGoogle I/O 2025: 100 things Google announced74. Gemma 3n is our latest fast and efficient open multimodal model that’s engineered to run smoothly on your phones, laptops, and tablets. It handles audio, text, image, and video. The initial rollout is underway on Google AI Studio and Google Cloud with plans to expand to open- source tools in the coming weeks.blog.googleGoogle I/O 2025: 100 things Google announced77. SignGemma is an upcoming open model that translates sign language into spoken language text, (best at American Sign Language to English), enabling developers to create new apps and integrations for Deaf and Hard of Hearing users.blog.googleGoogle I/O 2025: 100 things Google announced85. Try it now! 
Developer Preview for Wear OS 6 introduces Material 3 Expressive and updated developer tools for Watch Faces, richer media controls and the Credential Manager for authentication.blog.googleGoogle I/O 2025: 100 things Google announced73. Try it now! Jules is a parallel, asynchronous agent for your GitHub repositories to help you improve and understand your codebase. It is now open to all developers in beta. With Jules you can delegate multiple backlog items and coding tasks at the same time, and even get an audio overview of all the recent updates to your codebase.blog.googleGoogle I/O 2025: 100 things Google announced1. Try it now! AI Mode is starting to roll out for everyone in the U.S. right on Search. But if you want to get access right away, opt in via Labs. 2. For questions where you want an even more thorough response, we’re bringing deep research capabilities into AI Mode in Labs, with Deep Search. 3. Live capabilities from Project Astra are coming to AI Mode in Labs. With Search Live, coming this summer, you can talk back-and-forth with Search about what you see in real-time, using your camera.blog.googleGoogle I/O 2025: 100 things Google announced6. We’re introducing a new AI Mode shopping experience that brings together advanced AI capabilities with our Shopping Graph to help you browse for inspiration, think through considerations and find the right product for you. 7. Try it now! You can 69 just by uploading a photo of yourself. Our “try on” experiment is rolling out to Search Labs users in the U.S. starting today — opt in to try it out now. 8. We also showed off a new agentic checkout to help you buy at a price that fits your budget with ease. Just tap “track price” on any product listing,blog.googleGoogle I/O 2025: 100 things Google announced90. Gmail is getting new, personalized smart replies that incorporate your own context and tone. They’ll pull from your past emails and files in your Drive to draft a response, while also matching your typical tone so your replies sound like you. Try it yourself later this year.blog.googleGoogle I/O 2025: 100 things Google announced68. We announced speech translation, which is available now in Google Meet. This translation feature not only happens in near real-time, thanks to Google AI, but it’s able to maintain the quality, tone, and expressiveness of someone’s voice. The free-flowing conversation enables people to understand each other and feel connected, with no language barrier.blog.googleGoogle I/O 2025: 100 things Google announced21. With our latest update, Gemini 2.5 Pro is now the world-leading model across the WebDev Arena and LMArena leaderboards.blog.googleGoogle I/O 2025: 100 things Google announced25. 2.5 Pro will get even better with Deep Think, an experimental, enhanced reasoning mode for highly-complex math and coding.blog.googleGoogle I/O 2025: 100 things Google announced26. We’re bringing new capabilities to both 2.5 Pro and 90, including advanced security safeguards. Our new security approach helped significantly increase Gemini’s protection rate against indirect prompt injection attacks during tool use, making Gemini 2.5 our most secure model family to date.blog.googleGoogle I/O 2025: 100 things Google announced12. Try it now! Now Gemini is an even better study partner with our new interactive quiz feature. Simply ask Gemini to “create a practice quiz on…” and Gemini will generate questions.blog.googleGoogle I/O 2025: 100 things Google announced13. 
In the coming weeks we’ll also make Gemini Live more personal by connecting some of your favorite Google apps so you can take actions mid- conversation, like adding something to your calendar or asking for more details about a location. We’re starting with Google Maps, Calendar, Tasks and Keep, with more app connections coming later.blog.googleGoogle I/O 2025: 100 things Google announced14. Try it now! Starting today, camera and screen sharing capabilities for Gemini Live are beginning to roll out beyond Android to Gemini app users on iOS.blog.googleGoogle I/O 2025: 100 things Google announcedand customize the sources Deep Research pulls from, like academic literature.blog.googleGoogle I/O 2025: News and announcementsAt Google I/O 2025, our annual developer conference, we shared how we’re using our cutting-edge technology to build products that are intelligent and personalized — and that can take action for you.blog.googleGoogle I/O 2025: News and announcementsGoogle DeepMind 63blog.googleGoogle I/O 2025: 100 things Google announced54. We’re working to extend our best multimodal foundation model, Gemini 2.5 Pro, to become a “world model” that can make plans and imagine new experiences by understanding and simulating aspects of the world, just as the brain does.blog.googleGoogle I/O 2025: 100 things Google announcedcomplementing the skills and tools they already use.blog.googleGoogle I/O 2025: 100 things Google announced56. And as part of our Project Astra research, we partnered with the visual interpreting service Aira to build a prototype that assists members of the blind and low-vision community with everyday tasks, complementing the skills and tools they already use.blog.googleGoogle I/O 2025: 100 things Google announced55. Updates to Project Astra, our research prototype that explores the capabilities of a universal AI assistant, include more natural voice output with native audio, improved memory and computer control. Over time we’ll bring these new capabilities to Gemini Live and new experiences in Search, Live API for devs and new form factors like Android XR glasses.blog.googleGoogle I/O 2025: 100 things Google announced60. And we shared a sneak peek at how Gemini will work on glasses with Android XR in real-world scenarios, including messaging friends, making appointments, asking for turn-by-turn directions, taking photos and more.blog.googleGoogle I/O 2025: 100 things Google announced61. We even demoed live language translation between two people, showing the potential for these glasses to break down language barriers.blog.googleGoogle I/O 2025: 100 things Google announced36. Try it now! We announced Veo 3, which lets you generate video with audio and is now available in the Gemini app for Google AI Ultra subscribers in the U.S., as well as in Vertex AI.blog.googleGoogle I/O 2025: 100 things Google announced39. Try it now! Imagen 4 is our latest Imagen model, and it has remarkable clarity in fine details like skin, fur and intricate textures, and excels in both photorealistic and abstract styles. Imagen 4 is available today in the Gemini app.blog.googleGoogle I/O 2025: 100 things Google announced43. It is also significantly better at spelling and typography, making it easier to create your own greeting cards, posters and even comics.blog.googleGoogle I/O 2025: 100 things Google announced41. Soon, Imagen 4 will be available in a Fast version that’s up to 10x faster than Imagen 3.blog.googleGoogle I/O 2025: 100 things Google announced44. Try it now! 
- Flow, an AI filmmaking tool: Flow uses Google DeepMind's best-in-class models to let you weave cinematic films with control over characters, scenes and styles, so more people than ever can create visually striking movies with AI. Google DeepMind also announced a partnership with Primordial Soup, a new venture dedicated to storytelling innovation founded by director Darren Aronofsky; Primordial Soup is producing three short films using Google DeepMind's generative AI models, tools and capabilities, including Veo.
- Google AI Ultra: The new Google AI Ultra subscription offers the highest usage limits and access to Google's most capable models and premium features, plus 30 TB of storage and YouTube Premium. It is available in the U.S. first, with more countries coming soon, and new subscribers get 50% off their first three months.
- Generative AI in Android apps: Generative AI makes apps intelligent, personalized and agentic. Google announced new ML Kit GenAI APIs that use Gemini Nano for common on-device tasks, and showcased Androidify, an AI sample app that turns a selfie into an Android robot version of yourself; Google has published how Androidify is built, along with developer documentation to get started.
- Android XR: Android XR is the first Android platform built in the Gemini era, and it powers an ecosystem of headsets, glasses and everything in between. Headsets such as Samsung's Project Moohan, coming later this year, offer immersive experiences on an infinite screen, and Gemini makes Android XR headsets easier to use and more powerful.
- Glasses built for Gemini: Equipped with a camera, microphones and speakers, Android XR glasses work in tandem with your phone, giving you access to your apps without ever reaching into your pocket, and an optional in-lens display privately shows helpful information right when you need it, including real-time translations that act like subtitles for the real world. Paired with Gemini, the glasses see and hear what you do, so they understand your context, remember what is important to you and can help you throughout your day. Android XR prototype glasses are now in the hands of trusted testers, who are helping Google build a truly assistive product in a way that respects privacy for you and those around you.
- Building glasses you'll want to wear: Google is advancing its partnership with Samsung beyond headsets to extend Android XR to glasses, creating a software and reference hardware platform that will enable the ecosystem to make great glasses; developers will be able to start building for this platform later this year. Google is also partnering with eyewear brands, starting with Gentle Monster and Warby Parker, to create Android XR glasses you will want to wear all day.
- Adaptive apps across 500 million devices: Mobile Android apps form the foundation across phones, foldables, tablets and ChromeOS, and this year Google is helping developers bring them to cars and Android XR as well, with Material 3 Expressive to help apps shine.
- Google Play developer tools: Play launched dedicated overview pages for two developer objectives, Test and release and Monitor and improve. These pages bring together objective-related metrics, relevant features and a "Take action" section with contextual, dynamic advice; overview pages for Grow and Monetize are coming soon. Play is also fixing a long-standing pain point: historically, a release at 100% rollout meant there was no turning back, leaving users stuck with a flawed version until a new update rolled out. Soon you will be able to halt fully-live releases through the Play Console and the Publishing API, stopping distribution of problematic versions to new users.
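The halt control will surface in the Play Console and the Publishing API. As a rough sketch of how that API is driven today, the snippet below uses the existing androidpublisher v3 edits-and-tracks flow, where a staged rollout can already be set to a "halted" status; how the new halt for fully rolled-out releases will be expressed in the API is an assumption here, and the package name and version code are hypothetical.

```python
# Sketch: halting a release through the Google Play Developer (Publishing) API.
# Uses the existing androidpublisher v3 edits/tracks flow, where an in-progress
# staged rollout can be set to status "halted"; the newly announced halt for
# fully rolled-out releases is expected to go through the same surface, but the
# exact request shape is an assumption.
from google.oauth2 import service_account
from googleapiclient.discovery import build

PACKAGE = "com.example.app"  # hypothetical package name

creds = service_account.Credentials.from_service_account_file(
    "play-publisher.json",
    scopes=["https://www.googleapis.com/auth/androidpublisher"],
)
play = build("androidpublisher", "v3", credentials=creds)

edit = play.edits().insert(packageName=PACKAGE, body={}).execute()
play.edits().tracks().update(
    packageName=PACKAGE,
    editId=edit["id"],
    track="production",
    body={
        "track": "production",
        "releases": [{"versionCodes": ["42"], "status": "halted"}],
    },
).execute()
play.edits().commit(packageName=PACKAGE, editId=edit["id"]).execute()
```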
- More from Google Play: Beyond release controls, Google unveiled new tools that provide robust testing and actionable insights, a more content-rich Play Store that fosters repeat engagement, and new subscription capabilities that streamline checkout and reduce churn.
- Pixel 9a: The latest A-series phone, available in Obsidian, Porcelain, Peony and Iris, packs Google AI smarts at an unbeatable value. It has the best camera under $500, with an upgraded dual rear system that pairs a 13MP ultrawide with a 48MP main camera, and Macro Focus comes to the A-series for the first time. AI photography features include Add Me, which combines two group photos into one so everyone, even the photographer, makes it into the shot; Best Take, which blends facial expressions from a series of photos into one perfect group photo; and Magic Editor with Auto Frame, which suggests the best crop, can expand your image to capture more of the scene, and lets you Reimagine photos, for example by adding fall leaves. The Pixel A-series is the only smartphone line featuring Gemini Nano at its under-$500 price point, and Gemini on Pixel 9a works with Google apps like Maps, Calendar and YouTube to make multitasking easy. With over 30 hours of battery life, and up to 100 hours with Extreme Battery Saver, it has the best battery life of any Pixel available today, plus seven years of OS updates, security updates and Pixel Drops, and upgraded IP68 water and dust resistance that makes it the most durable A-series phone yet.
- Project Moohan: The Android XR platform that Google is building with Samsung has gained support from hundreds of software developers since the project was announced last year. At I/O, the first XR device based on the platform, Samsung's Project Moohan, made an appearance. Similar in concept to Apple's Vision Pro, it runs on the Snapdragon XR2 Plus Gen 2 chip, works standalone without being tethered to a PC or other device, and goes on sale later this year.
- Google Beam, communication in near real time: Project Starline, Google's 3D video technology that makes it feel as if two people are in the same room, is evolving into a new platform called Google Beam. The first Google Beam products, from HP, will be shown at InfoComm in a few weeks, with devices coming to businesses and organizations worldwide.
- Gemini Code Assist: Gemini Code Assist for individuals and Gemini Code Assist for GitHub are now generally available, and developers can get started in less than a minute. Gemini 2.5 powers both the free and paid versions, bringing advanced coding performance and helping developers excel at tasks like creating visually compelling web apps, along with code transformation and editing.
- Jules, an asynchronous coding agent: Jules is a parallel, asynchronous agent that works directly with your GitHub repositories, and it is now open to all developers in public beta. You can delegate multiple backlog items and coding tasks at the same time, such as version upgrades, writing tests, updating features and bug fixes, and even get an audio overview of all the recent updates to your codebase. Jules spins up a Cloud VM, makes coordinated edits across your codebase, runs tests, and lets you open a pull request from its branch when you are happy with the code.
- Stitch for UI design: Stitch, a new experiment for UI design, lets you export your creations to CSS/HTML or Figma to keep working; you can try it now.
- Flutter 3.32: The release adds new features designed to accelerate development and enhance apps. The Google Cloud team, for example, took its existing iOS and Android apps and switched to Flutter to build a range of new features.
- Gemini in Android Studio: Google's AI-powered coding companion helps developers at every stage of the development lifecycle. Journeys, now in preview, lets developers test critical user journeys by describing test steps in natural language and having Gemini write and execute the end-to-end tests. The Version Upgrade Agent, also previewed, helps update dependencies.
- Native audio and text-to-speech: Google is releasing new text-to-speech previews for Gemini 2.5 Pro and 2.5 Flash with first-of-its-kind support for multiple speakers, enabling two-voice output via native audio. Like native audio dialogue, the text-to-speech is expressive and can capture subtle nuances such as whispers; it works in over 24 languages and switches between them seamlessly.
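The multi-speaker text-to-speech preview is also exposed through the google-genai SDK. The sketch below is illustrative only: the preview model id, the voice names and the raw 24 kHz, 16-bit PCM output format are assumptions to verify against the current Gemini API documentation.

```python
# Sketch: two-speaker text-to-speech with a Gemini 2.5 TTS preview model via the
# google-genai SDK. Model id, voice names and the 24 kHz 16-bit PCM output format
# are assumptions for illustration.
import os
import wave

from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

script = "Host: Welcome back to the show!\nGuest: Thanks, great to be here."

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-tts",  # assumed preview id
    contents=script,
    config=types.GenerateContentConfig(
        response_modalities=["AUDIO"],
        speech_config=types.SpeechConfig(
            multi_speaker_voice_config=types.MultiSpeakerVoiceConfig(
                speaker_voice_configs=[
                    types.SpeakerVoiceConfig(
                        speaker="Host",
                        voice_config=types.VoiceConfig(
                            prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name="Kore")
                        ),
                    ),
                    types.SpeakerVoiceConfig(
                        speaker="Guest",
                        voice_config=types.VoiceConfig(
                            prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name="Puck")
                        ),
                    ),
                ]
            )
        ),
    ),
)

# Write the returned PCM audio to a playable WAV file.
pcm = response.candidates[0].content.parts[0].inline_data.data
with wave.open("dialogue.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(24000)  # 24 kHz
    f.writeframes(pcm)
```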
- Gemma 3n Preview: Gemma 3n is Google's latest open model for accessible AI: powerful, efficient and mobile-first, with unique flexibility, privacy and expanded multimodal capabilities on mobile devices. Google also previewed SignGemma, an upcoming Gemma model that translates sign language into spoken-language text for Deaf and Hard of Hearing users.
- NotebookLM apps: The NotebookLM app is now available on the Play Store and App Store, so you can take Audio Overviews on the go, and Audio Overviews are becoming more flexible, whether you prefer a quick overview or a deeper exploration.
- SynthID Detector: To make it easier for people and organizations to detect AI-generated content, Google announced SynthID Detector, a verification portal that quickly and efficiently identifies content watermarked with SynthID. Since launch, SynthID has already watermarked over 10 billion pieces of content.
- More video controls: Google's video generation tools are also gaining new camera controls, outpainting, and object add and remove.
- Firebase and agent tooling: Google launched Firebase Studio and Firebase AI Logic, which let developers integrate AI into their apps faster, and shared updates to the Agent Development Kit (ADK), the Vertex AI Agent Engine, and the Agent2Agent (A2A) protocol, which enables interactions between multiple agents.
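The Agent Development Kit is an open-source Python framework for building agents like these. The sketch below follows the general shape of its quickstart, but treat the package, class and constructor arguments as assumptions to confirm against the current ADK docs; the tool and model id are hypothetical examples.

```python
# Sketch of an Agent Development Kit (ADK) agent with one custom tool.
# Assumes `pip install google-adk`; constructor arguments follow the ADK
# quickstart pattern but should be checked against the current documentation.
from google.adk.agents import Agent


def get_io_session(topic: str) -> dict:
    """Toy tool: look up a (hard-coded) Google I/O 2025 session for a topic."""
    sessions = {
        "android xr": "Building immersive apps for Android XR",
        "gemini": "What's new in the Gemini API",
    }
    return {"topic": topic, "session": sessions.get(topic.lower(), "No session found")}


root_agent = Agent(
    name="io_schedule_agent",
    model="gemini-2.5-flash",  # assumed model id
    description="Answers questions about Google I/O 2025 sessions.",
    instruction="Use the get_io_session tool to look up sessions before answering.",
    tools=[get_io_session],
)
```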
- Sparkify: The new Labs experiment Sparkify turns your questions into short animated videos, made possible by the latest Gemini and Veo models. These capabilities will come to Google products later this year, but in the meantime you can sign up for the waitlist for a chance to try it out.
- Developer momentum: Over 7 million developers are now building with Gemini, five times more than this time last year.