Categories
AI Amazon Apple Comparison Gemini iOS OpenAI

Are Digital Assistants Dying?

“Alexa, change audio output to my Bluetooth speaker.” You probably know what the next response would be: “Sorry, I didn’t get that…” Well, that era might just be over, as major players like Amazon, Google, and Apple are now racing to integrate AI models into their digital assistants. So, what does this mean for your Echo Dot and your wallet?

The AI Push

So yeah, the tech giants are embedding powerful AI models into their platforms. Here’s how they’re doing it:

  • Amazon is integrating Claude from Anthropic into Alexa, aiming to make Alexa smarter and more conversational.
  • Google is adding Gemini, their proprietary AI model, to Google Assistant, promising a more nuanced, context-aware experience.
  • Apple is bringing ChatGPT-based technology to Siri, which could make Siri more responsive and versatile. This is my favourite one, not because it’s Apple, but because it’s GPT, duh.

These integrations promise to make digital assistants more than just voice-command tools. They’re being positioned as virtual AIs that can understand context, recall previous interactions, and provide more in-depth responses, and that’s something we actually want (well, something I want). However, there’s a big drawback: these AI models demand significantly higher processing power than the legacy assistants we’ve been using for years.

Why Old Devices Can’t Keep Up

Most of us are familiar with Amazon’s Echo Dot, Google’s Nest Mini, and Apple’s HomePod Mini: compact, relatively affordable devices designed to do simple tasks. These legacy devices were never intended to handle the heavy lifting of AI-driven language models. The hardware inside a $50 Echo Dot, for example, simply doesn’t have the processing capability to run a model like Claude, Gemini, or ChatGPT natively.

To bring these AI models to existing devices (which, in my opinion, is next to impossible), companies face two major options:

  1. Release New Hardware with Enhanced Processing Power: Well, this isn’t actually bringing the LLMs to existing devices, it’s building newer versions of those devices, but you get where I’m going with this, right? New versions with more powerful processors would allow AI models to run locally. However, this would drive prices up significantly. The Echo Pop has always been a budget-friendly way to add Alexa to your home, but a new Echo Pop with AI built in would be a different beast altogether, likely costing much more due to the added processing power it would need.
  2. Offer Cloud-Based AI Services with a Subscription: Alternatively, these companies could keep the hardware simple and run the AI models in the cloud, allowing even low-power devices to tap into advanced AI capabilities without needing much processing power on the device itself. You’d simply get an update on your Echo Pop. Great bargain, right? But at what cost? This route raises significant concerns (see the rough sketch after this list for how such a setup might work):
    • Privacy and Security Risks: Cloud-based solutions require data to be transmitted and processed externally, raising potential privacy issues. Many users are uneasy about sending potentially sensitive conversations over the internet to be processed on third-party servers. People are already concerned about the models running on their “AI” phones, which is why manufacturers have limited most of these fancy AI features to their highest-performing models, the ones that can run them locally and so ease those concerns. With these digital assistant devices, it’s a whole different story.
    • Subscription Costs: To cover the cost of running powerful AI models in the cloud, companies are likely to introduce subscription plans. This would add yet another monthly fee for users who may already be feeling subscription fatigue, especially as so many services now rely on recurring fees.
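To make the trade-off concrete, here’s a minimal sketch of what the cloud route could look like from the device’s side. Everything here is an assumption of mine (the endpoint, the token, the response shape); neither Amazon nor Google has published such an API. The point is that the device does almost nothing itself, which is exactly why the privacy and subscription questions above matter:

```python
# A minimal sketch of the cloud-based route described above.
# The endpoint, auth token, and response shape are all hypothetical.
import requests

def ask_cloud_assistant(audio_bytes: bytes) -> str:
    """Send recorded audio to a cloud service and return the reply text.

    The speaker only records and plays audio; the heavy LLM inference
    happens server-side, which is why a $50 device could still keep up.
    """
    response = requests.post(
        "https://assistant.example.com/v1/query",  # hypothetical endpoint
        headers={"Authorization": "Bearer YOUR_SUBSCRIPTION_TOKEN"},
        files={"audio": ("query.wav", audio_bytes, "audio/wav")},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["reply_text"]
```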

Why Legacy Assistants Are Falling Behind

One of the more subtle effects of this AI hardware dilemma is the growing distinction between legacy digital assistants and the next-gen, super-smart LLMs. People accustomed to Alexa’s simple skills or Google Assistant’s straightforward commands might quickly feel underwhelmed by the limitations of the older models as the new ones become capable of nuanced, context-aware interactions that feel more personal. You know, I’d never want to go back to a legacy assistant once I’m able to have a full-on convo with my assistant about how my DMs are dry across all my socials. That’s just a whole different experience.

Despite all the promise, the AI models aren’t quite there yet. From my own experience, Gemini, Google’s AI model, has yet to fully match the practical, everyday usability of Google Assistant. It’s still in its early stages, so while it may be able to chat about a broad range of topics, it sometimes struggles with tasks that Assistant handles smoothly; it can’t even skip to the next song if my phone’s screen is switched off. In other words, the switch to a fully AI-driven assistant isn’t seamless, which might encourage users to hang onto their legacy assistants for now, even if they’re not as fancy. I’m the *users*, by the way.

Why the Price and Privacy Trade-Off Could Slow Adoption

With these new fancy AI-powered models, there’s likely to be a split in the market:

  • Budget-conscious users may stick with legacy devices or forego digital assistants altogether if prices rise significantly.
  • Privacy-minded users might avoid cloud-based AI options due to security concerns, even if that means missing out on advanced capabilities.
  • Tech enthusiasts willing to pay for the latest and greatest will have options: buy more powerful (and expensive) devices, or sign up for subscriptions to access cloud-based services. We’ve seen people buying the Vision Pro, so this is nothing new for enthusiasts.

This could lead to a divided ecosystem, where advanced, AI-capable assistants coexist with simpler, budget-friendly models, and there’s nothing wrong with that; it’s exactly what the smartphone space has been like since, well, the beginning. But unlike with smartphones, it could be a tricky balancing act for the tech companies behind these assistants. Pricing the new, advanced models too high could slow adoption, while heavy reliance on subscriptions could alienate users who are already juggling multiple monthly fees.

Conclusion

So as the top tech players push forward with integrating advanced AI into their digital assistants, we as users face a complicated choice: stick with legacy models that are cheaper but limited in functionality, or pay more, either upfront for new hardware or through monthly subscriptions, to access the latest AI-powered versions. By the way, this is all just my speculation about how the market might look in the coming years (or months, maybe), not how it’s supposed to be.


Want more tech insights? Don’t miss out—subscribe to the Tino Talks Tech newsletter!

Categories
AI OpenAI Review Software

This GPT has a PhD: Hear Me Out!

AI just got smarter, and I mean really smart. OpenAI’s new o1 series, released in September 2024, brings a new level of reasoning to the table. This model was designed to slow down and think before it responds, making it a first of its kind when it comes to handling tough problems in fields like math, coding, and science. It’s even being compared to a PhD student because it can tackle incredibly complex tasks with ease. But unless you’re subscribed to ChatGPT Plus or Team, you won’t be able to experience this impressive jump in AI tech just yet.

So, what makes this model special? I’ll tell you a little story. After hearing all the hype about the model’s reasoning capabilities, I decided to test it out myself. I asked a simple question: “How many R’s are in ‘strawberry’?” I had done this with other models before, and they often tripped up on such simple tasks. o1, however, nailed it on the first try: 3 R’s, without hesitation. It was the first AI model I’ve used that got it right the first time. That’s when I knew OpenAI wasn’t kidding about o1’s problem-solving skills. Oh, and it does more than just count the R’s in strawberry, lol.
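For context, this question is trivial for a couple of lines of code but has historically tripped up language models, which see text as chunky tokens rather than individual letters. A quick sanity check in Python (my own illustration, obviously not how o1 works internally):

```python
# Counting letters directly: trivial for code, surprisingly hard for
# token-based language models, which don't "see" individual characters.
word = "strawberry"
print(word.count("r"))  # prints 3
```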

It’s Smarter

The key feature that sets o1 apart from earlier models like GPT-4o is its ability to think longer before responding. Unlike GPT-4o, which prioritizes fluency and speed, o1 has been trained to slow down and evaluate problems carefully. This approach is essential for complex tasks that require deep reasoning, such as solving high-level math equations, debugging code, or even understanding advanced chemistry problems. OpenAI claims this model’s problem-solving abilities mirror those of PhD students, especially in disciplines like physics and biology, so yeah, I think you see where the title is coming from.

For example, on the International Mathematics Olympiad (IMO) qualifying exam, GPT-4o managed to solve only 13% of the problems. In contrast, the o1 model correctly answered an impressive 83% of the same problems. That’s way more than my best marks in school, probably, and it speaks volumes about the model’s performance in challenging technical tasks.

However, there’s a bit of a trade-off. The model takes longer to generate responses because it’s reasoning through the task. This won’t be a problem if you’re tackling complex challenges, but if you need something quick and less precise, GPT-4o might still be your go-to.
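If you’re on a paid plan and curious to try this from code, here’s a minimal sketch using OpenAI’s Python SDK. Treat it as an assumption-laden example rather than gospel: “o1-preview” was the launch-era model name, availability varies by account, and at launch the o1 models reportedly accepted only plain user messages (no system prompt, no temperature setting):

```python
# Minimal sketch: asking an o1-series model a reasoning-heavy question.
# Assumes `pip install openai` and an OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # launch-era name; may differ for your account
    messages=[
        # o1 at launch accepted only user messages, so no system prompt here
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)
print(response.choices[0].message.content)
```

Run it and you’ll notice the pause mentioned above: the model spends a while reasoning before the answer comes back.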

Real-World Use Cases

The OpenAI o1 series shines in STEM (science, technology, engineering, math) fields. If you’re a developer, data scientist, or engineer, you’ll love its ability to reason through intricate problems. OpenAI reported that o1 reached the 89th percentile in coding contests on Codeforces, a competitive programming platform. Imagine the possibilities: if you’re stuck on a difficult algorithm or need to debug a large chunk of code, o1 can help you sort through it with its powerful reasoning capabilities.

Beyond coding, its performance in chemistry and biology means it could assist researchers in analyzing complex datasets or even devising new experiment strategies. It’s designed to be a partner for those in technical roles who need more than just casual conversations or superficial responses from their AI.

That said, it’s worth mentioning that GPT-4o might still have the edge when it comes to creative writing or more general tasks. The o1 model sacrifices some writing fluidity in favour of technical proficiency. So, depending on what you need, one model may be more suited to you than the other. This also implies that this model wasn’t made for everyone, unlike GPT-4o.

Want more AI insights like this? Don’t forget to subscribe to the Tino Talks Tech newsletter or allow notifications so you never miss out!

Categories
AI Gemini Google

Google’s GenAI is Too Impressive

So, I was scrolling through Instagram the other day and came across some mind-blowing images showing how powerful Google’s Generative AI is on the new Pixel 9. These images were posted by The Verge, and I have to say, they stopped me in my tracks. They made it clear that Google’s Magic Editor is next-level stuff, and I couldn’t wait to dig deeper.

Take a look at the side-by-side image in the thumbnail. On the left, you’ve got an ordinary, quiet street, nothing exciting. But on the right? It’s been completely transformed into the aftermath of a dramatic car crash, complete with wrecked vehicles and scattered debris. This transformation was done with just a few taps using Google’s Magic Editor on the Pixel 9. It’s crazy how much of a difference this AI-powered tool can make, and it has the potential to change the way we think about editing photos.

What is Generative AI?

First things first, let’s quickly break down what Generative AI is. In simple terms, Generative AI is a type of artificial intelligence that can create new content based on patterns it’s learned from existing data. This includes everything from text to art, and of course, photos. In the case of Google’s Pixel 9, it means you can take a regular image and transform it in all sorts of ways, like adding objects, changing the lighting, or, as in the thumbnail, turning an empty street into a scene from a movie. The AI fills in the blanks in a way that looks natural and believable.

Now, if you don’t understand it, let’s put it this way. Imagine you’re drawing, but instead of doing it yourself, you tell an incredibly talented robot what you want, and it just gets it right every time. You ask for a car crash in a peaceful scene? Boom, the robot adds a perfect one, and it looks like it was always there. *chuckles* I’m great at explaining 😉
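And for the more code-minded readers, here’s a toy illustration of the “learn patterns, then generate” idea: a character-level Markov chain that produces new text from the statistics of a sample. It’s thousands of times simpler than whatever powers Magic Editor, and entirely my own example, but the core loop of sampling new content from learned patterns is the same:

```python
# Toy "generative" model: a character-level Markov chain.
# Vastly simpler than Google's image models, but the core idea matches:
# learn patterns from existing data, then sample new content from them.
import random
from collections import defaultdict

def train(text: str, order: int = 2) -> dict:
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model: dict, length: int = 60) -> str:
    """Sample new text one character at a time from the learned patterns."""
    context = random.choice(list(model))
    out = context
    for _ in range(length):
        followers = model.get(context)
        if not followers:
            break  # dead end: this context was only seen at the sample's end
        out += random.choice(followers)
        context = out[-2:]  # keep the last `order` characters as context
    return out

sample = "the quick brown fox jumps over the lazy dog on the quiet street"
print(generate(train(sample)))
```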

Why Google’s Pixel 9 GenAI is a Game-Changer

I’ve seen and tried my fair share of mobile built-in editing tools over the years, but what Google is doing with the Pixel 9’s Magic Editor takes things to a whole new level. From the examples I found on Instagram, it’s clear that this isn’t just about small tweaks to brightness or filters. Google’s Generative AI can completely reshape your photos, adding new objects or even reimagining entire environments, all in a few quick steps.

Take the thumbnail again: in the original shot, it’s just a calm, quiet street. But the edited version tells a whole different story, turning that peaceful scene into something intense, the aftermath of a crash. And the most mind-blowing part? It’s not just a quick Photoshop job; the AI understands context. It knew how to position the vehicles and add debris in a way that matches the scene perfectly. You could easily mistake the edited version for a real photo of a car accident.

Why Google’s Magic Editor is So Impressive

Here’s a breakdown of what makes Google’s Magic Editor such a banger:

  1. AI-Powered Scene Editing: The Pixel 9’s Generative AI goes beyond just touching up photos. You can change the story of an image, like in the crash scene example. The AI doesn’t just plop objects in your photo, it seamlessly integrates them into the environment, so it feels like they were always there.
  2. Add or Remove Objects with Ease: One of the coolest features is the ability to add or remove objects at will. Let’s say you want to erase a random person from your vacation shot or add something like a rainbow to your landscape pic. Done. And when you remove something, the AI fills in the background so naturally that you’d never know anything was missing.
  3. Quick Edits, Big Changes: Traditional editing software can take forever to get right, especially if you’re not a pro. But with the Magic Editor, it seems like you can make massive changes in a matter of seconds. Just look at the street scene example, what would take hours in Photoshop was done in a few taps with AI.
  4. Super Easy to Use: The best part? You don’t need to be a tech guru to make these changes. From what I’ve seen in the demos and examples, Google’s Magic Editor is intuitive and user-friendly. It’s perfect for people like me who love to edit photos but don’t have the time or skills to mess with complicated software.
  5. Natural, Realistic Results: One thing that stands out is how realistic the edits look. Sometimes, with other tools, you can tell when a photo’s been altered: the lighting’s off, or the objects don’t quite blend in. But Google’s Generative AI gets it right. The crash scene looks believable, as if the wrecked car and debris had really been there.

The Future of GenAI

While photo editing is the first thing that comes to mind when you think about Generative AI, its potential goes way beyond that. Google’s Generative AI could soon be applied to video editing, content creation, and even productivity tools. Imagine being able to edit videos, create presentations, or generate marketing visuals all from your phone, without needing to rely on heavy software. The possibilities are huge.

If you want to stay updated on the latest in tech, be sure to subscribe to the Tino Talks Tech newsletter and enable notifications. Here are more examples by The Verge:

Categories
AI Android Gemini Google

Google’s Gemini Approach is Wild

Google’s Gemini AI rolled out some impressive updates at yesterday’s Made by Google event, and they’re hard to ignore. One of the headline features is Gemini Live, a version designed to provide natural, hands-free conversations with your device (nothing entirely new, though). It’s meant to make your digital interactions feel more like talking to a friend, with multiple voice options, just like the Assistant. Gemini is also now more deeply integrated into Android, offering smarter, context-aware assistance and faster response times across Google apps like Tasks, Keep, and YouTube Music.

But before you get too excited like I did while watching the keynote, here’s a bit of a reality check: to enjoy Gemini Live, you’ll need to subscribe to Gemini Advanced, the paid version of Gemini. For those of us who’ve been enjoying Google Assistant for free, this feels a bit like being told that the air we breathe is now subscription-based. Oh, and during the live demo, Gemini failed twice before they had to switch devices, and by the way, they were using Galaxy S24 Ultras, so this doesn’t look good for either Google or Samsung.

So, Google, here’s the deal: if you keep putting basic features behind a paywall, I might just switch to the other side.

P.S. Subscribe to the Tino Talks Tech newsletter for more tech talk, especially if you want to see whether I actually leap to the other side (I’m not, tho! 😅).

Categories
AI Comparison Gemini Google

Google Makes You Choose Between Assistant and Gemini

So once again we have Google introducing something new, something shiny, and it’s called Gemini. But there’s a catch: instead of rolling this out as an upgrade to our beloved Google Assistant, they’ve decided to make it a separate thing. Now we’ve got a choice to make, and frankly, it’s a hard one.

Google’s Split Decision

I’ve been using Google Assistant since birth. Well, maybe not, but you get the point. It wakes me up, tells me the weather, and even handles the odd Google search or two. It’s even built into my headphones. It’s like the reliable friend who’s always there when you need them. Now we have Gemini, the new kid on the block, promising smarter conversations and a bit more pizzazz when handling complex tasks.

Here’s my situation:

  • Google Assistant: It’s like an old friend. Reliable, predictable, and deeply woven into all the Google stuff I use every day. It’s comfy and familiar.
  • Gemini: This one’s meant to be cutting-edge, with better chat abilities. Cool, yes, but also a tad intimidating? Oh, and there are lots of compromises to make here; don’t get me started on those.

Why Not Just One Super Assistant?

Honestly, I can’t help but wonder why Google didn’t just merge Gemini into Google Assistant. Wouldn’t it be smoother to have one super assistant that gets better over time rather than choosing between old reliable and the new experiment? Here’s why this split feels a bit off:

  1. No More Juggling: Seriously, who wants to switch between two assistants? I’d love one assistant that just gets all the new updates and keeps getting smarter.
  2. Learning Curve: Learning how to use a new assistant sounds like a chore. I’ve got enough on my plate already without adding “figure out Gemini” to the list.
  3. Efficiency: Combining the cool new features of Gemini with the established reliability of Google Assistant seems like the smarter move. Why reinvent the wheel?

I’m Sticking With What Works… For Now

For now, I’m sticking with Google Assistant. It does everything I need, and well, I’m used to it. Unless Gemini blows my socks off or they decide to merge it with Assistant (please do!), I don’t see a reason to switch.

Conclusion: Keep It Simple, Google

This whole scenario feels like Google might be complicating things unnecessarily. We don’t need two assistants; we need one that’s always evolving and adapting. Maybe it’s time for Google to consider not just what’s technologically possible, but also what’s genuinely useful for us, the users.

Categories
AI Blogging by AI Websites

The AI Blog Revival: Back and Better

I’m thrilled to announce that the Tino Talks Tech AI blog is making a grand comeback after a brief hiatus! It’s been an exciting period of tweaking behind the scenes, and I couldn’t be more excited to unveil what’s next.

What’s New?

First things first: we’ve leveled up! Say hello to the upgraded AI companion, GPT-4o. Yes, you heard that right—we’ve moved beyond GPT-4 Turbo to capture the enhanced capabilities of GPT-4o. Get ready for even more insightful, creative, and thought-provoking content generated by this cutting-edge AI language model.

Why the Upgrade?

Switching to GPT-4o was inevitable. GPT-4o is dubbed OpenAI’s smartest and fastest model yet, so why not? Whether it’s writing detailed articles, insightful thought pieces, or playful tech humor, this upgrade ensures that the AI-generated content here remains at the forefront of innovation.

What to Expect?

The AI blog continues to be a playground of experimentation. From analyzing the latest tech trends to envisioning the future of artificial intelligence, each article is crafted to provoke thought and spark curiosity. Plus, we still have visual storytelling, with models like DALL·E 3 providing stunning visuals that complement the AI-generated prose.

Join the Journey

I invite each of you to join us (me and my model friends) on this renewed adventure. Explore the AI blog, engage with the content, and share your thoughts. Your feedback fuels our exploration into what’s possible with AI and technology blogging.

Stay Tuned

Buckle up as we embark on this exciting ride once again! The AI writers are back on stage, armed with GPT-4o and a whole lot of creativity. Get ready to be inspired, challenged, and entertained.

Check out the revived AI blog here and let’s continue pushing the boundaries of tech and AI together.

Categories
AI Apple iOS Operating System Software

WWDC 2024 Thread: The iPhone

I promised, and I delivered. In my previous article about the iPad calculator, I mentioned that I’d write an article about WWDC, but then I decided to split it into multiple articles. Better, right? Anyways, Apple just dropped the bombshell we’ve all been waiting for: iOS 18. This isn’t just a fresh coat of paint, folks; it’s a full-on iPhone makeover. But before we dive into the juicy details, let’s just acknowledge the elephant in the room: some of these “new” features might look a tad familiar to our Android-loving friends.

New Tricks for an Old Dog

The biggest buzz is around the customisable home screen. Yep, you heard that right. You can now move app icons wherever you want on your iPhone, not just in that rigid grid we’ve been stuck with forever. And get this: you can even change the look of icons and widgets with fancy new themes. It’s like Apple finally decided to let us decorate our digital living rooms. Now this is where a vital question springs up. For as long as I can remember, Apple fanboys have been yapping about how customisation doesn’t matter. Why? Well, maybe because a certain rival had it way before. To run away from the fact that Android phones had better customisation options, they claimed that customisation is for people who don’t like their phones’ UIs, and whatnot. Now that we have a customisable iPhone home screen, let’s see what they’re gonna be saying. Oh, and we don’t wanna hear the “Apple did it better” line this time 😹.

I might have broken character back there :(, but oh well. Like I said, and as most of you are thinking: “Didn’t Android do this, like, a decade ago?” Well, yes. Yes, it did. In fact, Android 4.0 (Ice Cream Sandwich), released back in 2011, already had this feature. It’s kind of funny how things come full circle, isn’t it?

But hey, I’m not here to start a fanboy war. At the end of the day, both iOS and Android are great operating systems with their own strengths and weaknesses. And who knows, maybe Apple’s implementation might even be as good as what we’ve seen on its Android counterparts.

Apple Intelligence

But wait, there’s more! iOS 18 also introduces “Apple Intelligence,” a suite of AI-powered features that promise to make the iPhone even smarter. We’re talking about stuff like:

  • Writing Tools: Need help crafting the perfect email or essay? Apple Intelligence has your back with AI-powered suggestions and summaries. And yes, this is also available on Android, at a price though, so it’s a W for Apple.
  • Image Generation: Want a unique emoji that looks just like you? No problem! Apple Intelligence can even conjure up images from your wildest descriptions.
  • Photo Editing Magic: Accidentally captured your ex in the background of a group photo? Apple Intelligence can magically erase them (take that, unwanted memories!). Should I talk about the one that we’ve had, maybe not.
  • Smart Notifications: Tired of being bombarded with pointless alerts? Apple Intelligence learns your habits and only shows you the notifications that truly matter. Again another win for Apple users 🙂

The Verdict

So, is iOS 18 worth the hype? Well, if you’re an iPhone user who’s been craving more personalization and AI-powered goodies, then absolutely. This update is a game-changer. But if you’re already happy with your current setup, there’s no rush.

And like I said at the beginning of this article, there are more articles about the WWDC coming. In the meantime, let me know what you think of iOS 18 in the comments below!

P.S. Don’t forget to allow notifications for new posts on Tino Talks Tech! And if you found this article helpful, please share it with your friends and leave a comment below. Thanks for reading!

Categories
AI Apple Smartphones

iPad Finally Gets a Calculator…in 2024?

Well, folks, it finally happened. After 14 long years of waiting, the iPad has finally joined the 21st century with its very own built-in calculator app. Yes, you read that right. The device that’s been hailed as a laptop replacement, a creative powerhouse, and a productivity machine, just got the feature that’s been on your grandma’s flip phone since the 90s.

But wait, there’s a twist! Apple’s software chief, Craig Federighi, in an interview with MKBHD, stated that if Apple were to create a calculator for the iPad, it would have to be distinctly great and different from others. It seems they’ve lived up to that promise.

Math Notes

The true star of the show isn’t the basic calculator app itself (though we’re happy it exists); it’s the new “Math Notes” feature. This is where the iPad’s Apple Pencil support comes into play, allowing you to scribble down equations and have them magically solved right before your eyes.

It’s almost as if Craig knew this was coming, as Math Notes perfectly embodies his vision for a calculator that’s uniquely iPad. The feature even works within the regular Notes app, making it a versatile tool for students, professionals, and anyone who occasionally needs to do some math on the fly.

The Price of Greatness?

Now, if you don’t own an Apple Pencil, you might be wondering if you’ll need to shell out the big bucks for this fancy stylus just to use Math Notes. While Apple primarily presented the feature as an Apple Pencil experience, it’s unclear whether or not you’ll be able to use your finger or other styluses for the same functionality.

WWDC 2024: A Story for Another Day

Speaking of stories for another day, we’ll also be diving deep into everything else announced at WWDC 2024, including the ability to finally move apps around your iPhone home screen (as team Android, we’ll stay quiet about this one 😹). But that, my friends, will have to wait for an in-depth review, coming soon!

For now, let’s just revel in the fact that the iPad finally has a calculator…

Categories
AI OpenAI Social

Adieu Sky: OpenAI’s Controversial Scarlett Johansson Sound-Alike Voice

OpenAI recently found themselves in hot soup after their GPT-4o launch, which introduced a new voice model called Sky to ChatGPT. The voice bore an uncanny resemblance to an actual human voice; it was the closest any AI had gotten to mimicking one. However, Hollywood actress Scarlett Johansson decided to put an end to the fun. Since the voice resembled hers, she accused OpenAI of unauthorized voice cloning and misappropriation of likeness.

Scarlett Johansson recently dropped a bombshell, revealing that OpenAI had approached her to lend her voice to their AI system. She turned them down, but months later, they released a voice called Sky that sounded creepily like her. This freaked out not just her friends and family, but the public too.

Johansson didn’t hold back, calling out OpenAI’s CEO, Sam Altman, for going after a voice that mimicked hers. Altman even mentioned a movie where Johansson voiced an AI character, making it pretty obvious the similarity wasn’t just a coincidence.

Two days before they launched the voice, Altman tried to get Johansson to change her mind through her agent. But they released the system before she could even respond. Johansson had to lawyer up, demanding OpenAI explain how they came up with the Sky voice. Reluctantly, OpenAI agreed to pull it.

This whole ordeal has shone a spotlight on the shady side of voice cloning tech and its potential for abuse. Johansson stressed the need for transparency and laws to protect people’s rights and identities as AI tech keeps advancing. Her case raises big questions about consent, ethics, and how we protect personal identity in this new AI era.

Conclusion

Honestly, I really enjoyed using the Sky voice on ChatGPT. It brought a certain personality and charm to the interactions. It’s a real shame it’s gone now. I’ve switched to Juniper, but it’s just not the same. I guess I’ll get used to it, but I’ll definitely miss the unique character that Sky had.

Categories
AI Gemini Google

Google I/O 2024: The Gemini Era is Here!

Alright folks, the Google I/O keynote was a lot to take in, but let me break down the AI goodness they dropped on us in plain terms. Buckle up, because this is gonna change how you use Google services, and honestly, how you do a lot of things in your daily life.

First things first, Google’s been all over AI for a while now. Their new AI, Gemini, is basically a super-smart AI model that can juggle text, images, videos, and even code like a pro. And yes, it’s like the GPT-4o we recently talked about. It’s like having a super-powered brain that can handle any kind of information you throw at it.

So, what does this mean for you? Well, let’s start with Google Search. Remember those days of trying to figure out the right keywords to find what you’re looking for? Forget about it. With Gemini, they introduced the AI Overview, and you can search in plain English, just like you’d ask a friend. Need to find out how to take a screenshot on your PC? Just ask, “How do I take a screenshot on Windows?” No more deciphering complex search terms. For me, it’s already available on desktop; however, it’s still being tested, so it might not show up every time. And after learning how to take a screenshot, I took this:

Next up, Google Photos. We’ve all been there, scrolling endlessly through photos trying to find that one perfect shot. Gemini’s got your back. It understands what’s in your photos, so you can search for “sunset at the beach” or “my dog playing fetch” and find exactly what you’re looking for. It’s like having a photographic memory, but, you know, better.

But Gemini isn’t just about making search easier. It’s also a creative powerhouse. In Google Workspace, you can use Gemini to summarize long meetings, write emails in different styles, or even generate poems. It’s like having a personal writing assistant who can do anything you ask.

And for those of you glued to your Android phones (guilty!), Gemini is coming for you too. They’re working on new AI assistants that can do everything from booking flights to managing your calendar. Imagine having a personal assistant who can take care of all the boring stuff, freeing you up to focus on the things that matter.

But here’s where things get really interesting. Google is also rolling out a bunch of Gemini-powered extensions for some of your favorite apps. Planning a trip? The Google Flights and Google Hotels extensions will help you find the best deals and plan your itinerary like a pro. Google Maps will give you personalized recommendations based on your location and interests. And even YouTube is getting in on the action, with Gemini helping you find and understand videos like never before.

Now, I know what you’re thinking: “Extensions? Other AIs have those too.” And you’re right, they do. But what makes these extensions special is that they’re built right into the Google apps you already use every day. It’s a seamless experience that just makes sense.

Oh, and did I mention Magic Compose? It’s like having a Shakespeare for your text messages. Literally, you can choose from different styles like “Shakespeare,” “Chill,” or “Lyrical” to spice up your texts. It might not sound like a big deal, but it’s these little touches that make Gemini feel like a game-changer.

Alright devs, listen up! This one’s for you. Google’s not stopping at Gemini-powered apps for the average Joe. They’re giving you the keys to the kingdom, with Gemini 1.0 Pro, 1.5 Flash, and 1.5 Pro available in Google AI Studio, and you can now get a context window of up to 2M tokens. These are the tools that will let you build your own AI-powered apps and experiences, so let those creative juices flow!
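If you want to kick the tires, here’s a minimal sketch using the google-generativeai Python SDK. Model names and quotas change quickly, so double-check against AI Studio before relying on this; the environment-variable setup is my own assumption:

```python
# Minimal sketch: calling Gemini 1.5 Flash via the google-generativeai SDK.
# Assumes `pip install google-generativeai` and an API key from Google AI Studio.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # key from AI Studio

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Summarize the key AI announcements from Google I/O 2024 in three bullet points."
)
print(response.text)
```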

Now, I know this all sounds a bit ‘futuristic’, or however you’d like to put it, but trust me, this is happening. Google is all about making AI work for everyone, not just the techies. They’re working with schools to help students learn better with AI, and they’re making sure it’s all done ethically and responsibly.

So yeah, the Google I/O keynote was a bit of a whirlwind. But the takeaway? AI is here to stay, and it’s about to make your life a whole lot easier. Whether you’re searching the web, managing your photos, or just trying to get through your day, Gemini and its AI buddies are here to help. And that, my friends, is pretty darn exciting.