Categories
AI Amazon Apple Comparison Gemini iOS OpenAI

Are Digital Assistants Dying?

“Alexa, change audio output to my Bluetooth speaker.” You probably know what the next response would be: “Sorry, I didn’t get that…” Well, that era might just be over, as major players like Amazon, Google, and Apple are now racing to integrate AI models into their digital assistants. So, what does this mean for your Echo Dot and your wallet?

The AI Push

So yeah, the tech giants are embedding powerful AI models into their platforms. Here’s how they’re doing it:

  • Amazon is integrating Claude from Anthropic into Alexa, aiming to make Alexa smarter and more conversational.
  • Google is adding Gemini, their proprietary AI model, to Google Assistant, promising a more nuanced, context-aware experience.
  • Apple is bringing ChatGPT-based technology to Siri, which could make Siri more responsive and versatile. This is my favourite one, not because it’s Apple, but because it’s GPT, duh.

Now, these integrations promise to make digital assistants more than just voice-command tools. They’re being positioned as virtual AIs that can understand context, recall previous interactions, and provide more in-depth responses, and that’s something we actually want, well, something I want. However, there’s a big drawback: these AI models demand significantly higher processing power than the legacy assistants we’ve been using for years.

Why Old Devices Can’t Keep Up

Most of us are familiar with Amazon’s Echo Dot, Google’s Nest Mini, and Apple’s HomePod Mini: compact, relatively affordable devices designed to do simple tasks. These legacy devices were never intended to handle the heavy lifting of AI-driven language models. The hardware inside a $50 Echo Dot, for example, simply doesn’t have the processing capability to run a model like Claude, Gemini, or ChatGPT natively.

To bring these AI models to existing devices (which, in my opinion, is next to impossible), companies face two major options:

  1. Release New Hardware with Enhanced Processing Power: Well, this isn’t actually bringing the LLMs to existing devices, it’s building newer versions of those devices, but you get where I’m going with this, right? New versions with more powerful processors would allow local processing of AI models. However, this would drive prices up significantly. While the Echo Pop has always been a budget-friendly way to add Alexa to your home, a new Echo Pop with AI built in would be a different beast altogether, likely costing much more due to the added processing power it would need.
  2. Offer Cloud-Based AI Services with a Subscription: Alternatively, these companies could keep the hardware simple and run the AI models in the cloud, letting even low-power devices tap into advanced AI capabilities without needing high processing power on the device itself. That would mean your Echo Pop just gets a software update. Great bargain, right? But at what cost? This route raises significant concerns:
    • Privacy and Security Risks: Cloud-based solutions require data to be transmitted and processed externally, raising potential privacy issues. Many users are uneasy about sending potentially sensitive conversations over the internet to be processed on third-party servers. People are already concerned about the models running on their “AI” phones, which is why manufacturers have limited most of these fancy AI features to their highest-performing models that can run them locally. With these digital assistant devices, it’s a whole different story.
    • Subscription Costs: To cover the cost of running powerful AI models in the cloud, companies are likely to introduce subscription plans. This would add yet another monthly fee for users who may already be feeling subscription fatigue, especially as so many services now rely on recurring fees.
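To make the trade-off between the two options concrete, here’s a minimal Python sketch. Every name in it (the RAM threshold, the helper functions, the response strings) is hypothetical and purely illustrative, not how Amazon, Google, or Apple actually route requests:

```python
# Hypothetical routing logic for a voice assistant: run the model on-device
# when the hardware can handle it (option 1), otherwise fall back to a paid
# cloud service (option 2), otherwise behave like the legacy assistant.

MIN_LOCAL_RAM_MB = 4096  # assumed threshold; real requirements vary by model


def run_on_device(query: str) -> str:
    # Stand-in for a locally running language model.
    return f"[local model] {query}"


def call_cloud_model(query: str) -> str:
    # Stand-in for a network call to a cloud-hosted model.
    return f"[cloud model] {query}"


def answer(query: str, device_ram_mb: int, has_subscription: bool) -> str:
    if device_ram_mb >= MIN_LOCAL_RAM_MB:
        return run_on_device(query)       # option 1: new, beefier hardware
    if has_subscription:
        return call_cloud_model(query)    # option 2: cloud + subscription
    return "Sorry, I didn't get that..."  # legacy behaviour
```

The point of the sketch is the fork: either the compute lives in the device (and the price tag), or it lives in a data centre (and a monthly fee).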

Why Legacy Assistants Are Falling Behind

One of the more subtle effects of this AI hardware dilemma is the growing distinction between legacy digital assistants and the next-gen, super-smart LLMs. People accustomed to Alexa’s simple skills or Google Assistant’s straightforward commands might quickly feel underwhelmed by the limitations of these older models as the new ones become capable of nuanced, context-aware interactions that feel more personal. You know, I’d never want to go back to a legacy assistant once I can have a full-on convo with my assistant about how my DMs are dry across all my socials; that’s just a whole different experience.

Despite all the promise, the AI models aren’t quite there yet. From my own experience, Gemini, Google’s AI model, has yet to fully match the practical, everyday usability of Google Assistant. It’s still in its early stages, so while it may be able to chat about a broad range of topics, it sometimes struggles with tasks that Assistant handles smoothly; it can’t even skip to the next song when my phone’s screen is off. In other words, the switch to a fully AI-driven assistant isn’t seamless, which might encourage users to hang onto their legacy assistants for now, even if they’re not as fancy. I’m the *users*, by the way.

Why the Price and Privacy Trade-Off Could Slow Adoption

With these new fancy AI-powered models, there’s likely to be a split in the market:

  • Budget-conscious users may stick with legacy devices or forego digital assistants altogether if prices rise significantly.
  • Privacy-minded users might avoid cloud-based AI options due to security concerns, even if that means missing out on advanced capabilities.
  • Tech enthusiasts willing to pay for the latest and greatest will have options to buy more powerful (and expensive) devices, or they’ll sign up for subscriptions to access cloud-based services. We’ve seen people buying the Vision Pro, so it’s nothing new when it comes to enthusiasts.

This split could lead to a divided ecosystem, where advanced, AI-capable assistants coexist with simpler, budget-friendly models, and there’s nothing wrong with that; that’s exactly what the smartphone space has been like since, well, the beginning. But unlike smartphones, it could be a tricky balancing act for the tech companies behind these assistants. Pricing the new, advanced models too high could result in slower adoption rates, while heavy reliance on subscription models could alienate users who are already juggling multiple monthly fees.

Conclusion

So as the top tech guys push forward with integrating advanced AI into their digital assistants, we as users face a complicated choice: stick with legacy models that are cheaper but limited in functionality, or pay more, either upfront for new hardware or through monthly subscriptions, to access the latest AI-powered versions. By the way, this is just my speculation about how the market might look in the coming months or years, not how it’s supposed to be.


Want more tech insights? Don’t miss out—subscribe to the Tino Talks Tech newsletter!


Google’s GenAI is Too Impressive

So, I was scrolling through Instagram the other day and came across some mind-blowing images showing how powerful Google’s Generative AI is on the new Pixel 9. These images were posted by The Verge, and I have to say, they stopped me in my tracks. They made it clear that Google’s Magic Editor is next-level stuff, and I couldn’t wait to dig deeper.

Take a look at the side-by-side image in the thumbnail. On the left, you’ve got an ordinary, quiet street, nothing exciting. But on the right? It’s been completely transformed into the aftermath of a dramatic car crash, complete with wrecked vehicles and scattered debris. This transformation was done with just a few taps using Google’s Magic Editor on the Pixel 9. It’s crazy how much of a difference this AI-powered tool can make, and it has the potential to change the way we think about editing photos.

What is Generative AI?

First things first, let’s quickly break down what Generative AI is. In simple terms, Generative AI is a type of artificial intelligence that can create new content based on patterns it’s learned from existing data. This includes everything from text to art, and of course, photos. In the case of Google’s Pixel 9, it means you can take a regular image and transform it in all sorts of ways, like adding objects, changing the lighting, or, as in the thumbnail, turning an empty street into a scene from a movie. The AI fills in the blanks in a way that looks natural and believable.

Now, if that didn’t quite land, let’s put it this way. Imagine you’re drawing, but instead of doing it yourself, you tell an incredibly talented robot what you want, and it just gets it right every time. You ask for a car crash in a peaceful scene? Boom, the robot adds a perfect one, and it looks like it was always there. *chuckles* I’m great at explaining 😉
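If you’d rather see the “learn patterns, then generate” idea in code, here’s a toy Python sketch: a tiny word-pair model that learns from a few example sentences and then generates new text from those patterns. This is an analogy only; Magic Editor works on pixels with a vastly larger model, but the core loop is the same.

```python
import random

# Toy "generative model": learn which word tends to follow which, then
# generate new text by walking those learned patterns.


def learn(sentences):
    """Build a map from each word to the words observed after it."""
    pairs = {}
    for s in sentences:
        words = s.split()
        for a, b in zip(words, words[1:]):
            pairs.setdefault(a, []).append(b)
    return pairs


def generate(pairs, start, length=5, seed=0):
    """Generate up to `length` words, picking each successor at random."""
    rng = random.Random(seed)  # seeded so the output is reproducible
    out = [start]
    for _ in range(length - 1):
        options = pairs.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)


model = learn(["the street was quiet", "the street was empty", "the crash was loud"])
```

Feed it enough examples and it starts producing sentences it was never shown, which is the whole trick, just at a microscopic scale.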

Why Google’s Pixel 9 GenAI is a Game-Changer

I’ve seen and tried my fair share of mobile built-in editing tools over the years, but what Google is doing with the Pixel 9’s Magic Editor takes things to a whole new level. From the examples I found on Instagram, it’s clear that this isn’t just about small tweaks to brightness or filters. Google’s Generative AI can completely reshape your photos, adding new objects or even reimagining entire environments, all in a few quick steps.

Take the thumbnail again: in the original shot, it’s just a calm, quiet street. But the edited version tells a whole different story, turning that peaceful scene into something intense, the aftermath of a crash. And the most mind-blowing part? It’s not just a quick Photoshop job, the AI understands context. It knew how to position the vehicles and add debris in a way that matches the scene perfectly. You could easily mistake the edited version for a real photo of a car accident.

Why Google’s Magic Editor is So Impressive

Here’s a breakdown of what makes Google’s Magic Editor such a banger:

  1. AI-Powered Scene Editing: The Pixel 9’s Generative AI goes beyond just touching up photos. You can change the story of an image, like in the crash scene example. The AI doesn’t just plop objects in your photo, it seamlessly integrates them into the environment, so it feels like they were always there.
  2. Add or Remove Objects with Ease: One of the coolest features is the ability to add or remove objects at will. Let’s say you want to erase a random person from your vacation shot or add something like a rainbow to your landscape pic. Done. And when you remove something, the AI fills in the background so naturally that you’d never know anything was missing.
  3. Quick Edits, Big Changes: Traditional editing software can take forever to get right, especially if you’re not a pro. But with the Magic Editor, it seems like you can make massive changes in a matter of seconds. Just look at the street scene example, what would take hours in Photoshop was done in a few taps with AI.
  4. Super Easy to Use: The best part? You don’t need to be a tech guru to make these changes. From what I’ve seen in the demos and examples, Google’s Magic Editor is intuitive and user-friendly. It’s perfect for people like me who love to edit photos but don’t have the time or skills to mess with complicated software.
  5. Natural, Realistic Results: One thing that stands out is how realistic the edits look. Sometimes, with other tools, you can tell when a photo’s been altered: the lighting’s off, or the objects don’t quite blend in. But Google’s Generative AI gets it right. The crash scene looks believable, as if the wrecked car and debris had always been there.
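A crude way to picture point 2 (remove an object, fill in the background) is classic inpainting: blank out some pixels and refill each one from its known neighbours. This toy Python sketch uses simple averaging, whereas Magic Editor uses a generative model, but the goal of a plausible fill-in is the same:

```python
# Toy inpainting: image is a 2D list of brightness values, mask is a set of
# (row, col) cells to "remove". Each masked cell is refilled with the average
# of its unmasked 4-neighbours.


def inpaint(image, mask):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy so the original stays untouched
    for r, c in mask:
        neighbours = [
            image[r + dr][c + dc]
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= r + dr < h and 0 <= c + dc < w and (r + dr, c + dc) not in mask
        ]
        out[r][c] = sum(neighbours) / len(neighbours) if neighbours else 0
    return out


# "Remove" the bright 99 in the centre; it gets refilled from the 10s around it.
smooth = inpaint([[10, 10, 10], [10, 99, 10], [10, 10, 10]], {(1, 1)})
```

Averaging only works for flat backgrounds; a generative model can hallucinate texture, shadows, and whole objects instead, which is why the Pixel 9 results look so convincing.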

The Future of GenAI

While photo editing is the first thing that comes to mind when you think about Generative AI, its potential goes way beyond that. Google’s Generative AI could soon be applied to video editing, content creation, and even productivity tools. Imagine being able to edit videos, create presentations, or generate marketing visuals all from your phone, without needing to rely on heavy software. The possibilities are huge.

If you want to stay updated on the latest in tech, be sure to subscribe to the Tino Talks Tech newsletter and enable notifications. Here are more examples by The Verge:


Google’s Gemini Approach is Wild

Google’s Gemini AI got some impressive updates at the Made by Google event yesterday, and they’re hard to ignore. One of the headline features is Gemini Live, a version designed to provide natural, hands-free conversations with your device (it’s nothing new, though). It’s meant to make your digital interactions feel more like talking to a friend, with multiple voice options, just like the Assistant. The new AI is also more deeply integrated into Android, offering smarter, context-aware assistance and faster response times across various Google apps like Tasks, Keep, and YouTube Music.

But before you get too excited like I did when I was watching the keynote, here’s a bit of a reality check: to enjoy Gemini Live, you’ll need to subscribe to Gemini Advanced, the paid version of Gemini. For those of us who’ve been enjoying the Google Assistant for free, this feels a bit like being told that the air we breathe is now subscription-based. Oh, and during the live demo, Gemini failed twice before they had to switch devices, and btw they were using Galaxy S24 Ultras, so this doesn’t look good for either Google or Samsung.

So, Google, here’s the deal: if you keep putting basic features behind a paywall, I might just switch to the other side.

P.S. Subscribe to the Tino Talks Tech newsletter for more tech talk especially if you want to see whether I actually leap to the other side, I’m not tho! 😅


Google Makes You Choose Between Assistant and Gemini

So once again we have Google introducing something new, something shiny, and it’s called Gemini. But there’s a catch: instead of rolling this out as an upgrade to our beloved Google Assistant, they’ve decided to make it a separate thing. Now we’ve got a choice to make, and frankly, it’s a hard one.

Google’s Split Decision

I’ve been using Google Assistant since birth, well, maybe not, but you get the point. It wakes me up, tells me the weather, and even handles the odd Google search or two. It’s even built into my headphones. It’s like the reliable friend who’s always there when you need them. Now we have Gemini, the new kid on the block, promising smarter conversations and a bit more pizzazz when handling complex tasks.

Here’s my situation:

  • Google Assistant: It’s like an old friend. Reliable, predictable, and deeply woven into all the Google stuff I use every day. It’s comfy and familiar.
  • Gemini: This one’s meant to be cutting-edge with better chat abilities. Cool, yes, but also a tad intimidating? Oh and there are lots of compromises to make here, don’t get me started with those.

Why Not Just One Super Assistant?

Honestly, I can’t help but wonder why Google didn’t just merge Gemini into Google Assistant. Wouldn’t it be smoother to have one super assistant that gets better over time rather than choosing between old reliable and the new experiment? Here’s why this split feels a bit off:

  1. No More Juggling: Seriously, who wants to switch between two assistants? I’d love one assistant that just gets all the new updates and keeps getting smarter.
  2. Learning Curve: Learning how to use a new assistant sounds like a chore. I’ve got enough on my plate already without adding “figure out Gemini” to the list.
  3. Efficiency: Combining the cool new features of Gemini with the established reliability of Google Assistant seems like the smarter move. Why reinvent the wheel?

I’m Sticking With What Works… For Now

For now, I’m sticking with Google Assistant. It does everything I need, and well, I’m used to it. Unless Gemini blows my socks off or they decide to merge it with Assistant (please do!), I don’t see a reason to switch.

Conclusion: Keep It Simple, Google

This whole scenario feels like Google might be complicating things unnecessarily. We don’t need two assistants; we need one that’s always evolving and adapting. Maybe it’s time for Google to consider not just what’s technologically possible, but also what’s genuinely useful for us, the users.


Google I/O 2024: The Gemini Era is Here!

Alright folks, the Google I/O keynote was a lot to take in, but let me break down the AI goodness they dropped on us in plain terms. Buckle up, because this is gonna change how you use Google services, and honestly, how you do a lot of things in your daily life.

First things first, Google’s been all over AI for a while now. Their new AI, Gemini, is basically a super-smart AI model that can juggle text, images, videos, and even code like a pro. And yes, it’s a lot like the GPT-4o we recently talked about. It’s like having a super-powered brain that can handle any kind of information you throw at it.

So, what does this mean for you? Well, let’s start with Google Search. Remember those days of trying to figure out the right keywords to find what you’re looking for? Forget about it. With Gemini, they introduced AI Overviews, and you can search in plain English, just like you’d ask a friend. Need to find out how to take a screenshot on your PC? Just ask, “How do I take a screenshot on Windows?” No more deciphering complex search terms. For me, it’s already available on desktop, though it’s still being tested, so it might not show up every time. And after learning how to take a screenshot, I took this:

Next up, Google Photos. We’ve all been there, scrolling endlessly through photos trying to find that one perfect shot. Gemini’s got your back. It understands what’s in your photos, so you can search for “sunset at the beach” or “my dog playing fetch” and find exactly what you’re looking for. It’s like having a photographic memory, but, you know, better.

But Gemini isn’t just about making search easier. It’s also a creative powerhouse. In Google Workspace, you can use Gemini to summarize long meetings, write emails in different styles, or even generate poems. It’s like having a personal writing assistant who can do anything you ask.

And for those of you glued to your Android phones (guilty!), Gemini is coming for you too. They’re working on new AI assistants that can do everything from booking flights to managing your calendar. Imagine having a personal assistant who can take care of all the boring stuff, freeing you up to focus on the things that matter.

But here’s where things get really interesting. Google is also rolling out a bunch of Gemini-powered extensions for some of your favorite apps. Planning a trip? The Google Flights and Google Hotels extensions will help you find the best deals and plan your itinerary like a pro. Google Maps will give you personalized recommendations based on your location and interests. And even YouTube is getting in on the action, with Gemini helping you find and understand videos like never before.

Now, I know what you’re thinking: “Extensions? Other AIs have those too.” And you’re right, they do. But what makes these extensions special is that they’re built right into the Google apps you already use every day. It’s a seamless experience that just makes sense.

Oh, and did I mention Magic Compose? It’s like having Shakespeare ghostwrite your text messages. Literally, you can choose from different styles like “Shakespeare,” “Chill,” or “Lyrical” to spice up your texts. It might not sound like a big deal, but it’s these little touches that make Gemini feel like a game changer.

Alright devs, listen up! This one’s for you. Google’s not just stopping at Gemini-powered apps for the average Joe. They’re giving you the keys to the kingdom with Gemini 1.0 Pro, 1.5 Flash, and 1.5 Pro available in Google AI Studio, and you can now get a context window of up to 2M tokens. These are the tools that will let you build your own AI-powered apps and experiences, so let those creative juices flow!
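For the curious, a request to those models over the Gemini REST API boils down to a small JSON body. This sketch just builds the payload locally and doesn’t make the network call; the shape follows Google’s public docs at the time of writing, but double-check the endpoint and field names in AI Studio before relying on them:

```python
import json

# Sketch of a Gemini generateContent request body. You'd POST this JSON,
# along with your API key, to an endpoint of roughly this form:
#   https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent


def build_request(prompt: str, temperature: float = 0.7) -> str:
    body = {
        # The conversation: a list of turns, each made of "parts" (text here,
        # but the API also accepts images and other media).
        "contents": [{"parts": [{"text": prompt}]}],
        # Sampling settings; lower temperature means more deterministic output.
        "generationConfig": {"temperature": temperature},
    }
    return json.dumps(body)


payload = build_request("Summarise the Google I/O 2024 keynote in one line.")
```

That’s the whole surface area for a basic text call, which is exactly why AI Studio makes it so quick to go from prompt to prototype.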

Now, I know this all sounds a bit ‘futuristic’, or whatever you’d like to call it, but trust me, this is happening. Google is all about making AI work for everyone, not just the techies. They’re working with schools to help students learn better with AI, and they’re making sure it’s all done ethically and responsibly.

So yeah, the Google I/O keynote was a bit of a whirlwind. But the takeaway? AI is here to stay, and it’s about to make your life a whole lot easier. Whether you’re searching the web, managing your photos, or just trying to get through your day, Gemini and its AI buddies are here to help. And that, my friends, is pretty darn exciting.