👋 Hello hello,

Google had a very Google week.

At The Android Show, they unleashed Gemini Intelligence — the AI that can handle your groceries, tickets, widgets, and life across phone, watch, laptop, and car without you lifting a finger.

Also, Mira Murati’s Thinking Machines just dropped something spicy: an AI that can actually listen and respond at the same time — no more awkward robotic pauses. It’s early, but it’s a direct shot at how OpenAI, Google, and Anthropic have built voice AI so far.

Let’s dive in.

🔥🔥🔥 Three Curated AI Updates

Gemini Intelligence is coming to your phone, smartwatch, laptop, and car, positioned as a layer that runs across your entire device ecosystem so it can handle the tedious stuff while you stay in the moment.

Two features that stopped our scroll:

  • Automations lets Gemini handle background tasks on your behalf — reordering your weekly groceries, securing event tickets — without you initiating every step.

  • Create My Widget lets you build fully custom home screen widgets around whatever actually matters to you: live transit updates, niche trivia, specific stock tickers.

The framing Google is going with is deliberate: Gemini as infrastructure, not just an assistant you open and close. That's a different kind of product bet.

You know that annoying pause? You speak. AI listens. Then replies. And sometimes, it doesn’t even listen. Thinking Machines Lab — founded by former OpenAI CTO Mira Murati — wants to fix that at the model level, not with patches and workarounds.

I think their best feature is the ‘audio interjection’ that lets the AI talk over you in real time — so it can shut you down mid-sentence the moment it decides you’re wrong.

Their new "interaction models" are trained to listen and respond simultaneously, the way humans actually do in conversation. Their first model, TML-Interaction-Small, clocks in at 0.40 seconds response time. Google's Gemini Live sits at 0.57 seconds. OpenAI's realtime model is at 1.18 seconds.

This is a research preview, not something you can use today. But the paper is a pointed statement about where they think the whole field is getting it wrong. If they can scale this, it changes the conversation.

AI has had a listening problem since day one. It took a woman to fix it. Draw your own conclusions.

Google also revealed Googlebook this week — their first laptop designed around Gemini Intelligence rather than adding AI features to an existing product after the fact.

The promise is deep integration with your Android phone and Gemini baked into the core of the machine. Google describes it as built for heavyweight performance, though specific hardware specs are still to come. What's clear is the intent: a computing device where Gemini is the operating principle, not a feature you toggle on.

It's arriving this fall, which puts it squarely in the back-to-school and holiday window. Timing that is not accidental.

🔥🔥 Things You Didn’t Know You Could Do With AI

If you've ever wanted to redo a room but had no idea where to start or how much things actually cost, here’s what you should do:

  1. Take a clear photo of the room/space you want to redesign.

  2. Open ChatGPT and upload the photo. Ask it to redecorate the room in a style you like, within a budget you set, sourcing items from a specific store. For example: "Redecorate this room in a cozy Scandinavian style, budget $600, with items I can get from IKEA."

  3. ChatGPT generates a redesigned version of your room along with a full shopping list that matches the vision and stays within your budget.

  4. Then open the Codex app, share the photo and the shopping list, and ask it to add all the items to your cart.

  5. Your shopping cart is filled with the exact items, priced within the budget you set. That's it.

🔥 Ask the Founder

We put our Founder, Kushank, in the hot seat with some of the most-asked questions from the community. Here’s how it went:

1. "I only have five hours to learn AI. Where do I start?"

Start with the work you're already doing. Explain what you do for a living to an AI, list the tools you use, and ask it directly: "What are the high-leverage ways I could be using AI in my work?" Then pick one LLM — like ChatGPT or Claude — connect it to your existing tools, and narrow your focus to tasks that are repeatable and don't require a lot of judgment to execute. Those are your best first candidates for automation.

One thing Kushank is clear about: AI bridges the skill gap, but only to a point. A trained videographer using AI will always produce better results than someone without that background. The tool accelerates you; it doesn't replace depth. Start where you're already strong.

2. "What have you completely handed off to AI, and what still absolutely needs you?"

Fully handed off: research. When a new model drops or a topic comes up, Kushank goes straight to AI — not search results, not blue links. He uses it to get up to speed quickly: what's the narrative, what are people saying, what do I need to know right now. Information gathering is on autopilot.

Still entirely his: the writing. He has a Claude project trained on his voice and style, and it gets close. He uses it to generate three different narrative directions before filming a video, then picks through the ideas. But the actual script, the voice, the perspective — that's still him, every time. AI is still the helpful passenger. He's still the one driving.

P.S. Got burning questions about AI? Reply to this email with your questions and we will answer them in our newsletter.

Don't forget to rate today's post

This helps us put out better content for you


Until next time,
Team @PracticalyAI
