Hello hello,
NVIDIA just dropped a free, open-source real-time voice model, which is basically an invitation to build your own conversational AI without paying a closed API tax.
Meanwhile, Google is laying out its vision for "AI Mode" in Search, a shift toward personal intelligence that makes Search feel less like a website and more like something that understands what you're trying to get done.
And if you thought AI was staying inside your apps, Apple may have other plans: reports say they're working on an AI wearable, which could push assistants off the screen and into your daily life.
Three big updates
1. NVIDIA dropped a real-time conversational AI voice model (open-source + free)
NVIDIA just released a real-time conversational AI voice model that's free and open source. It's called PersonaPlex 7B, and you can grab it directly on Hugging Face. That alone is a big deal: real-time voice is one of those "sounds simple, is actually hard" categories.
Open-source voice models mean builders don't have to rely only on closed APIs to ship voice assistants. More experimentation. Lower cost. Faster iteration. Expect a lot more weird (but fun) voice apps soon.
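If you want to pull the weights yourself, Hugging Face serves raw repository files over plain HTTPS at a predictable path, so even a stdlib-only script can build the download URLs. A minimal sketch (the repo id and file name below are placeholders for illustration, not the model's actual listing):

```python
from urllib.parse import quote

def hub_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the raw-file URL the Hugging Face Hub serves for a repo file."""
    # The Hub exposes files at /<repo>/resolve/<revision>/<file>.
    return (f"https://huggingface.co/{quote(repo_id)}"
            f"/resolve/{quote(revision)}/{quote(filename)}")

# Placeholder repo id; check the model card for the real one.
print(hub_file_url("nvidia/example-voice-model", "config.json"))
```

In practice you would more likely use the `huggingface_hub` client library, which handles caching and authentication; the URL scheme above is what it resolves to under the hood.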
2. Google previewed "AI Mode": a more personal, AI-powered Search
Google published an update on where Search is headed next: a more personal, AI-powered experience. The core idea is "AI Mode", where Search becomes less about ten blue links and more about understanding what you're trying to do, then helping you do it.
This is Google saying the quiet part out loud. Search is turning into an assistant layer, not just a discovery layer. That changes how people find answers, make decisions, and even how creators get visibility.
3. Apple is reportedly building an AI wearable
Apple is reportedly working on an AI wearable. Details are still early, but the direction is clear: AI is moving off the screen and into something you wear. That's a whole different category of usefulness, and of privacy expectations.
A wearable changes behavior. It's always there, always accessible, and can become the default way people interact with assistants without opening an app. If Apple enters this space seriously, it's going to pressure everyone else to rethink what an "AI product" even looks like.
Two Tools Worth Trying
1. Krea Realtime Edit (beta)
Krea introduced Realtime Edit, which lets you edit images with complex instructions in real time. If you liked the "Nano Banana" style of instant visual editing, this is the same vibe: fast iterations, quick creative control, and less time stuck in prompt purgatory. Best for creators, designers, and marketers who want to refine visuals live instead of running five separate generations.
2. VEED Dynamic Subtitles
VEED just launched Dynamic Subtitles: viral-style AI captions in one click. If you post short-form content (Reels, Shorts, TikToks), you already know captions aren't optional anymore. This eliminates the "I don't want to edit captions for 40 minutes" problem. Best for creators and social teams shipping volume fast.
Things You Didn't Know You Could Do With AI
Simon Meyer built an AI film with a surprisingly simple workflow: starting with character creation, then generating interview scenes with better audio quality using Ingredients.
1. Generate your main character image using Google DeepMind Nano Banana (expect lots of iterations).
2. Lock the character + environment until it feels consistent and believable.
3. Use Google DeepMind Veo 3.1 Ingredients (via Freepik and invideo) to create the interview clips.
4. Focus on the Ingredients mode to improve audio quality (less echo, less distortion).
5. Compile the scenes into the final film and polish timing like a real edit.
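Step 5 is the one part of the workflow that is easy to script. A small sketch, assuming the clips were exported as same-codec MP4 files (the file names and output name are hypothetical), writes the list file that ffmpeg's concat demuxer expects and prints the join command instead of executing it:

```python
from pathlib import Path

# Hypothetical clip files exported from the interview-generation step.
clips = ["scene_01.mp4", "scene_02.mp4", "scene_03.mp4"]

# ffmpeg's concat demuxer reads a text file with one `file '<name>'` line per clip.
concat_list = Path("scenes.txt")
concat_list.write_text("".join(f"file '{c}'\n" for c in clips))

# -c copy stream-copies the clips without re-encoding, which only works cleanly
# when every clip shares the same codec, resolution, and frame rate.
cmd = ["ffmpeg", "-f", "concat", "-safe", "0", "-i", str(concat_list),
       "-c", "copy", "film.mp4"]
print(" ".join(cmd))
```

Run the printed command once the clip names match your real exports; if the clips use mixed codecs, drop `-c copy` and let ffmpeg re-encode.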
Did you learn something new?
Quick poll: What's one AI tool or workflow you use every week that most people would find super helpful?
Until next time,
Kushank @DigitalSamaritan
