
🧠 Gemini Gets Faster, Claude Gets Smarter, Meta Maps the Mind

Today in AI: Meta built a model for brain responses, Google sped up live AI experiences, and Claude can now clean up your PRs while you’re away.

👋 Hello hello,

It’s almost the weekend, which means two things: catching up on what actually matters and skipping the noise.

Google is making AI feel faster and less awkward, Meta is doing deeply ambitious research on how the brain responds to sound and sight, and Claude is inching closer to being the coworker who fixes things before you even check back in. Let’s get into it.


💬 Quick note: We’re building something to help teams actually get good at AI (not just use it). → Get early access here

🔥🔥🔥 Three Highly Curated AI Stories

Google DeepMind showed off a browser experience powered by Gemini 3.1 Flash-Lite that generates websites in real time as you click, search, and navigate. Instead of loading a fixed page, the experience keeps building as you go.

That matters because it makes AI feel less like a one-shot tool and more like a responsive layer sitting on top of the web. You click, it reacts. You search, it rebuilds. That is a very different user experience from waiting around for a polished final answer.

Google also rolled out a major Gemini Live upgrade powered by Gemini 3.1 Flash Live. The pitch is simple: faster responses, fewer awkward pauses, longer conversations, and answers that adjust their tone and length depending on the moment.

Put together, both updates point in the same direction: AI is getting more fluid, more conversational, and much better at keeping up.

Meta introduced TRIBE v2, a foundation model trained to predict how the human brain responds to sights and sounds.

It builds on Meta’s earlier work and uses more than 500 hours of fMRI recordings from over 700 people. The goal is to create a kind of digital twin of neural activity that can make zero-shot predictions for new subjects, languages, and tasks.

This is one of those updates that sounds niche until you sit with it for a second. A model that can generalize brain-response patterns across people and inputs could become a serious research tool for neuroscience, language, perception, and cognition.

Meta also released a demo and a research paper, which makes this feel a lot more real than a vague “research is happening” announcement.

Claude Code now has cloud-based auto-fix for web and mobile sessions. It can follow pull requests, fix CI failures, and address comments remotely so your PR stays green.

The big appeal here is obvious: you can walk away and let it keep working. This is not just “help me code.” It is “watch this process and clean things up while I do something else.”

That shift matters. The more AI moves from answering questions to handling follow-through, the more useful it becomes in actual day-to-day work.

For developers especially, this is the kind of feature that could quietly become addictive.

🔥🔥 Two Pro AI Tools To Try Today

1. 🌐 Omma

Omma helps you create interactive 3D landing pages and websites. It looks especially useful for people who want visually rich pages without building everything from scratch.

A nice bonus is that you can remix community templates, which makes it easier to start with something good instead of staring at a blank canvas and pretending that’s inspiring.

2. 💡 Claude’s /compact command

If your Claude chat gets long, your usage limit can disappear faster than you expect. The /compact command summarizes the conversation so far, so you can carry that recap into a fresh chat and keep going.

It is a simple trick, but a useful one if you use Claude heavily for brainstorming, writing, coding, or research and do not want one giant chat eating your limits alive.

Is this you? Your team is using AI, but they’re not getting better results.
We’re fixing that. Join the waitlist to find out how.

🔥 Things You Should Know About AI

Andrej Karpathy is one of the best-known AI researchers in the world. He led AI work at Tesla, was a founding member of OpenAI, and is known for breaking complex systems down into simple, usable thinking frameworks.

Most people collect prompts like screenshots they’ll never find again. A better move is building a small prompt library you can actually reuse for real work.

The seven prompts to save:

Did you learn something new?


Until next time,
Kushank @PracticalyAI
