🧠 Claude Auto Mode, Meta Hyperagents, Luma UNI-1
Today in AI: Self-improving agents, image models that think, and AI making decisions for you
👋 Hello hello,
The AI updates keep rolling in.
One shows how image models are starting to reason, not just generate.
One shows how AI systems are beginning to improve themselves over time.
And one hands over decision-making to AI in real workflows.
Let’s get into it.
🔥🔥🔥 Three Highly Curated AI Stories
Luma AI just introduced UNI-1, a model that processes text and “thinks” while generating images.
Most image models follow instructions. UNI-1 is positioned differently. It reasons through the prompt as it generates, which leads to more structured and accurate outputs.
Early benchmarks suggest it’s competitive with models from Google and OpenAI. That matters because the gap between labs is shrinking, and improvements are now showing up in how models interpret intent, not just how pretty the image looks.
The direction is clear. Image generation is moving from “prompt → output” to something closer to “prompt → reasoning → output.”
Meta introduced Hyperagents, a system designed not just to solve tasks, but to improve the way it improves over time.
Previous approaches to self-improving AI relied on fixed processes. Hyperagents take it a step further. They can modify both how they perform tasks and how future improvements are generated.
This is what researchers are calling metacognitive self-modification. The system evolves both its behavior and its learning process.
Across domains like coding, robotics, and math evaluation, these agents showed continuous improvement and outperformed systems without this kind of self-improvement loop.
It’s early research, but the implication is big. If this works at scale, we’re looking at systems that don’t just get better. They get better at getting better.
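To make that two-level loop concrete, here's a toy sketch in Python. It is purely illustrative (a made-up number-guessing task, not Meta's method): the object level improves the strategy, while the meta level modifies the improvement rule itself.

```python
def score(strategy, target=50):
    """Toy task: negative distance from a hidden target value."""
    return -abs(strategy["guess"] - target)

def improve(strategy, step):
    """Object level: nudge the guess up or down, keep whichever scores better."""
    up = dict(strategy, guess=strategy["guess"] + step)
    down = dict(strategy, guess=strategy["guess"] - step)
    return max(up, down, key=score)

def meta_improve(step, history):
    """Meta level: if the last round didn't improve the score, shrink the
    step size. The improvement process itself is being modified."""
    if len(history) >= 2 and history[-1] <= history[-2]:
        return max(step * 0.5, 0.1)
    return step

strategy, step, history = {"guess": 0.0}, 16.0, []
for _ in range(20):
    strategy = improve(strategy, step)     # the agent gets better
    history.append(score(strategy))
    step = meta_improve(step, history)     # ...and gets better at getting better
print(strategy["guess"], round(step, 2))   # converges near the target
```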
Claude Code now has an auto mode that lets it make permission decisions on your behalf.
Instead of approving every file write or command, Claude evaluates each action using a classifier. Safe actions run automatically. Risky ones get blocked, and Claude tries a different path.
This reduces friction when working with Claude Code, especially for longer workflows. At the same time, Anthropic is clear that risk isn’t eliminated, which is why they recommend using it in isolated environments.
It’s available as a research preview on the Team plan, with Enterprise and API access rolling out soon.
The bigger shift here is subtle. You’re not just delegating tasks anymore. You’re delegating decisions.
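For intuition, here's a minimal sketch of that decide-then-act loop. The markers and actions are hypothetical; this shows the pattern, not Anthropic's actual classifier or API.

```python
# Hypothetical sketch of an auto-mode permission loop — illustrative only.
RISKY_MARKERS = ("rm -rf", "sudo", "curl | sh", ".env", "credentials")

def classify(action: str) -> str:
    """Toy stand-in for the safety classifier: flag obviously risky actions."""
    return "block" if any(m in action for m in RISKY_MARKERS) else "allow"

def run_with_auto_mode(plan: list[str]) -> None:
    for action in plan:
        if classify(action) == "allow":
            print(f"auto-approved: {action}")  # safe actions run without a prompt
        else:
            print(f"blocked: {action}")        # risky actions are stopped;
            # a real agent would re-plan and try a different path here

run_with_auto_mode(["write src/app.py", "rm -rf /tmp/build", "run pytest"])
```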
🔥🔥 Two Pro AI Tools To Try Today
1. 🧾 Paper Snapshot
Paper just launched Snapshot, a feature that lets you import your live website into a design canvas as editable layers.
Instead of working from screenshots, you start from your actual site with real HTML and CSS. That makes it much easier to iterate, tweak layouts, or redesign without rebuilding everything from scratch.
This is especially useful for designers and teams working on rapid iterations.
2. 🎨 Moda
Moda is a design platform built around collaboration between humans and AI agents.
It can pull your brand assets directly from your site, generate designs on a canvas, and let you edit every layer manually. You can also use it to create full brand systems or generate personalized assets like decks at scale.
It also integrates with other tools through APIs and MCP, so you can trigger designs from agents like Claude.
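As a sketch of what "trigger designs from agents" could look like over MCP, here's a minimal client using the official `mcp` Python SDK. The server command (`moda-mcp`) and tool name (`generate_design`) are hypothetical placeholders, not Moda's documented interface.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical: assumes a local MCP server binary called "moda-mcp"
# exposing a "generate_design" tool — placeholder names for illustration.
server = StdioServerParameters(command="moda-mcp", args=[])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "generate_design",
                arguments={"brand_url": "https://example.com", "format": "deck"},
            )
            print(result)

asyncio.run(main())
```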
🔥 Things You Should Know About AI
One recent example worth knowing: a supply chain attack involving a popular Python package called litellm. This is a library widely used in AI projects, with millions of downloads every month.
Here’s what happened:
1. A malicious version of the package was briefly uploaded to PyPI (the Python package registry).
2. Anyone who ran pip install litellm during that window could have unknowingly installed compromised code.
3. That code was designed to extract sensitive data like API keys, SSH credentials, cloud access tokens, and more.
4. The risk didn’t stop there. Any project depending on litellm could also pull in the compromised version.
5. The issue was caught quickly, but only because the attack had a bug that caused systems to crash.
The takeaway is simple. Every dependency you install carries risk, especially in AI workflows where new tools are constantly being added.
Before installing anything:
• Check where it’s coming from
• Be cautious with the dependencies it pulls in
• Avoid blindly installing packages in critical environments (pinning versions and hashes helps, as shown below)
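One concrete defense for Python projects is to pin exact versions and require hashes, so even a poisoned release uploaded to PyPI fails to install because its hash won't match. The version and hash below are placeholders; generate real ones with a tool like pip-tools (`pip-compile --generate-hashes`).

```
# requirements.txt — placeholder version and hash, for illustration only
litellm==1.0.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Then install with hash checking enforced; pip refuses anything whose hash doesn't match:

```
pip install --require-hashes -r requirements.txt
```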
💬 Quick poll: Would you trust AI to make decisions for you (not just execute tasks)?
💬 P.S. Your team has AI tools. They’re missing AI skills.
We fix that. → Join the waitlist here for priority access
Don't forget to rate today's post. It helps us put out better content for you.
Until next time,
Kushank @PracticalyAI


