ChatGPT Images 2.0, Robot Runners, Meta’s Data Grab

Today in AI: ChatGPT's image model closes the realism gap, a humanoid robot in China finishes a half marathon, and Meta turns its employees' keystrokes into training data.

👋 Hello hello,

Images that are becoming indistinguishable from reality, robots quietly outperforming humans at physical tasks, and companies getting… creative about where they source training data.

💬 Quick note: We’re building something to help teams actually get good at AI (not just use it). → Get early access here

🔥🔥🔥 Three Highly Curated AI Stories

ChatGPT's image upgrade has crossed a threshold. We're not talking about "pretty good for AI" anymore. We're at the point where the differences between models come down to subtle lighting and texture choices, not obvious tells. The gap between AI-generated and photographer-shot is now small enough that most people, in most contexts, won't catch it.

Verification just became a full-time job. Every brand photo, every UGC post, every "candid" ad image — the baseline assumption of authenticity is gone. Content creation gets cheaper and faster; trust on the internet gets more expensive to maintain.

Check the comparison for yourself and see how long it takes you to spot the difference.

Which image model actually looks more real to you?


A humanoid robot in China just finished a half marathon. 50 minutes, 26 seconds. Not a lab demo — a real course, real distance, real conditions.

That's not a stunt. Running 13 miles requires sustained balance, terrain adaptation, energy management, and the ability to keep going when things get slightly unpredictable underfoot. These are exactly the things robots have historically fallen apart on. This one didn't.

The line that matters isn't "can robots move" — it's "can robots sustain." Factory automation already existed. What changes now is any industry built around humans doing physically demanding work over hours, not seconds: logistics warehouses, last-mile delivery, healthcare support, field operations.

It's still early for deployment at scale, and controlled marathon conditions aren't the same as a chaotic hospital floor. But the trajectory is no longer theoretical. File this alongside the last two years of robot dexterity milestones. The pattern is becoming hard to ignore.

Meta is recording every keystroke its employees type — and feeding it into AI training data.

The stated reason is reasonable on its face: to teach AI agents how humans actually use computers, you need real examples of humans actually using computers. So Meta turned its entire workforce into a live data collection operation. Every click, every draft, every correction — logged, labeled, learned from.

Here's the part worth sitting with: the AI industry burned through most of the public internet. Now it's moving inward — employee keystrokes, Slack archives, internal emails from acquired startups. The data frontier didn't disappear; it just relocated to places people assumed were private.

To be fair, employees consented and Meta has internal data policies. But individual consent doesn't answer the bigger question: if this training approach produces dramatically better AI agents, every competitor follows. That's not a prediction — that's how this industry works.

The open question isn't whether Meta does this. It's how long before "your work activity trains our AI" becomes a standard clause in every employment contract.

🔥🔥 Two Pro AI Tips

Out of the box, GPT-Image-2 still has that slightly “AI-looking” finish. But this workflow fixes that by using Claude’s visual analysis to generate hyper-detailed prompts based on real photos. You basically borrow the color grading, lighting, and aesthetic from a real image, convert it into a structured prompt, and feed that into ChatGPT. The result looks significantly more realistic and consistent across generations.

The creator shared an example of the results.

This will be huge for creators, marketers, and anyone building product visuals or UGC-style content.
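If you'd rather wire this up yourself than click through two chat windows, here's a minimal Python sketch of the same workflow. It assumes you have Anthropic and OpenAI API keys set, uses a vision-capable Claude model for the analysis step, and treats "gpt-image-2" as the image model id (this issue's name for it; substitute whatever id your account actually exposes). The reference photo and the mug prompt are placeholders.

```python
import base64

from anthropic import Anthropic
from openai import OpenAI

# Step 1: ask Claude to reverse-engineer the look of a real reference photo.
with open("reference.jpg", "rb") as f:  # any real photo whose look you like
    image_b64 = base64.standard_b64encode(f.read()).decode()

claude = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
analysis = claude.messages.create(
    model="claude-sonnet-4-20250514",  # any vision-capable Claude model works
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_b64}},
            {"type": "text",
             "text": "Describe this photo's color grading, lighting, lens, "
                     "composition, and texture as one detailed prompt for an "
                     "image model. Output only the prompt."},
        ],
    }],
)
style_prompt = analysis.content[0].text

# Step 2: reuse the borrowed aesthetic with a new subject.
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
result = openai_client.images.generate(
    model="gpt-image-2",  # hypothetical id from this issue; swap in yours
    prompt=f"A ceramic coffee mug on a kitchen counter. {style_prompt}",
)
with open("output.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```

The point of the structured prompt is consistency: once `style_prompt` is saved, every generation that reuses it inherits the same grading and lighting, which is what keeps a batch of product shots looking like one shoot.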

You can now take dense content (like technical blogs or research posts) and turn it into clean, visual diagrams using models like Qwen 3.6.

The key advantage here is clarity. Instead of dumping text into slides, you’re converting ideas into structured visuals that are easier to understand and share. And importantly, this avoids the overly stylized “AI look” that some other tools produce.
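One lightweight way to script this is to ask the model for a Mermaid diagram instead of a rendered image, since Mermaid displays cleanly in GitHub, Notion, and most docs tools. A minimal sketch, assuming a Qwen model behind an OpenAI-compatible endpoint ("qwen3.6" is the model name from the tip above and may differ from your provider's actual id):

```python
import os

from openai import OpenAI

# Qwen models are commonly served through OpenAI-compatible endpoints;
# DashScope's compatible mode is shown here.
client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

with open("dense_technical_post.md") as f:
    blog_post = f.read()

resp = client.chat.completions.create(
    model="qwen3.6",  # the tip's model name; substitute your provider's id
    messages=[
        {"role": "system",
         "content": "Convert the article into a Mermaid flowchart. Output "
                    "only a ```mermaid code block, at most 12 nodes, with "
                    "short plain-language labels."},
        {"role": "user", "content": blog_post},
    ],
)

# Paste the output into any Mermaid renderer (GitHub, mermaid.live, Notion).
print(resp.choices[0].message.content)
```

Capping the node count in the system prompt is what keeps the output readable; without it, models tend to transcribe the article rather than summarize its structure.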

🔥 Things You Didn’t Know You Could Do With AI

You can now use GPT-Image-2 through tools like Agent-S to generate full slide designs that look like they were made by a designer.

Here’s how to do it:

  1. Go to Agent-S and prompt it with your use case (e.g., “Create a branded Spotify-style pitch deck”).

  2. Specify the design style, colors, and layout preferences in plain English.

  3. Let the tool generate visual slides using GPT-Image-2.

  4. Review outputs and iterate with follow-up prompts (e.g., “make this more minimal” or “adjust typography”).

  5. Export the slides and refine if needed in your preferred presentation tool.
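If you'd rather script step 3 than drive it through Agent-S, the underlying generation loop is short. A minimal sketch, again treating "gpt-image-2" as the image model id (hypothetical here), with a hand-written brand line and slide specs standing in for your own:

```python
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative brand direction and slide specs, mirroring the prompt in step 1.
BRAND = ("Branded Spotify-style pitch deck slide: dark background, bold "
         "duotone gradients, generous whitespace, clean sans-serif type.")
slides = [
    "Title slide: 'Q3 Growth Review' over an abstract waveform motif",
    "Metrics slide: three large stat callouts on a subtle grid",
    "Roadmap slide: horizontal timeline with four milestones",
]

for i, spec in enumerate(slides, start=1):
    result = client.images.generate(
        model="gpt-image-2",  # hypothetical id from this issue; swap in yours
        prompt=f"{BRAND} {spec}",
        size="1536x1024",     # landscape, close to 16:9 slide proportions
    )
    with open(f"slide_{i}.png", "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))
```

Step 4's iteration is just prompt editing: append "more minimal, one accent color" to BRAND and regenerate only the slides you didn't like.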

💬 Quick poll: What’s one AI feature you tried recently that genuinely surprised you?

Did you learn something new?


Until next time,
Team @PracticalyAI
