
AI Is Deciding Your Coffee Now (And Sliding Into Everything Else)

Today in AI: ChatGPT enters Starbucks, Gemini lands on desktop, and Adobe brings AI deeper into creative workflows.

👋 Hello hello,

Feels like AI is quietly moving out of browser tabs and into the tools we actually use all day. Your coffee app, your desktop, your design software… it’s all getting an upgrade.

And the pattern is getting clearer. The winners won’t just be the smartest models; they’ll be the ones that show up exactly where you’re already working. Let’s get into it.

💬 Quick note: We’re building something to help teams actually get good at AI (not just use it). → Get early access here

🔥🔥🔥 Three Highly Curated AI Stories

Your next latte order might start with a conversation, not a menu. Starbucks is quietly testing a ChatGPT integration that lets you describe what you're in the mood for — "something warm but not too sweet, I didn't sleep well" — and get a drink recommendation back.

A sample prompt response in ChatGPT using Starbucks’ beta app. (Source: Starbucks)

This is interesting because it shifts AI from being a “tool you open” to something embedded inside everyday apps. You’re not going to ChatGPT anymore; it’s coming to you inside products you already use.

To be fair, most people ordering coffee already know what they want. This won't replace the regulars tapping their usual in three seconds. But for the long tail of "I don't know, surprise me" customers — that's a huge slice of revenue most brands currently lose to decision fatigue.

Watch which brand does this next. Once one major chain proves conversational discovery converts, the rest of the category has about eighteen months before it becomes table stakes.

The browser just lost its monopoly on AI. Google launched a standalone Gemini app for Mac this week, and with ChatGPT and Claude already there, we can officially call it: the AI assistant has moved out of the tab and onto the OS.

Here's why that small UX shift matters more than it sounds: every time you had to copy something out of your work, paste it into a browser, and paste the answer back, you were paying a tax. That tax made AI feel like a detour from your actual job. A desktop assistant with screen context collapses the detour into a keystroke. The work stays where the work lives.

The counterpoint is real — most people don't change their habits just because a keyboard shortcut exists. Plenty of users will keep pasting into claude.ai or chat.openai.com out of muscle memory. But the ceiling on what's possible just got higher, and power users will notice first.

Are you using AI more in desktop apps or still in the browser?


Adobe is introducing a Firefly assistant powered by Anthropic’s Claude, bringing AI directly into its creative ecosystem.

Picture this: you open Premiere Pro for the first time, stare at the twelve-panel interface that has broken the spirit of a generation of creators, and instead of Googling "how to color grade a clip," you just ask. The assistant walks you through it inside the app — not a tutorial video, not a forum thread, an actual guide using your footage. The feature panel stops being a maze.

This isn’t just about generating images. It’s about helping users navigate complex workflows inside Adobe tools. Think assistance layered into the process, not separate from it. Now, the honest part: taste is harder to teach than software, and it's the thing that actually separates good work from generated-looking slop. The tools getting easier doesn't make more people good — it makes more people capable. Those are different.

🔥🔥 Two Things to Do With AI

If you’re on Google Workspace, there’s a feature hiding in plain sight inside Gmail called “Studio.” It lets you create simple automations or use ready-made templates for common workflows.

You can set up things like email summaries, meeting prep, or even keyword alerts. One useful example is a notifier that pings you whenever a specific word shows up in your inbox, like “feedback” or “urgent.”

It’s simple to set up and surprisingly powerful for something built right into Gmail. If your inbox feels chaotic, this is worth exploring.

Microsoft is expanding Copilot inside Word with new capabilities designed to improve how you work with documents.

The focus is on making writing and editing more dynamic. Instead of static documents, you get assistance throughout the process, helping you refine, restructure, and move faster. If you spend a lot of time in Word, this is another step toward AI becoming a built-in collaborator rather than a separate tool.

Is this you? Your team is using AI. But they’re not getting better results.
We’re fixing that. Join the waitlist to find out how.

🔥 1 Pro AI Tip To Try Today

Most people treat every LLM session like a fresh start: new chat, new context, same instructions repeated again and again.

But there’s a better way. You can give an LLM like Claude a memory system using simple markdown files, so it actually understands your preferences, your style, and your workflows.

1. Create a file called “CLAUDE.md” and write clear instructions about how you want responses structured.
2. Add supporting files like “about-me.md” or “voice-and-style.md” with more context.
3. Upload or reference these files inside your Claude project.
4. Use them consistently so Claude starts adapting to your patterns.
5. Update the files over time as your needs evolve.
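The steps above can start from a file as simple as this. The sections and wording are purely illustrative — adapt them to your own preferences:

```markdown
# CLAUDE.md — how I want responses

## Formatting
- Lead with the answer, then the reasoning.
- Use short paragraphs; bullets only for true lists.

## Voice
- Plain, direct English. No filler phrases.

## Workflow
- When reviewing text, suggest edits inline rather than rewriting wholesale.
- See about-me.md and voice-and-style.md for background and tone.
```

Keep each file small and focused on one topic — it's easier to update a short, specific file as your needs change than one sprawling instruction dump.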

You stop repeating yourself and get more consistent, higher-quality outputs.

We break it down in a detailed guide with steps here.

💬 Quick poll: What’s one AI feature you tried recently that genuinely surprised you?

Did you learn something new?


Until next time,
Team @PracticalyAI
