Hello hello,
Nvidia just dropped $20 billion on the company that made their chips look slow.
Meanwhile, ChatGPT learned a new trick called "Skills," and OpenAI is teaching robots to pipette. Yes, actual wet lab robots.
Let's get into it.
🔥🔥🔥 Three big updates
Remember Groq? The inference chip company whose demos made everyone ask, "Wait, why is this so much faster than everything else?" Nvidia noticed too. It just agreed to a roughly $20 billion package to license Groq's inference technology, acquire key chip and IP assets, and bring over much of Groq's leadership team, in what is now Nvidia's largest deal to date.
Here's the thing: Groq's LPU-style architecture is purpose-built for high-throughput inference — the part of AI that matters when you are actually using models, not just training them. Nvidia's GPUs still dominate training, but inference at scale is where a huge share of long-term revenue and cost efficiency will be decided.
Translation: Nvidia saw a strategically important rival in inference and wrote a very large check to pull Groq's tech and talent closer, via a non-exclusive licensing and asset deal rather than a full corporate acquisition.
OpenAI launched "Skills" ā reusable instructions you can attach to ChatGPT so it stops forgetting your preferences.
Think: "Always format my code in Python 3.11 style" or "When I ask for emails, make them short and slightly funny." You save it once, apply it whenever.
It's basically custom instructions on steroids. Instead of one mega-prompt trying to capture everything about you, you build a library of specialized behaviors and mix and match. For developers, this is huge. For everyone else, it's the difference between training a new intern for every conversation and having an assistant who actually remembers you hate bullet points.
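Skills are configured inside ChatGPT itself, not through code, but the "save once, mix and match" idea can be sketched in plain Python (the skill names and instruction text below are invented for illustration):

```python
# Conceptual sketch only: ChatGPT Skills live in the product's UI, not an API.
# This just illustrates building a library of behaviors and mixing them in.

SKILLS = {  # hypothetical saved skills
    "py311-style": "Always format my code in Python 3.11 style.",
    "funny-emails": "When I ask for emails, make them short and slightly funny.",
}

def apply_skills(prompt: str, skill_names: list[str]) -> str:
    """Prepend the selected skills' instructions to a one-off prompt."""
    instructions = "\n".join(SKILLS[name] for name in skill_names)
    return f"{instructions}\n\n{prompt}"

print(apply_skills("Draft a status email to the team.", ["funny-emails"]))
```

Instead of one mega-prompt, each behavior stays small and composable, and you pick only the relevant ones per conversation.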
OpenAI published research on AI agents that can conduct real biological experiments — including pipetting, measuring, and the entire wet lab workflow.
The system combines reasoning models with physical lab automation. You describe an experiment in plain language, and it figures out the protocol, controls the equipment, and executes it.
Early days, but the implications are wild. Biology's biggest bottleneck isn't ideas — it's the time-consuming, repetitive lab work that burns out PhDs. If AI can handle the execution, human researchers can focus on asking better questions. We're officially past "AI writes code" and into "AI does science."
🔥🔥 Two tools worth trying
A curated collection of prompts specifically for vibe coding projects. Skip the "write me a Python script" basics — these are designed for building actual apps with AI assistance. Bookmark it.
LLM Council — multi-LLM consensus answer tool
LLM Council is an open-source project that lets you ask the same question to multiple language models at once and combine their answers into one. Instead of depending on a single model's response, it sends your prompt to several LLMs in parallel, has each model review and rank the others' outputs, and then uses a designated "chairman" model to synthesize a final answer that reflects the best reasoning from the group. Think of it like a panel of experts debating before delivering a consensus reply.
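The fan-out, peer-review, synthesize loop is easy to picture in code. Here's a minimal sketch of that council flow with the actual LLM calls stubbed out — the function names and length-based scoring rule are placeholders, not LLM Council's real API:

```python
# Minimal sketch of an LLM-council flow. ask_model is a stub; a real
# implementation would call each provider's API, and reviewers would
# be models rather than a length heuristic.
from concurrent.futures import ThreadPoolExecutor

def ask_model(name: str, prompt: str) -> str:
    # Stub standing in for a real LLM API call.
    return f"[{name}] answer to: {prompt}"

def rank_answers(reviewer: str, answers: dict[str, str]) -> dict[str, float]:
    # Stub: each council member scores every answer.
    return {author: float(len(text)) for author, text in answers.items()}

def council(prompt: str, members: list[str], chairman: str) -> str:
    # 1. Send the prompt to all members in parallel.
    with ThreadPoolExecutor() as pool:
        futures = {m: pool.submit(ask_model, m, prompt) for m in members}
        answers = {m: f.result() for m, f in futures.items()}
    # 2. Each member reviews and ranks everyone's outputs.
    rankings = {m: rank_answers(m, answers) for m in members}
    # 3. Pick the answer with the highest combined score.
    best = max(answers, key=lambda m: sum(r[m] for r in rankings.values()))
    # 4. The chairman synthesizes the final reply around the top answer.
    return ask_model(chairman, f"Synthesize, favoring {best}: {prompt}")

print(council("Why is inference cost falling?", ["gpt", "claude", "gemini"], "chairman"))
```

The real project wires in actual providers, but the shape is the same: fan out, peer review, synthesize.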
🔥 Things You Didn't Know You Can Do With AI
Turn raw data into an interactive presentation. Here's a quick 3-step setup that transforms static spreadsheets into a dynamic, shareable deck:
1️⃣ Upload your files — Drop your PDF and CSV/Excel into ChatGPT.
2️⃣ Ask for analysis — Tell it what to look for (trends, insights, summaries).
3️⃣ Connect Replit — Have ChatGPT generate an interactive presentation that visualizes the data — complete with charts, filters, and captions.
The result: a live, data-driven presentation you can edit and share instantly — no coding, no manual chart work.
Do you like this new format?
💬 Quick poll: What's the AI tool you use daily that nobody talks about?
Hit reply — we're always hunting for underrated gems.
Until next time,
Kushank @DigitalSamaritan
