Model Mayhem and Messaging Mutiny

Meta’s Next Leap, WhatsApp’s AI Snafu, and the Star Trek Future We’re Not Ready For (Yet)

In partnership with Writer

Welcome Back!

There’s a lot to catch up on already. Meta’s Llama 4 shows up to the frontier with ambition (and a little swagger), a courtroom gets spicy thanks to an AI avatar defense, and people are trying to delete WhatsApp’s AI assistant like it’s Clippy in a trench coat. Meanwhile, a researcher is on a mission to humble our AI overlords, and top computer scientists are starting to sound like they binged Star Trek: The Next Generation over the weekend. Let’s unpack.

You’ve heard the hype. It’s time for results.

After two years of siloed experiments, proofs of concept that fail to scale, and disappointing ROI, most enterprises are stuck. AI isn't transforming their organizations — it’s adding complexity, friction, and frustration.

But Writer customers are seeing positive impact across their companies. Our end-to-end approach is delivering adoption and ROI at scale. Now, we’re applying that same platform and technology to build agentic AI that actually works for every enterprise.

This isn’t just another hype train that overpromises and underdelivers.
It’s the AI you’ve been waiting for — and it’s going to change the way enterprises operate. Be among the first to see end-to-end agentic AI in action. Join us for a live product release on April 10 at 2pm ET (11am PT).

Can't make it live? No worries — register anyway and we'll send you the recording!

Meta Drops Llama 4—and It’s Not Just a Baby Alpaca

Meta just launched Llama 4, a new set of flagship open-weight models that include both base and instruction-tuned variants. Built in partnership with Microsoft (yes, you read that right), Llama 4 will run on Azure and power Microsoft Copilot and Meta’s own AI assistant.

Why enterprises should care:

  • Open-weight models mean you can fine-tune without giving Meta your IP—or your soul.

  • Expect big shifts in model sourcing strategy: Llama 4 is competitive with OpenAI's and Anthropic's models on benchmarks, so you're less locked into any single vendor.

  • Microsoft now co-supplies two top-tier models (OpenAI and Meta), giving Copilot an edge as a multi-model AI interface.

Takeaway: If you’re planning your internal AI roadmap, Llama 4 might offer the agility you need without the enterprise tax.

AI Avatar Walks into a Courtroom…Gets Annihilated

Jerome Dewald tried to use an AI avatar to argue his case before a New York appeals court. The judges were not amused. The bot, created using OpenAI’s GPT, delivered an opening argument that tanked so hard it triggered a formal scolding and a permanent suspension from further filings in that court.

Why this matters:

  • This wasn’t a low-stakes stunt. It was a real legal case—tanked by hallucinated facts and a wildly overconfident avatar.

  • Courts are now setting precedent: AI isn’t a substitute for real counsel, and procedural norms still matter.

Takeaway: This is your reminder that regulatory readiness isn’t just a compliance box. It’s a survival skill.

WhatsApp’s AI Assistant: Opt-In? Try Opt-Never

Meta’s new AI assistant inside WhatsApp is here—and can’t be deleted. It shows up in search, integrates with chats, and runs on—you guessed it—Llama 3. But the backlash is real. Users are calling it invasive, non-consensual, and eerily persistent.

Enterprise implications:

  • Consumer trust erosion is a reputational contagion. If Meta doesn’t course-correct, it could bleed into workplace adoption of Meta’s AI tools.

  • Privacy-first vendors just got a new marketing angle—“We won’t force-feed you AI.”

Takeaway: The WhatsApp AI rollout is a masterclass in how not to do UX with emerging tech.

François Chollet Is Out to Prove AI Is Still Kinda Dumb

The man behind Keras has a new target: AI hype. His ARC benchmark (the Abstraction and Reasoning Corpus) is designed to push AI models to think abstractly—like humans. Spoiler: they’re not doing great. Most large models fail at tasks a fifth grader could crush.
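
For a flavor of what these puzzles look like, here’s a toy, made-up ARC-style task sketched in Python. The grids and the mirroring rule below are invented for illustration—this is not Chollet’s actual ARC harness—but the shape is the same: a few input/output examples, a hidden transformation to infer, and all-or-nothing grading on a held-out grid.

```python
# Toy ARC-style task (illustrative only): grids are lists of lists of small
# ints ("colors"). Real ARC tasks hide the rule and ask the solver to infer
# it from a handful of examples, then grade the predicted grid exactly.

from typing import List

Grid = List[List[int]]

def mirror_left_right(grid: Grid) -> Grid:
    """The hidden rule for this toy task: flip each row left-to-right."""
    return [list(reversed(row)) for row in grid]

# Few-shot "training" pairs the solver gets to see (input grid, output grid).
train_pairs = [
    ([[1, 0], [2, 3]], [[0, 1], [3, 2]]),
    ([[5, 5, 0]], [[0, 5, 5]]),
]

# Held-out test input; the solver must produce the matching output grid.
test_input: Grid = [[7, 0, 4], [0, 0, 4]]
expected: Grid = mirror_left_right(test_input)

def score(predicted: Grid, target: Grid) -> bool:
    """ARC-style scoring is all-or-nothing: the whole grid must match."""
    return predicted == target

if __name__ == "__main__":
    # A "copy the input" baseline fails; inferring the rule from the
    # training pairs and applying it to the test input succeeds.
    print("copy-the-input baseline:", score(test_input, expected))
    print("rule-inferring solver:  ", score(mirror_left_right(test_input), expected))
```

Trivial for a person, yet stripped of memorized patterns like this, large models stumble far more often than their benchmark scores suggest.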

Why it matters:

  • We’ve built excellent autocomplete machines—not real thinkers.

  • Enterprise leaders need benchmarks that go beyond token prediction and get into reasoning and robustness.

Takeaway: Before your team rewires workflows around LLMs, ask what your model can actually reason about—and test it like it’s applying for a job.

The Star Trek Dream Gets a Formal Prediction

A group of top computer scientists, including Google DeepMind veterans, just went full Trekkie: they predict AI will soon unlock “post-scarcity economics.” Think replicators, tireless assistants, and near-zero marginal cost of production. The wildcard? Governance.

Why you should stay dialed in:

  • AI will accelerate productivity, yes—but enterprise success will depend on access, control, and implementation.

  • Who owns the “replicator” matters. The value capture equation is shifting from creation to coordination.

Takeaway: Your AI future isn’t just about the models; it’s about architecting the ecosystems that make them work, responsibly and equitably.

TL;DR:

  • Meta’s Llama 4 makes open-weight models sexy AND competitive.

  • AI in courtrooms = not ready for prime time (or the judges).

  • WhatsApp’s AI assistant is opt-in in name only; in practice there’s no way to remove it. Expect backlash.

  • François Chollet wants to humble the hype; your model should pass his test, not just OpenAI’s.

  • Star Trek AI future? Sure—but only if you control the replicator, not just the script.

AI may be speeding toward a sci-fi future, but enterprise execution still lives in today’s risk matrix. Pilot smart, benchmark often, and question the default vendor narrative.

Stay sharp,


Cat Valverde
Founder, Enterprise AI Solutions
Navigating Tomorrow's Tech Landscape Together

Your Feedback = Our Fuel

How was today’s newsletter?
