A synthesis of 7 perspectives on AI, machine learning, model releases, model benchmarks, and trending AI products
AI-Generated Episode
From lifelike digital avatars to political fights over data centers, AI’s future is being built in public — and not everyone is happy about how it’s unfolding.
Digital avatars have long been stuck in the uncanny valley: stiff, slightly creepy, and useful mostly for demos rather than real products. Lemon Slice wants to change that.
The startup has raised $10.5 million from Matrix Partners, Y Combinator, and notable angels including Dropbox co-founder Arash Ferdowsi and Twitch co-founder and former CEO Emmett Shear to build a new generation of video-first AI agents. Its 20-billion-parameter model, Lemon Slice-2, can turn a single image into a streaming avatar running at 20 frames per second on a single GPU, delivered via an API or a simple embeddable widget.
Unlike earlier avatar tools that focused on narrow verticals, Lemon Slice is going for a general-purpose diffusion-based video model. That lets it power anything from customer service reps and language tutors to mental health companions, with both human and non-human characters. ElevenLabs provides the voices; Lemon Slice handles the faces and motion.
Backers argue this approach can eventually “break the avatar Turing test” and finally overcome the uncanny valley by scaling data and compute, not just crafting niche, bespoke systems. If they’re right, we may soon see video-native AI agents embedded across education, e-commerce, and corporate training.
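For a sense of what "delivered via API" could look like in practice, here is a minimal, hypothetical sketch of an image-to-streaming-avatar request. The host, endpoint, field names, and response shape are placeholders invented for illustration; they are not Lemon Slice's documented API.

```python
# Hypothetical sketch of an image-to-streaming-avatar API call.
# Endpoint, field names, and response shape are illustrative only;
# they are NOT Lemon Slice's documented API.
import requests

API_KEY = "YOUR_API_KEY"                          # placeholder credential
BASE_URL = "https://api.example-avatars.com/v1"   # placeholder host

def create_avatar_session(image_path: str, voice_id: str) -> str:
    """Upload a single portrait image and request a live avatar stream.

    Returns a URL that a front-end widget could embed as a video stream.
    """
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{BASE_URL}/avatars/stream",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            data={"voice_id": voice_id, "fps": 20},  # ~20 fps, per reported specs
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["stream_url"]

if __name__ == "__main__":
    url = create_avatar_session("tutor.png", voice_id="demo-voice")
    print("Embed this stream in a video player or widget:", url)
```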
On the hardware side, AI’s arms race just escalated. Nvidia has entered a non-exclusive licensing agreement with Groq, the AI chip startup known for its LPU (language processing unit) architecture, and will hire Groq founder Jonathan Ross and other key leaders.
CNBC reports Nvidia is spending around $20 billion on Groq assets. Nvidia hasn't characterized the deal as a full acquisition, but at that price it would be the company's largest to date. Groq has claimed its LPUs can run large language models up to 10x faster and at a tenth of the energy of conventional GPUs, and the company says its chips recently powered apps for more than 2 million developers.
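To make those claims concrete, the toy calculation below shows what "up to 10x faster at a tenth of the energy" would imply for a million generated tokens. The baseline GPU figures are assumptions chosen purely for illustration, not measurements.

```python
# Back-of-the-envelope illustration of Groq's claimed advantages.
# The baseline numbers below are assumptions for illustration, not measured figures.
gpu_tokens_per_sec = 100        # assumed GPU decode throughput for one user
gpu_joules_per_token = 0.5      # assumed GPU energy cost per generated token

speedup = 10                    # Groq's claimed "up to 10x faster"
energy_ratio = 0.1              # Groq's claimed "a tenth of the energy"

lpu_tokens_per_sec = gpu_tokens_per_sec * speedup
lpu_joules_per_token = gpu_joules_per_token * energy_ratio

tokens = 1_000_000              # one million generated tokens
print(f"GPU: {tokens / gpu_tokens_per_sec / 3600:.1f} h, "
      f"{tokens * gpu_joules_per_token / 3.6e6:.2f} kWh")
print(f"LPU: {tokens / lpu_tokens_per_sec / 3600:.1f} h, "
      f"{tokens * lpu_joules_per_token / 3.6e6:.2f} kWh")
```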
In practical terms, this move tightens Nvidia’s grip on the AI compute stack just as demand explodes. Whether framed as a licensing partnership or quasi-acquisition, it underscores how critical specialized inference hardware has become — and how hard it will be for rivals to carve out independent footholds in the AI chip market.
All that compute has to run somewhere, and in 2025 the data centers behind AI went from invisible infrastructure to public enemy — or critical industry, depending on who you ask.
Construction spending on U.S. data centers has surged 331% since 2021, with hundreds of billions pouring into new server farms. Tech giants like Google, Meta, Microsoft, and Amazon are driving a massive buildout, amplified by high-profile mega-projects like the Stargate initiative.
Communities are pushing back. Data Center Watch counts 142 activist groups across 24 states organizing against new facilities, citing environmental risks, local health concerns, and above all skyrocketing electricity bills. Protests in Michigan, Wisconsin, Tennessee, and California have stalled or blocked projects; the group estimates about $64 billion in developments have been delayed or canceled due to grassroots opposition.
Politicians are paying attention. Rising utility costs tied to AI and data centers are expected to be a defining issue in the 2026 U.S. midterms. Meanwhile, tech companies are mounting their own PR counteroffensive, funding trade groups and messaging campaigns to sell voters on jobs and economic growth.
The message is clear: as AI infrastructure scales, its social license is no longer guaranteed.
On the consumer side, AI assistants are rapidly being woven into everyday experiences.
Waymo is quietly testing an in-car Gemini assistant discovered in its app code by researcher Jane Manchun Wong. The “Waymo Ride Assistant” is designed as a friendly, succinct companion that can answer questions, tweak in-cabin settings like temperature and music, and reassure anxious riders — while carefully avoiding commentary on real-time driving decisions or safety incidents. It’s a tightly scoped, context-aware bot meant to make robotaxis feel less alien and more approachable.
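A tightly scoped assistant like this is less about raw model capability than about guardrails and a short allowlist of actions. The sketch below is purely illustrative of that pattern: the tool names, refusal rule, and keyword routing are invented for this example and are not Waymo's or Google's actual implementation.

```python
# Purely illustrative sketch of a tightly scoped in-car assistant.
# Tool names and routing logic are hypothetical; this is not Waymo's
# implementation or the Gemini API.

ALLOWED_TOOLS = {
    "set_temperature": lambda degrees_f: f"Cabin set to {degrees_f}°F",
    "set_music": lambda station: f"Now playing {station}",
    "eta": lambda: "About 8 minutes to your destination",
}

# Topics the assistant should decline to discuss.
OFF_LIMITS_TOPICS = ("why did the car", "brake", "swerve", "accident", "crash")

def handle(utterance: str) -> str:
    text = utterance.lower()
    # Refuse commentary on real-time driving decisions and safety incidents.
    if any(topic in text for topic in OFF_LIMITS_TOPICS):
        return ("I can't comment on driving decisions. "
                "You can reach Rider Support from the screen in front of you.")
    if "cold" in text or "warm" in text:
        return ALLOWED_TOOLS["set_temperature"](72)
    if "music" in text or "play" in text:
        return ALLOWED_TOOLS["set_music"]("your usual station")
    if "long" in text or "eta" in text:
        return ALLOWED_TOOLS["eta"]()
    return "I can adjust temperature or music, or answer questions about your ride."

print(handle("Why did the car brake so hard back there?"))
print(handle("It's a bit cold in here"))
```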
At home, Amazon is expanding Alexa+ as an app-like platform. New integrations with Angi, Expedia, Square, and Yelp will let users book hotels, schedule home services, and make salon appointments via natural-language chat. Alexa+ already ties into services like Uber, OpenTable, Ticketmaster, and Thumbtack. The big open question is behavior change: will users actually shift from tapping apps to delegating tasks to an AI, and will the assistant feel helpful rather than like an ad-driven recommendation engine?
Even messaging platforms are in play. In Europe, Italy’s competition authority has ordered Meta to suspend a WhatsApp policy that bans rival general-purpose AI chatbots from using its business API, calling it a potential abuse of dominance that could harm competition. The European Commission is investigating as well. Meta argues WhatsApp isn’t an “app store” for AI and says it will appeal.
Together, these moves highlight a broader shift: AI assistants are no longer standalone apps; they’re becoming infrastructural layers — and regulators are beginning to treat them that way.
Finally, the long-running copyright fight over AI training data took another turn. Author John Carreyrou and a group of writers have filed a new lawsuit against Anthropic, Google, OpenAI, Meta, xAI, and Perplexity, alleging the companies trained on pirated copies of their books.
A previous case against Anthropic produced a $1.5 billion settlement, working out to roughly $3,000 per covered book. But the judge in that case also ruled that the act of training on books could qualify as fair use, even though downloading pirated copies of them did not. The new plaintiffs argue that this structure massively underprices the "willful infringement" underpinning models that generate billions in revenue, letting AI companies buy their way out of accountability at "bargain-basement rates."
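The "bargain-basement" framing is easier to see with rough arithmetic: the reported settlement implies about 500,000 covered works, each compensated at a small fraction of the $150,000 U.S. statutory ceiling for willful infringement. A quick sketch, using the reported figures:

```python
# Rough arithmetic behind the "bargain-basement rates" argument.
# Settlement figures are from the reported Anthropic deal; the statutory
# maximum is the U.S. copyright cap for willful infringement.
settlement_total = 1_500_000_000      # reported $1.5B settlement
payout_per_work = 3_000               # reported payout per covered work
statutory_max_per_work = 150_000      # willful-infringement cap per work

implied_works = settlement_total / payout_per_work
print(f"Implied covered works: ~{implied_works:,.0f}")
print(f"Payout vs. statutory max: {payout_per_work / statutory_max_per_work:.0%} "
      f"of the per-work ceiling")
```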
As models become more commercially central, this kind of targeted litigation — focused not just on access, but on the economic value of creative works — is likely to intensify.
This week’s stories trace the full AI stack: from avatar front-ends and in-car assistants to chip architectures and the data centers reshaping local politics, all the way down to the books and rights-holders feeding the models. The throughline is tension — between scale and sustainability, convenience and control, innovation and regulation. AI isn’t just a technical project anymore; it’s a social, economic, and political one, and the battle lines are only getting clearer.