Back to archive
Issue #38 · 22 min read · 12 stories

Anthropic Nears $20B Round; OpenAI Seeks $100B

MoE fine-tuning gets 12x faster, Waymo unveils its world model, and a new analysis explores 'write-only code'.

Anthropic is reportedly closing a $20 billion funding round, with OpenAI aiming for $100 billion, reflecting continued investor appetite for foundation models. Builders can now fine-tune MoE models up to 12x faster with new kernels that shipped yesterday, directly cutting training costs. Also, an analysis contrasts the world models experts build with the word models LLMs learn.

NEWS
5 stories

Anthropic Nears $20B Funding at $350B Valuation

Anthropic has reportedly secured $20 billion in new funding, pushing its valuation to $350 billion. The round, backed by Nvidia and Microsoft, underscores strong investor appetite for frontier AI labs, the rising cost of training large models, and the heavy capital demands of the AI race.

2

Fine-Tune MoE Models 12x Faster with Unsloth Kernels

Unsloth released Triton kernels that speed up fine-tuning of Mixture of Experts (MoE) LLMs by up to 12x. They also cut VRAM usage by over 35% and enable 6x longer context lengths without accuracy loss. The kernels work across MoE architectures and GPUs, including consumer hardware.
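The kernels target the expert-routing math at the heart of MoE layers. As a rough NumPy illustration of what a top-k gated MoE forward pass computes (a sketch of the general technique, not Unsloth's implementation; all dimensions and weights here are toy values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 experts, top-2 routing, 5 tokens of hidden size 8.
n_experts, top_k, d_model, d_ff = 4, 2, 8, 16
tokens = rng.standard_normal((5, d_model))

w_router = rng.standard_normal((d_model, n_experts))       # router weights
w_up = rng.standard_normal((n_experts, d_model, d_ff))     # per-expert FFN up-proj
w_down = rng.standard_normal((n_experts, d_ff, d_model))   # per-expert FFN down-proj

logits = tokens @ w_router                                 # (5, n_experts)
topk_idx = np.argsort(logits, axis=-1)[:, -top_k:]         # chosen experts per token
topk_logits = np.take_along_axis(logits, topk_idx, axis=-1)
gates = np.exp(topk_logits)                                # softmax over selected experts
gates /= gates.sum(axis=-1, keepdims=True)

out = np.zeros_like(tokens)
for e in range(n_experts):
    mask, slot = np.nonzero(topk_idx == e)                 # tokens routed to expert e
    if mask.size == 0:
        continue
    h = np.maximum(tokens[mask] @ w_up[e], 0.0)            # expert FFN with ReLU
    out[mask] += gates[mask, slot][:, None] * (h @ w_down[e])
```

The per-expert Python loop above is exactly the part that fused, grouped-matmul GPU kernels collapse into far fewer launches, which is where speedups of this kind come from.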

3

ChatGPT Growth Accelerates, OpenAI Nears $100B Round

Sam Altman told employees ChatGPT's growth is reaccelerating. OpenAI is reportedly closing in on a $100 billion funding round, which would make it one of the world's most valuable startups. This suggests continued investor confidence in large-scale AI platforms.

4

Claude Agents Automate Goldman Sachs Back Office

Goldman Sachs is using Anthropic's Claude to automate back-office functions such as accounting and client vetting. The bank highlighted Claude's strength on complex, rules-based tasks beyond typical coding applications, with the aim of constraining headcount growth.

5

Databricks AI Products Hit $1.4B Revenue, Raises $5B

Databricks closed a $5B funding round (including $2B of debt capacity) at a $134B valuation. The company reported $5.4B in annualized revenue, up 65% YoY, with AI products alone generating $1.4B. CEO Ali Ghodsi indicated a potential IPO when market conditions are right, signaling continued investor confidence in AI-driven data platforms.

TECHNICAL
3 stories
1

Waymo Trains Autonomous Cars in Simulated Extreme Scenarios

Waymo unveiled its World Model, built on Google DeepMind's Genie 3, to simulate rare driving events. It generates multi-sensor outputs like camera and lidar data for complex scenarios, including extreme weather. This allows Waymo to train its driver AI on situations difficult to find in real-world data, accelerating safety testing.

2

Yelp Shares RAG System Build for "Yelp Assistant"

Yelp detailed its RAG system for "Yelp Assistant," highlighting real-time data ingestion and separate pipelines for structured and unstructured content. They broke the inference pipeline into specialized models for retrieval, source selection, and keyword generation. This architecture reduced latency and improved quality for their production AI assistant.
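The decomposition Yelp describes, with separate specialized stages for keyword generation, source selection, and retrieval, can be sketched as a staged pipeline. The stage implementations and data below are hypothetical stand-ins (Yelp uses trained models and real indexes, not keyword heuristics):

```python
from dataclasses import dataclass

@dataclass
class Doc:
    source: str   # e.g. "reviews" (unstructured) or "business_attrs" (structured)
    text: str

# Hypothetical corpora standing in for Yelp's structured/unstructured pipelines.
CORPUS = [
    Doc("business_attrs", "Luigi's: outdoor seating, open until 11pm"),
    Doc("reviews", "Great carbonara at Luigi's, but long waits on weekends"),
    Doc("reviews", "Tony's pizza is the best late-night slice in town"),
]

def generate_keywords(query: str) -> list[str]:
    # Stage 1 stand-in: a small model would rewrite the query into search terms.
    return [w.lower().strip("?") for w in query.split() if len(w) > 3]

def select_sources(query: str) -> set[str]:
    # Stage 2 stand-in: route attribute-style questions to structured data.
    structured_cues = {"open", "hours", "seating", "price"}
    hit = structured_cues & set(generate_keywords(query))
    return {"business_attrs"} if hit else {"business_attrs", "reviews"}

def retrieve(query: str, k: int = 2) -> list[Doc]:
    # Stage 3 stand-in: keyword-overlap scoring in place of a vector index.
    keywords, sources = generate_keywords(query), select_sources(query)
    scored = [
        (sum(kw in d.text.lower() for kw in keywords), d)
        for d in CORPUS if d.source in sources
    ]
    return [d for score, d in sorted(scored, key=lambda s: -s[0])[:k] if score > 0]

context = retrieve("Does Luigi's have outdoor seating?")
```

Splitting the pipeline this way lets each stage be a small, fast model tuned for one job, which is how an architecture like this can reduce end-to-end latency while improving answer quality.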

3

Building Secure AI Agents with Hard Guardrails

This technical deep dive details building 'Agent One,' a secure autonomous AI agent using Claude and n8n as an alternative to OpenClaw. It emphasizes steering agents through context, enforcing hard security boundaries such as Docker isolation and tool approval, and keeping coordination contracts minimal.
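The "hard boundary" idea is that tool calls pass through an explicit allow/deny gate the model cannot talk its way around, unlike prompt-level rules. A minimal sketch of such a gate (tool names and policy are illustrative, not from the article):

```python
# Hypothetical hard guardrail: every tool call is checked by code, not prompts.

ALLOWED_TOOLS = {"read_file", "search_docs"}      # auto-approved, low risk
NEEDS_APPROVAL = {"write_file", "run_shell"}      # human in the loop

class ToolDenied(Exception):
    pass

def gate_tool_call(tool: str, args: dict, approver=None):
    """Return args if the call may proceed; raise ToolDenied otherwise."""
    if tool in ALLOWED_TOOLS:
        return args
    if tool in NEEDS_APPROVAL:
        # Require an explicit human approval callback for risky tools.
        if approver is not None and approver(tool, args):
            return args
        raise ToolDenied(f"{tool} requires explicit approval")
    # Default-deny: tools not on either list are blocked outright.
    raise ToolDenied(f"{tool} is not on the allowlist")
```

Because the gate sits outside the model, a prompt-injected instruction to run a shell command still dead-ends at `ToolDenied` unless a human approves it; container isolation then backstops anything that slips through.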

ANALYSIS
4 stories
1

Write-Only Code: AI Generates Unread Production Code

A new concept, "Write-Only Code," suggests AI will generate production code humans never read or review. As agents handle more complex tasks, engineers shift from writing code to designing systems and managing constraints. This implies a change in the software development lifecycle, prioritizing automated guarantees over manual inspection.
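"Automated guarantees over manual inspection" concretely means engineers author and review the checks, not the diff. A hypothetical property check gating an AI-written sort routine (the contract is what a human reads; the implementation is not):

```python
import random

def ai_generated_sort(xs):
    # Stand-in for generated code no human reads; only the contract below is reviewed.
    return sorted(xs)

def check_sort_contract(fn, trials=200):
    """Machine-checked contract: output is ordered and a permutation of the input."""
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        ys = fn(xs)
        assert all(a <= b for a, b in zip(ys, ys[1:])), "not ordered"
        assert sorted(xs) == sorted(ys), "not a permutation of input"
    return True
```

If the generated implementation changes tomorrow, the same contract re-verifies it automatically; the human effort moves from reading code to designing constraints like these.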

2

LLMs Shouldn't Be Treated as Compilers

One argument suggests LLMs, while capable of generating code from natural language, shouldn't replace compilers. The issue isn't hallucination but natural language's underspecification, leading LLMs to make implicit design choices. This risks developers becoming code consumers, shifting focus from precise specification to merely generating output.
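Underspecification is easy to demonstrate: two implementations can both satisfy the same natural-language request while diverging on cases the request never mentioned. A hypothetical example:

```python
# Spec: "deduplicate a list of names" -- silent on case sensitivity and ordering.

def dedupe_a(names):
    # Implicit design choice: case-sensitive, preserves first-occurrence order.
    seen, out = set(), []
    for n in names:
        if n not in seen:
            seen.add(n)
            out.append(n)
    return out

def dedupe_b(names):
    # Implicit design choice: case-insensitive, returns sorted output.
    return sorted({n.lower() for n in names})

names = ["Ana", "ana", "Bob"]
# Both functions "deduplicate", yet they disagree on this input.
```

A compiler would reject an ambiguous program; an LLM silently picks one interpretation, which is the article's point about developers drifting from specification to consumption.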

3

Latent Space: LLMs Lack 'World Models'

LLMs are "word models" that predict tokens, not "world models" that simulate complex, adversarial environments. This analysis argues LLMs are exploitable because they don't model being modeled, unlike systems designed for strategic interaction.

4

Yegge: Anthropic's 'Hive Mind' Culture Fuels Innovation

Steve Yegge argues Anthropic is in a 'Golden Age' of innovation, similar to early Amazon or Google. He attributes this to a 'hive mind' culture prioritizing 'vibes' and high work volume over rigid processes, where excess work prevents internal politicking. Yegge suggests maintaining this environment is critical, as a shift to scarcity could end this innovative phase.