Issue #29 · 22 min read · 11 stories

SoftBank Eyes $30B OpenAI Investment

Anthropic nears $10B+ funding; a human and an agent build a browser; new multilingual scaling laws.

OpenAI and Anthropic are reportedly nearing massive new funding rounds, with SoftBank eyeing a $30 billion investment in OpenAI and Anthropic's latest round exceeding $10 billion. This signals sustained investor belief in the AI market, shaping future compute availability and platform roadmaps. Separately, new scaling laws for multilingual models dropped yesterday, and one team demonstrated building a browser from scratch with just a human and an agent.

NEWS
5 stories
3

DeepSeek MoE Model: From $1T Market Dip to Soaring Valuations

DeepSeek-R1's Mixture-of-Experts model achieved state-of-the-art performance on several key benchmarks using older GPUs a year ago. The result challenged the industry's focus on massive GPU clusters, initially triggering a $1 trillion dip in market value as major players adjusted strategies. The AI market has since rebounded, with companies like Nvidia and Anthropic seeing significant valuation increases.

4

Gemini 3 Flash Plans Visual Actions with Code

Google's Gemini 3 Flash now uses "Agentic Vision," combining visual reasoning with Python code execution. This allows the model to plan step-by-step actions like zooming and manipulating images to analyze visual data. It then applies a "Think, Act, Observe" loop, grounding its answers in direct visual evidence, which can boost accuracy for tasks like image annotation and visual math.
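To make the loop concrete, here is a minimal sketch of a Think, Act, Observe cycle over an image. Everything below is illustrative (the class, its methods, and the quadrant-cropping "action" are invented for this example, not Google's API): the agent repeatedly decides whether to zoom, crops the image via code, and grounds its final answer in the pixels it actually inspected.

```python
from dataclasses import dataclass, field

@dataclass
class VisualAgent:
    """Toy Think-Act-Observe loop over an 'image' (a 2D grid of ints).

    Illustrative only -- not the Agentic Vision API. The agent 'zooms'
    into the top-left quadrant until the region is small enough to read.
    """
    image: list                              # 2D list of pixel values
    trace: list = field(default_factory=list)

    def think(self, region):
        # THINK: is the region small enough to answer directly?
        return "answer" if len(region) <= 2 else "zoom"

    def act(self, region):
        # ACT: crop to the top-left quadrant (stand-in for generated
        # Python code that manipulates the image).
        half = len(region) // 2
        return [row[:half] for row in region[:half]]

    def observe(self, region):
        # OBSERVE: ground the answer in the pixels actually seen.
        return max(max(row) for row in region)

    def run(self):
        region = self.image
        while self.think(region) == "zoom":
            self.trace.append(f"zoom to {len(region)//2}x{len(region)//2}")
            region = self.act(region)
        return self.observe(region)

agent = VisualAgent(image=[[i * 8 + j for j in range(8)] for i in range(8)])
print(agent.run())  # -> 9, the max pixel in the final 2x2 crop
```

The key property the story highlights is the last step: the answer is computed from the cropped region itself, not from a one-shot glance at the full image.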

5

Open-Source Agent Swarm Orchestrates 100 Sub-Agents, 4.5x Faster

Kimi's K2.5 is an open-source multimodal model featuring a self-directed agent swarm. It orchestrates up to 100 sub-agents for parallel workflows, reducing execution time by up to 4.5x. K2.5 excels at front-end development, visual debugging, and reconstructing websites from video, and is available via Kimi.com, API, and the Kimi Code product.
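The speedup comes from fan-out/fan-in parallelism: sub-agents work concurrently instead of one after another. A minimal sketch of that orchestration pattern with `asyncio` (the function names are illustrative, and this is not Kimi's implementation):

```python
import asyncio

async def sub_agent(task_id: int) -> str:
    # Stand-in for one sub-agent working on a shard of the overall task.
    await asyncio.sleep(0.01)  # simulated work; run serially this would cost N * 0.01s
    return f"result-{task_id}"

async def orchestrate(num_agents: int) -> list:
    # Fan out all sub-agents concurrently, then gather results in order.
    return await asyncio.gather(*(sub_agent(i) for i in range(num_agents)))

results = asyncio.run(orchestrate(100))
print(len(results))  # -> 100, in roughly the wall-clock time of one task
```

In a real swarm the orchestrator would also merge conflicting outputs and retry failed shards; the point here is only that 100 concurrent workers cost about one task's latency, which is where headline multipliers like 4.5x come from.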

TECHNICAL
4 stories
1

US-Built 400B MoE Open Model Ships from Arcee AI

Arcee AI pivoted to focus on open models built in the U.S., releasing their flagship Trinity Large model (400B total, 13B active MoE). The company invested $20 million and six months, training on B300 Nvidia Blackwell machines using techniques like the Muon optimizer and DeepSeek V3's auxiliary-loss-free balancing.
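The auxiliary-loss-free balancing idea can be sketched in a few lines: rather than adding a load-balancing term to the loss, a per-expert bias is added to routing scores before top-k selection and nudged by a gradient-free rule. A toy illustration under those assumptions (not Arcee's or DeepSeek's actual code):

```python
def route(scores, bias, k=2):
    """Pick top-k experts by biased score; the bias steers which experts
    are selected without entering the gating weights or the loss."""
    ranked = sorted(range(len(scores)),
                    key=lambda e: scores[e] + bias[e], reverse=True)
    return ranked[:k]

def update_bias(bias, load, target, step=0.01):
    """Gradient-free update: push overloaded experts down, underloaded up."""
    return [b - step if l > target else b + step for b, l in zip(bias, load)]

print(route([0.9, 0.1], [0.0, 0.0], k=1))   # -> [0]: raw scores favor expert 0
print(route([0.9, 0.1], [-1.0, 1.0], k=1))  # -> [1]: accumulated bias redirects load
```

Run over many batches, `update_bias` drives per-expert load toward the target without a balancing loss competing with the language-modeling objective.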

2

Single Agent Builds Browser Renderer in 3 Days

An engineer and a single Codex CLI agent built a basic web browser renderer from scratch in just three days. This project generated 20,000 lines of Rust code that renders HTML+CSS without any external crate dependencies. This contrasts with expectations that such complex software development would require sophisticated multi-agent systems and millions of lines of code.

3

New Scaling Laws for 400+ Language Models

ATLAS details adaptive transfer scaling laws for massively multilingual language models, based on the largest public pre-training study across 400+ languages. It offers data-driven guidance for builders on mixing training data, scaling model size, and mitigating the 'curse of multilinguality' to build high-performing multilingual models.
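As a reminder of what "scaling law" means operationally: one fits a functional form such as L = a·N^(-b) to (model size, loss) pairs and extrapolates. The sketch below fits the exponent by least squares in log-log space on synthetic, noise-free data; ATLAS's actual functional form adds cross-lingual transfer terms and is fit to real multilingual runs, so this is only the basic mechanic:

```python
import math

# Synthetic (size, loss) pairs generated exactly from L = a * N^-b, so the
# log-log fit recovers the exponent; real runs are noisy and multilingual
# laws like ATLAS include transfer terms across languages.
a, b = 10.0, 0.3
sizes = [1e6, 1e7, 1e8, 1e9]
losses = [a * n ** -b for n in sizes]

# Ordinary least squares on log L = log a - b * log N.
xs = [math.log(n) for n in sizes]
ys = [math.log(l) for l in losses]
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(-slope, 3))  # -> 0.3, the recovered exponent b
```

The practical payoff of a fitted law is that it turns "how much data per language?" into an equation you can solve before spending the compute.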

4

Browser Agents: Verification Layer Delivers Reliability

Sentience developed a 'verification layer' for browser agents using a 3-model architecture (planner, executor, verifier). This system gates each step with explicit assertions over structured snapshots, allowing smaller local models to handle execution. An Amazon shopping case study showed improved token efficiency and successful task completion, suggesting deterministic verification can deliver reliable browser agents.
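The gating idea can be sketched compactly: a plan carries an explicit assertion per step, the executor acts, and the verifier checks the assertion against a structured snapshot before the next step runs. Everything below is invented for illustration (the `Snapshot` fields, the hard-coded plan, the example URL), not Sentience's architecture:

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    """Structured page snapshot the verifier asserts over (illustrative)."""
    url: str
    cart_items: int

def planner(goal):
    # A larger model would emit this plan; hard-coded here for the sketch.
    # Each step pairs an action with a deterministic assertion.
    return [
        {"action": "open", "check": lambda s: "product" in s.url},
        {"action": "add_to_cart", "check": lambda s: s.cart_items == 1},
    ]

def executor(step, snap):
    # Stand-in for a small local model driving the browser.
    if step["action"] == "open":
        return Snapshot(url="https://shop.example/product/123",
                        cart_items=snap.cart_items)
    if step["action"] == "add_to_cart":
        return Snapshot(url=snap.url, cart_items=snap.cart_items + 1)
    raise ValueError(step["action"])

def run(goal):
    snap = Snapshot(url="about:blank", cart_items=0)
    for step in planner(goal):
        snap = executor(step, snap)
        if not step["check"](snap):   # verifier gates every step
            return False              # deterministic failure; replan or retry here
    return True

print(run("buy the item"))  # -> True
```

Because each assertion is a cheap deterministic check over a snapshot rather than another LLM judgment, failures surface at the step where they occur, which is what lets a small executor model stay reliable.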

ANALYSIS
3 stories
1

AI Agents Could Make Code Obsolete, InfoWorld Argues

One analysis argues that human-readable code might become an unnecessary intermediary in software development. The author posits that AI agents could eventually bypass code altogether, directly generating machine code from natural language inputs.

2

Big Tech Capex Reports Signal Nvidia's AI Outlook

Upcoming earnings from Microsoft, Meta, Amazon, and Alphabet will "shadow-price" Nvidia's performance, analysts say. Investors will scrutinize capex projections for signals of continued AI infrastructure spending or emerging "discipline." These reports collectively set the stage for Nvidia's earnings, shaping market sentiment on the continued expansion and financing of AI infrastructure.

3

QCon Chat: Agentic AI Bottlenecks CI/CD, Demands New Processes

QCon AI experts argue agentic AI won't eliminate continuous integration, but it will fundamentally change the software development lifecycle. AI-generated code creates bottlenecks in pull request reviews due to sheer volume and technical debt. CircleCI's Michael Webster points out that the linear build-test-deploy model breaks down, requiring a shift to more nimble testing. Proposed approaches include test impact analysis, selective code review, and less linear build systems where agents handle more autonomous validation.
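Test impact analysis, the first of those approaches, reduces to a mapping from changed files to the tests that exercise them, so only impacted tests run on each agent commit. A minimal sketch (the module names and the hard-coded mapping are invented; real tools derive this mapping from coverage data):

```python
# Map source modules to the tests that exercise them. In practice this
# mapping is derived from per-test coverage data, not written by hand.
DEPENDENCIES = {
    "auth.py":    {"test_login", "test_tokens"},
    "billing.py": {"test_invoices"},
    "ui.py":      {"test_render"},
}

def impacted_tests(changed_files):
    """Select only the tests affected by a change set."""
    selected = set()
    for f in changed_files:
        selected |= DEPENDENCIES.get(f, set())
    return selected

print(sorted(impacted_tests(["auth.py"])))  # -> ['test_login', 'test_tokens']
```

When an agent produces dozens of small commits per hour, running two impacted tests instead of the whole suite is what keeps the pipeline from becoming the bottleneck Webster describes.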