Anthropic nears $10B+ funding; a human and agent build a browser; new multilingual scaling laws.
OpenAI and Anthropic are reportedly nearing massive new funding rounds, with SoftBank eyeing a $30 billion investment in OpenAI and Anthropic's latest round exceeding $10 billion. This signals sustained investor belief in the AI market, shaping future compute availability and platform roadmaps. Separately, new scaling laws for multilingual models dropped yesterday, and one team demonstrated building a browser from scratch with just a human and an agent.
Anthropic closed its latest funding round, securing between $10 billion and $15 billion. This investment pushes the company's valuation to $350 billion, building on last year's reported $10 billion in revenue.
SoftBank Group is reportedly in talks to invest up to $30 billion more in OpenAI, as the ChatGPT maker aims to raise $100 billion in new capital. If successful, this funding round could value OpenAI at up to $830 billion.
A year ago, DeepSeek-R1's Mixture-of-Experts model achieved state-of-the-art performance on some key benchmarks using older GPUs. This challenged the industry's focus on massive GPU clusters, initially causing a $1 trillion market value dip as major players adjusted strategies. The AI market has since rebounded, with companies like Nvidia and Anthropic seeing significant valuation increases.
Gemini 3 Flash Plans Visual Actions with Code
Google's Gemini 3 Flash now uses "Agentic Vision," combining visual reasoning with Python code execution. This allows the model to plan step-by-step actions like zooming and manipulating images to analyze visual data. It then applies a Think, Act, Observe loop, grounding its answers in direct visual evidence, which can boost accuracy for tasks like image annotation and visual math.
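The Think, Act, Observe pattern can be sketched as a loop that repeatedly crops into an image until the answer is visible. This is an illustrative toy, not Google's API: the "image" is a grid of pixel values, and each step zooms into the brightest quadrant.

```python
# Minimal, self-contained sketch of a Think-Act-Observe loop in the
# spirit of "Agentic Vision" (illustrative only, not Gemini's API).

def zoom(grid, quadrant):
    """Act: crop the grid to one quadrant ('tl', 'tr', 'bl', 'br')."""
    h, w = len(grid) // 2, len(grid[0]) // 2
    r0 = 0 if quadrant in ("tl", "tr") else h
    c0 = 0 if quadrant in ("tl", "bl") else w
    return [row[c0:c0 + w] for row in grid[r0:r0 + h]]

def think_act_observe(grid, target):
    """Zoom into the brightest quadrant until the target value is found."""
    steps = []
    while len(grid) > 1:
        # Think: score each quadrant by total brightness, pick the best.
        scores = {q: sum(sum(row) for row in zoom(grid, q))
                  for q in ("tl", "tr", "bl", "br")}
        best = max(scores, key=scores.get)
        # Act: zoom into that quadrant.
        grid = zoom(grid, best)
        # Observe: record the action and check the visual evidence.
        steps.append(best)
        if any(target in row for row in grid):
            break
    return steps, grid

image = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 9, 0],
         [0, 0, 0, 0]]
steps, region = think_act_observe(image, 9)
```

The point of the pattern is that each answer is grounded in an observed crop of the image rather than a single pass over the whole frame.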
Kimi's K2.5 is an open-source multimodal model featuring a self-directed agent swarm. It orchestrates up to 100 sub-agents for parallel workflows, reducing execution time by up to 4.5x. K2.5 excels at front-end development, visual debugging, and reconstructing websites from video, and is available via Kimi.com, API, and the Kimi Code product.
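Fanning subtasks out to parallel sub-agents can be sketched as follows. This is a toy illustration of the orchestration idea, not Kimi's actual implementation; each "sub-agent" here is just a function run in a thread pool.

```python
# Toy sketch of parallel sub-agent fan-out (illustrative only; K2.5's
# orchestration internals are not a public API).

from concurrent.futures import ThreadPoolExecutor

def sub_agent(task: str) -> str:
    # Stand-in for a model call that handles one subtask.
    return f"done: {task}"

def orchestrate(tasks, max_agents=100):
    """Run subtasks concurrently, preserving input order in the results."""
    with ThreadPoolExecutor(max_workers=max_agents) as pool:
        return list(pool.map(sub_agent, tasks))

results = orchestrate([f"subtask-{i}" for i in range(8)])
```

The speedup claim (up to 4.5x) comes from exactly this shape of workflow: independent subtasks that can execute concurrently instead of sequentially.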
Arcee AI pivoted to focus on open models built in the U.S., releasing their flagship Trinity Large model (a 400B-total, 13B-active MoE). The company invested $20 million and six months, training on Nvidia B300 Blackwell machines using techniques like the Muon optimizer and DeepSeek V3's auxiliary-loss-free balancing.
An engineer and a single Codex CLI agent built a basic web browser renderer from scratch in just three days. This project generated 20,000 lines of Rust code that renders HTML+CSS without any external crate dependencies. This contrasts with expectations that such complex software development would require sophisticated multi-agent systems and millions of lines of code.
ATLAS details adaptive transfer scaling laws for massively multilingual language models, based on the largest public pre-training study across 400+ languages. It offers data-driven guidance for builders on mixing training data, scaling model size, and mitigating the 'curse of multilinguality' to build high-performing multilingual models.
Sentience developed a 'verification layer' for browser agents using a 3-model architecture (planner, executor, verifier). This system gates each step with explicit assertions over structured snapshots, allowing smaller local models to handle execution. An Amazon shopping case study showed improved token efficiency and successful task completion, suggesting deterministic verification can deliver reliability.
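The step-gating idea can be sketched as follows. This is a hypothetical illustration of assertions over structured snapshots, not Sentience's actual interfaces: the executor returns a snapshot after each action, and the verifier must pass before the plan advances.

```python
# Illustrative sketch of verification-gated execution, loosely modeled
# on a planner/executor/verifier split (hypothetical interfaces).

def verify(snapshot, assertions):
    """Verifier: every assertion must hold over the structured snapshot."""
    return all(check(snapshot) for check in assertions)

def run_plan(plan, execute):
    """Run each step; halt unless its postconditions verify."""
    snapshot = {}
    for step in plan:
        snapshot = execute(step["action"])  # executor returns a snapshot
        if not verify(snapshot, step["asserts"]):
            raise RuntimeError(f"verification failed at: {step['action']}")
    return snapshot

# Toy executor simulating a shopping flow via canned page snapshots.
PAGES = {
    "search 'usb cable'": {"results": 12, "cart": 0},
    "add first result":   {"results": 12, "cart": 1},
}

plan = [
    {"action": "search 'usb cable'",
     "asserts": [lambda s: s["results"] > 0]},
    {"action": "add first result",
     "asserts": [lambda s: s["cart"] == 1]},
]

final = run_plan(plan, lambda action: PAGES[action])
```

Because each gate is a deterministic check rather than another model judgment, a smaller local model can run the execution step without the whole pipeline silently drifting off task.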
One analysis argues that human-readable code might become an unnecessary intermediary in software development. The author posits that AI agents could eventually bypass code altogether, directly generating machine code from natural language inputs.
Upcoming earnings from Microsoft, Meta, Amazon, and Alphabet will "shadow-price" Nvidia's performance, analysts say. Investors will scrutinize capex projections for signals of continued AI infrastructure spending or emerging "discipline." These reports collectively set the stage for Nvidia's earnings, shaping market sentiment on the continued expansion and financing of AI infrastructure.
QCon AI experts argue agentic AI won't eliminate continuous integration, but it will fundamentally change the software development lifecycle. AI-generated code creates bottlenecks in pull request reviews due to sheer volume and technical debt. CircleCI's Michael Webster points out the linear build-test-deploy model breaks down, requiring a shift to more nimble testing. Proposed approaches include test impact analysis, selective code review, and less linear build systems where agents handle more autonomous validation.
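Test impact analysis, one of the proposed approaches, can be sketched in a few lines. This is a generic illustration (the file names and coverage map are hypothetical): given a mapping from source modules to the tests that exercise them, run only the tests affected by a changeset instead of the full suite.

```python
# Hypothetical sketch of test impact analysis: select only the tests
# touched by a changeset, rather than running everything on every build.

def impacted_tests(changed_files, coverage_map):
    """Return the set of tests covering any changed file."""
    return {test
            for path in changed_files
            for test in coverage_map.get(path, [])}

# Toy coverage map: source module -> tests that exercise it.
coverage_map = {
    "app/auth.py":  ["test_login", "test_logout"],
    "app/cart.py":  ["test_checkout"],
    "app/utils.py": ["test_login", "test_checkout"],
}

selected = impacted_tests(["app/cart.py"], coverage_map)
```

Under a flood of AI-generated pull requests, this kind of selection is what keeps the build-test-deploy pipeline from becoming the bottleneck.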