Meta will deploy tens of millions of Amazon's Graviton5 processors to run agentic AI workloads, marking one of the largest custom silicon commitments outside GPU clusters. The deal signals a broader shift: inference-heavy agent workflows need different economics than training, and CPU-class chips can handle the orchestration layer at a fraction of GPU cost. Amazon gets a flagship customer for Graviton. Meta gets compute diversity beyond its Nvidia supply chain.
Google Puts $40B Into Anthropic. X Money Nears Launch.
Musk-Altman trial opens. Shopify bets on juniors. A 23-year-old cracks a 60-year maths conjecture.
Google is investing $10 billion in Anthropic at a $350 billion valuation, with $30 billion more contingent on performance milestones. The deal arrives days after Amazon committed $25 billion, meaning Anthropic has locked in $65 billion from two competing cloud giants in a single month. Both investments come with compute access tied to each provider’s hardware, giving Anthropic leverage few startups have ever held.
Related Digital has locked in $16 billion from Blackstone and PIMCO to build a 1-gigawatt data centre campus in Saline, Michigan. The facility will power Oracle's AI business and sits within the Stargate project, the $500 billion initiative led by Oracle, OpenAI, and SoftBank. Michigan's governor called it the largest investment in state history. Local residents are protesting potential grid strain and pollution from the 250-acre site.
Liam Price, 23, cracked an Erdős conjecture that had stumped professional mathematicians for six decades. He did it with a single prompt to GPT-5.4 Pro. What separates this from previous AI maths results is the method: the model used a completely novel proof technique that experts believe could have broader applications. Scientific American reports the approach, not just the answer, is what has mathematicians paying attention.
The trial over OpenAI's future begins this week, pitting Musk's claim that Altman betrayed the nonprofit's founding mission against OpenAI's position that Musk wanted control all along. Musk dropped his fraud claims days before proceedings, narrowing the legal battle to the conversion question. At stake is whether OpenAI can restructure as a for-profit valued at $300 billion, a move that would reshape how the industry thinks about AI governance.
Anthropic recruited 69 employees, gave each a $100 budget, and let Claude agents negotiate and close deals on their behalf with no human intervention. Across four parallel Slack-based marketplaces, agents completed 186 transactions worth over $4,000. Opus agents fetched $3.64 more per item than Haiku, but participants could not tell which model represented them. One agent arranged a doggy playdate between two employees.
More than three years after acquiring Twitter, Musk is close to launching X Money, a banking and payments platform inside the app. Bloomberg reports early testers are seeing 3% cash back, a 6% savings rate (15 times the US national average), and an X-branded metal debit card stamped with the user's @ handle. The service still needs authorisation across all 50 states before a full public rollout.
Built on Andrej Karpathy's technical lecture, this browser-based guide lets you click through every stage of LLM training. It covers web crawl data collection, tokenisation and BPE vocabulary building, transformer pretraining with billions of parameters, and RLHF alignment. Each stage includes interactive diagrams with representative numbers from frontier models. It is the clearest single-page explanation of how these models actually work that exists on the web.
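To make the tokenisation stage concrete, here is a toy byte-pair-encoding training loop: count adjacent symbol pairs across a word-frequency table, merge the most frequent pair, repeat. This is an illustrative sketch of the algorithm the guide covers, not code from the guide itself.

```python
from collections import Counter

def bpe_train(words: dict[str, int], num_merges: int) -> list[tuple[str, str]]:
    """Learn BPE merge rules from a word -> frequency table. Toy version."""
    vocab = {tuple(w): c for w, c in words.items()}  # words as tuples of symbols
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for sym, count in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        # Rewrite every word, fusing occurrences of the best pair.
        new_vocab = {}
        for sym, count in vocab.items():
            out, i = [], 0
            while i < len(sym):
                if i < len(sym) - 1 and (sym[i], sym[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(sym[i])
                    i += 1
            new_vocab[tuple(out)] = count
        vocab = new_vocab
    return merges

merges = bpe_train({"low": 5, "lower": 2, "newest": 6, "widest": 3}, 3)
# First merges learned: ('e', 's'), then ('es', 't'), then ('l', 'o')
```

Frontier tokenisers run the same loop over bytes at far larger scale, with tens of thousands of merges.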
A solo developer built a CNCF dashboard with AI agents running in parallel terminal sessions. After the initial honeymoon, builds broke in untraceable ways and architectural choices got silently overwritten. Four months later, the project has 63 CI/CD workflows, 91% test coverage, and agents resolving community bugs in 30 minutes. The key insight: agent leverage comes from the measurement loops the codebase wraps around the model, not from the model itself.
Swizec rebuilt a production invoicing system where eight figures of revenue depend on the code working correctly. Cursor stats confirm 97% AI-generated output. The unlock was to stop watching the agent work: message it on Slack, move on, and wait to hear back. The promised 10x only materialises when you stop micromanaging the agent and treat it like a colleague who works asynchronously.
Using Vei, the author set up a simulated service company with five named agents and ran an outage scenario. Each agent behaved honestly and within its role constraints. The organisation they formed did not. Emergent failures appeared that no individual agent caused: information hoarding, conflicting priorities, and coordination breakdowns mirroring human organisations. The conclusion is that multi-agent alignment requires organisational design, not just model alignment.
The author draws a line from manufacturing offshoring to what is happening with software. The same economic logic that moved factories overseas now applies to codebases: if AI can handle the repetitive work, the institutional knowledge atrophies alongside it. When the underlying competence leaves, it does not come back just because you need it to. The comparison is not subtle, but the historical pattern makes the argument harder to dismiss.
In the span of 24 hours last week, OpenAI doubled its prices with GPT-5.5 while DeepSeek released V4 at a fraction of the cost. The smooth price-performance curve that let teams pick a sensible middle-tier model has split into two clusters with a widening gap. For anyone building agents or high-volume inference pipelines, the routing decision just got harder. The era of good-enough defaults is ending.
A design engineer writes about watching AI hollow out the parts of the job that made it worth doing. The tools have improved, the output has increased, but the craft that drew them into the field is shrinking. It is a personal piece about identity loss in a profession that keeps redefining what it values. If you have felt the gap between shipping more and caring less, this will resonate.
Tanay Sai catalogues the scaffolding developers build around model limitations and tracks how quickly it becomes obsolete. PDF chat systems that once needed chunking, embeddings, and vector stores now fit inside a single long-context call. Structured output formatting that required regex parsers is now an API parameter. The pattern holds for multi-agent frameworks, browser scripting layers, and voice pipelines. The advice: build your harnesses cheap enough to throw away.
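The kind of scaffolding Sai describes is easy to picture: before structured output became an API parameter, teams wrote regex parsers to fish JSON out of model prose. A throwaway sketch of that pattern (illustrative, not from the article):

```python
import json
import re

def extract_json(model_output: str) -> dict:
    """Old-style harness: pull a JSON object out of free-form model text."""
    match = re.search(r"\{.*\}", model_output, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

# A model reply that wraps the payload in prose, as models often did.
reply = 'Sure! Here is the result:\n{"invoice_id": 42, "total": 99.5}\nThanks!'
data = extract_json(reply)
```

A schema-constrained response parameter makes this parser dead code, which is exactly the point: harnesses like it should be cheap enough to delete without regret.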
Peter Szász maps agentic AI patterns onto the three pillars of engineering management. Execution is the safest bet: autonomous PR triage, backlog grooming, and documentation upkeep. Team dynamics sit in a middle tier, with virtual representatives handling routine cross-team requests. Personal development carries the most risk, where coaching agents track achievements for self-evaluations. The EM role shifts from operational task management toward clarity of purpose, relationships, and organisational influence.
While the prevailing assumption is that AI eliminates junior engineering roles, Shopify expanded its internship programme from 100 to over 1,000. VP Engineering Farhan Thawar says interns who grew up alongside AI tools reimagine what it looks like to build. The four-month programme doubles as mutual assessment, with Shopify watching how candidates use AI under real conditions. Knowing when to lean on the tools and when to think deeper is the new hiring filter.
The New York Times profiles Dwarkesh Patel, whose podcast averages two million listens per episode and draws guests from Satya Nadella to Ilya Sutskever. Patel spends two weeks preparing for each interview using flash cards, question trees, and hired tutors. He sublets office space from Leopold Aschenbrenner and lives with an Anthropic researcher. His scepticism about continual learning shifted how AI labs discuss the problem publicly.
This tutorial-style project replaces Notion-based workflows with Google Cloud and Workspace APIs as the orchestration layer. Voice input flows through Google Cloud Speech-to-Text, Gemini extracts intent via function calling, and Google Sheets acts as a dynamic schema registry. Five hands-on tutorials cover building a real-time voice agent with ADK, email triage pipelines, and cross-ecosystem intelligence. Most useful for teams already deep in Google Workspace looking to add voice-native automation.
Snyk's research team analysed nearly 4,000 agent skills across major marketplaces and found credential theft, backdoor installation, and data exfiltration buried in publicly available packages. They released Agent Scan's Skill Inspector as a free tool to check skills before installation. The CLI auto-discovers agents and skills across Claude Code, Cursor, Gemini CLI, and Windsurf. 91% of confirmed malicious skills use prompt injection as the attack vector.
Package Manager Guard wraps npm, pip, and other package managers to check every install against real-time threat intelligence before code executes. It runs installations inside OS-native sandboxes on macOS and Linux, preventing install scripts from modifying the system even if a threat slips past detection. Setup takes one command. Every installation event gets logged with a full audit trail. Built for both developers and AI coding agents that install packages autonomously.
The TypeScript educator published the agent skills from his personal .claude directory and the repo crossed 23,000 GitHub stars within days. Skills cover practical engineering workflows across Claude Code and Codex. The project reflects a growing pattern where experienced developers share their agent configurations as reusable packages, turning personal automation into community infrastructure. If you use Claude Code, this is the fastest way to see what a well-tuned setup looks like.
Built by Steve Yegge's team and written in Go, Beads adds structured memory to coding agents so they retain context between sessions. The project has passed 21,000 GitHub stars and supports Claude Code, Cursor, and other agent harnesses. Memory entries are stored as markdown files that agents can read and write during work. It solves the problem of agents losing everything they learned the moment you close the terminal.
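The pattern is simple to sketch: persist what the agent learned as files it rereads at the start of the next session. A minimal Python illustration of the idea (hypothetical file layout and function names, not Beads' actual format; Beads itself is written in Go):

```python
from pathlib import Path

MEMORY_DIR = Path(".agent-memory")  # hypothetical location, not Beads' layout

def remember(topic: str, note: str) -> None:
    """Append a note to the markdown file for a topic."""
    MEMORY_DIR.mkdir(exist_ok=True)
    with (MEMORY_DIR / f"{topic}.md").open("a") as f:
        f.write(f"- {note}\n")

def recall(topic: str) -> list[str]:
    """Read back all notes on a topic; empty if none exist yet."""
    entry = MEMORY_DIR / f"{topic}.md"
    if not entry.exists():
        return []
    return [line[2:].rstrip() for line in entry.read_text().splitlines()]

remember("build", "tests require POSTGRES_URL to be set")
notes = recall("build")
```

Because the entries are plain markdown on disk, the agent can read and update them with ordinary file tools mid-task, and they survive the terminal closing.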
Obscura replaces headless Chrome for web scraping and agent automation at a fraction of the resource cost. It runs real JavaScript via V8, supports Chrome DevTools Protocol, and works with Puppeteer and Playwright out of the box. Memory footprint sits at 30 MB versus Chrome's 200 MB or more. Stealth mode includes anti-fingerprinting and tracker blocking. Ships as a single binary with no external dependencies.