A federal judge in San Francisco granted Anthropic a preliminary injunction, barring the Trump administration from enforcing its directive to ban Claude from federal agencies. Judge Rita Lin called the blacklisting "classic illegal First Amendment retaliation" after Anthropic publicly questioned the DOD's position on autonomous weapons. The ruling pauses the Pentagon's national security designation but a final verdict could be months away.
Court Blocks Pentagon's Anthropic Ban + H100 Rentals Hit $2/hr
GitLab's founder went founder mode on his cancer, Apple distils Gemini, and chess grandmasters weaponise bad moves
U.S. pharma giant Eli Lilly reached a $2.75 billion agreement with Hong Kong-based Insilico Medicine to bring AI-developed drugs to the global market. Insilico gets $115 million upfront, with the rest tied to milestones and royalties. The company has developed at least 28 drugs using generative AI, with nearly half already in clinical trials. The two firms have collaborated on AI-based discovery since 2023.
Apple's deal with Google gives it full access to Gemini's model weights inside Apple data centres, including the ability to distil the large model into smaller versions for on-device processing. The technique transfers knowledge from Gemini's architecture into lightweight models suited for iPhone hardware. Combined with the previously reported Siri overhaul for iOS 27, this positions Apple to run competitive AI locally rather than relying purely on cloud inference.
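Apple's exact recipe is not public, but distillation in the generic sense has a simple core: soften the teacher's and student's output distributions with a temperature and train the student to match. A minimal sketch (all numbers and names illustrative, not Apple's or Google's method):

```python
import math

def distill_loss(teacher_logits, student_logits, T=2.0):
    """Generic knowledge-distillation objective: temperature-softened
    cross-entropy of the student against the teacher's distribution."""
    def softmax(xs):
        m = max(xs)
        exps = [math.exp((x - m) / T) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    p = softmax(teacher_logits)   # soft teacher targets
    q = softmax(student_logits)   # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The loss is minimised when the student reproduces the teacher's softened distribution, which is how knowledge transfers from a large model into one small enough for phone hardware.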
George Larson built a two-agent system running on the cheapest VPS available, using IRC as the communication layer. The public agent (nullclaw) handles visitor queries and can clone repos to substantiate claims with real code. The private agent (ironclaw) sits behind Tailscale with access to email and sensitive context. Model selection is deliberate — Haiku for conversational triage, Sonnet for complex tasks — keeping the daily budget under $2.
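The cost control comes down to a routing decision per message. A toy version of that triage (the keyword heuristic and model labels are illustrative, not Larson's actual logic):

```python
def pick_model(task: str) -> str:
    """Route cheap conversational turns to a small model and complex
    work to a larger one; this is what keeps daily spend low."""
    complex_markers = ("clone", "refactor", "debug", "analyse", "write")
    if any(marker in task.lower() for marker in complex_markers):
        return "sonnet"   # heavier model for multi-step tasks
    return "haiku"        # cheap model for conversational triage
```

In practice the interesting part is the budget: triage traffic dominates by volume, so routing it to the cheap tier is where almost all the savings come from.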
Chroma Context-1 is a 20B parameter search agent trained on 8,000+ synthetic tasks that achieves retrieval performance comparable to frontier LLMs. Its key innovation is self-editing context: the agent actively prunes retrieved documents to free up space for further search, preventing the context bloat that degrades multi-hop retrieval. The model weights and codebase are open-sourced under permissive licensing. Philipp Schmid's companion analysis, also in this edition, covers how Chroma, Cursor, and Kimi all train agentic models with RL.
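The self-editing idea can be sketched as a prune step between search hops: score what's in context against the live query, drop what no longer earns its tokens. A toy version (character budget and substring scoring stand in for token counts and real relevance scoring):

```python
def prune_context(docs, query_terms, budget):
    """Keep the most relevant retrieved documents that fit the budget,
    freeing space for further search hops."""
    scored = sorted(docs, key=lambda d: -sum(t in d for t in query_terms))
    kept, used = [], 0
    for doc in scored:
        if used + len(doc) > budget:
            continue          # drop low-value or oversized documents
        kept.append(doc)
        used += len(doc)
    return kept
```

The point is the loop structure, not the scoring: pruning between hops is what prevents the context bloat that degrades multi-hop retrieval.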
RotorQuant reimagines Google's TurboQuant — which itself triggered a $100B sell-off in memory chip stocks last week — by replacing its d×d rotation matrix with Clifford algebra rotor products. The result: 44x fewer parameters and 10-31x faster computation on both NVIDIA and Apple Silicon, while matching attention fidelity on Qwen2.5-3B-Instruct. Where TurboQuant uses a sledgehammer to decorrelate vectors for quantisation, RotorQuant uses geometric algebra as a scalpel.
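The parameter saving comes from composing planar rotations instead of storing a dense d×d matrix. The simplest building block is a Givens rotation: one angle per plane, and the norm (hence attention fidelity up to quantisation) is preserved. A minimal illustration, not RotorQuant's actual rotor construction:

```python
import math

def givens(v, i, j, theta):
    """Rotate vector v in the (i, j) plane by theta — one parameter,
    versus d*d parameters for a dense rotation matrix."""
    c, s = math.cos(theta), math.sin(theta)
    out = list(v)
    out[i] = c * v[i] - s * v[j]
    out[j] = s * v[i] + c * v[j]
    return out

v = [1.0, 2.0, 3.0, 4.0]
w = givens(v, 0, 2, 0.7)   # norm of w equals norm of v
```

Composing a handful of these decorrelates coordinates for quantisation at a fraction of the dense matrix's parameter count, which is the intuition behind the 44x figure.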
A security researcher decrypted 377 Cloudflare Turnstile programs from ChatGPT's network traffic. Each message triggers a fingerprinting routine that checks 55 properties across three layers: browser hardware (GPU, fonts), Cloudflare edge data (city, IP, region), and the ChatGPT React app itself (router context, loader data, bootstrap hashes). A bot that spoofs browser fingerprints but doesn't render the actual React SPA will fail. The encryption keys are embedded in the payloads they protect.
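The layered structure is the defence: a verifier that requires signals from all three layers defeats a bot that fakes only the browser. A toy sketch of that logic (keys are illustrative placeholders, not the actual 55 properties):

```python
def verify(fingerprint: dict) -> bool:
    """Pass only if every layer — browser hardware, Cloudflare edge
    data, and React-app runtime state — is present and populated."""
    required = {
        "browser": ("gpu", "fonts"),
        "edge": ("city", "ip"),
        "app": ("router_context", "loader_data"),
    }
    return all(
        all(key in fingerprint.get(layer, {}) for key in keys)
        for layer, keys in required.items()
    )
```

A headless scraper that spoofs `browser` but never executes the SPA has nothing to put in `app`, so it fails regardless of how good its hardware fingerprint is.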
Philipp Schmid analyses three technical reports that share a common pattern: start from a strong base model, train inside the production harness, and use outcome-based rewards. Kimi K2.5's Agent Swarm learns to spawn parallel sub-agents through RL rather than hand-coded orchestration. Cursor's Composer 2 runs real-time RL from production traffic. Chroma's Context-1 learns self-editing context. All three invested heavily in asynchronous large-scale rollout infrastructure.
The H100 GPU rental market flipped from shortage to oversupply in under a year. Prices dropped from $8/hr to below $2 across seven resale platforms, driven by reserved compute hitting the secondary market, the rise of capable open-weights models like Llama 3, and fewer new foundation model startups absorbing capacity. For most workloads, renting now beats buying. The Blackwell generation may repeat this cycle even faster.
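The rent-vs-buy claim is easy to sanity-check with break-even arithmetic. The $2/hr figure is from the article; the purchase price below is an assumed round number:

```python
h100_price = 30_000.0        # assumed purchase price, USD
rental_rate = 2.0            # USD/hr, the article's floor price

# Hours of rental that equal the purchase price
hours_to_break_even = h100_price / rental_rate      # 15,000 hours

# Even at 24/7 utilisation that is well over a year and a half,
# before power, cooling, and depreciation — which is why renting
# now wins for most workloads.
months_at_full_utilisation = hours_to_break_even / (24 * 30)
```

Real workloads rarely run 24/7, so effective break-even stretches further still.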
Daniel Miessler identifies five converging forces: autonomous component improvement (Karpathy's Autoresearch pattern applied everywhere), intent-based engineering (defining ideal states and letting AI close the gap), the shift from opacity to transparency, the realisation that most knowledge work is scaffolding, and expertise diffusing into public knowledge. The throughline is that articulating what you want becomes the primary bottleneck, not building it.
AI drove chess toward perfect play and a suffocating draw rate. Now grandmasters are winning by doing the opposite — playing intentionally suboptimal moves that force opponents out of engine preparation. Bloomberg profiles how players like Carlsen use anti-computer strategies: positions that look wrong to Stockfish but exploit the gap between algorithmic evaluation and human psychology. The draw rate at top tournaments has fallen since players adopted this approach.
Cognitive offloading affects adults and children differently. A 45-year-old using AI to summarise papers is experiencing atrophy — a weakened muscle that can recover. A 14-year-old who never learns to evaluate sources is experiencing foreclosure — neural pathways that were never formed. Research shows participants over 46 had higher critical thinking alongside lower AI reliance, while 17-25 year-olds showed the inverse. The distinction matters because one is reversible and the other may not be.
Sid Sijbrandij built GitLab into an $800M-revenue public company. In 2022 he was diagnosed with bone cancer in his spine. After surgery, radiation, and chemo so intense it required four blood transfusions, the cancer returned in 2024 and doctors told him standard-of-care options were exhausted. So he applied the approach he used to build GitLab: maximal diagnostics (25TB of sequencing data published openly on osteosarc.com); 10+ personalised treatments running in parallel, including mRNA vaccines, custom antibody-drug conjugates, and CAR-T cell therapy; and 10 new companies through evenone.ventures to scale this for other cancer patients.
Stanford's Jai fills the gap between giving an AI agent your real account and stopping to build a container. Run jai claude or jai codex and your working directory keeps full read-write access while the rest of your home sits behind a copy-on-write overlay. No Dockerfiles, no bwrap flags, no images. It exists because people are already reporting wiped home directories and deleted working trees from agents given ordinary machine access.
Miasma sets up a server that traps AI web scrapers in an endless loop of poisoned content and self-referential links. Hidden HTML links invisible to humans but visible to crawlers direct scraper traffic to the trap. It integrates with Nginx, runs with minimal memory, and installs via a single Cargo command. The tool targets companies that scrape public websites at scale for training data without consent.
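The trap mechanism is a page generator: plausible-looking filler plus a link that humans never see but crawlers follow, pointing to the next generated page. A toy sketch (paths and markup are illustrative, not Miasma's actual output):

```python
def trap_page(n: int) -> str:
    """Emit a self-referential trap page: filler text plus a
    CSS-hidden link to the next page in an endless chain."""
    filler = "lorem ipsum " * 40   # stand-in for poisoned content
    return (
        "<html><body>"
        f"<p>{filler}</p>"
        f'<a href="/trap/{n + 1}" style="display:none">more</a>'
        "</body></html>"
    )
```

Because each page links only forward into the trap, a scraper that follows hidden links burns its crawl budget on generated junk instead of the real site.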