SpaceX Eyes Cursor for $60B, ChatGPT Images 2.0 Ships
Bezos raises $10B for Project Prometheus, GitHub pauses Copilot sign-ups, MCP security flaw

SpaceX announced it has secured the right to acquire AI coding startup Cursor for $60 billion, with the option to complete the purchase later this year or instead pay $10 billion for the collaboration. The deal comes as SpaceX prepares for one of the largest IPOs ever. Cursor raised at a $50 billion valuation just weeks ago. The two companies say they are already working closely on coding and AI.
GitHub is pausing new sign-ups for Copilot Pro, Pro+, and Student plans while tightening usage limits for existing subscribers. The company says agentic workflows have fundamentally changed compute demands, with long-running parallelised sessions consuming far more resources than the original plan supported. Opus models are no longer available on Pro. Users who hit limits can upgrade to Pro+ for 5x the allocation, or cancel for a full April refund.
Anthropic appears to have removed Claude Code access from its $20/month Pro subscription for new users, based on changes to its support documentation. The page previously read "Using Claude Code with your Pro or Max plan" and now says "Max plan" only. Some existing Pro users report they still have access. Anthropic's head of growth Amol Avasare clarified on X that this is a 2% A/B test, not a full rollout.
A DIY biohacker used an M3 Ultra Mac Studio, Claude, and an Oxford Nanopore MinION sequencer to sequence their own genome at home, driven by a family history of autoimmune disease. The setup costs roughly $1,100 per run and requires 100GB of storage. They used Claude to help interpret the raw data and work through bioinformatics tooling. Not clinically rigorous, but a striking demonstration of how consumer hardware and AI are lowering the barrier to genomics.
Jeff Bezos is closing a $10 billion funding round for Project Prometheus, an AI lab valued at $38 billion. JPMorgan and BlackRock are among the investors. The lab builds AI systems designed to understand the laws of physics for use in industry, engineering, and manufacturing. It marks Bezos' first operational role since stepping down as Amazon CEO in 2021. The round started at $6.2 billion and expanded due to demand.
Gemini CLI now supports subagents, letting the main agent delegate work to specialised sub-agents that run in parallel with their own instructions, tools, and context. A developer can spin up a frontend specialist, test writer, and docs agent simultaneously instead of working through tasks sequentially. Each subagent runs in isolation, keeping context contained and avoiding the overloaded single-session problem that bottlenecks complex coding workflows.
OpenAI's new image model reasons through visual tasks before generating output. Images 2.0 can search the web for reference, produce up to eight coherent images from a single prompt, and cross-check its own results. A "Thinking" mode maintains character consistency across frames, opening up storyboarding and multi-scene workflows. The model ships as gpt-image-2 via the API, with advanced features on Plus, Pro, and Business tiers.
Alibaba's Qwen team released an early preview of their next proprietary model. Qwen 3.6 Max Preview posts the top score on six coding benchmarks including SWE-bench Pro and Terminal-Bench 2.0, with substantial gains over Qwen 3.6 Plus in agentic coding (SkillsBench +9.9, SciCode +6.3). The model also improves world knowledge and instruction following. Available now on Qwen Studio, with API access via Alibaba Cloud Model Studio coming soon.
Security researchers at OX Security found an architectural vulnerability in Anthropic's Model Context Protocol that allows remote command execution through legitimate protocol behaviour. The flaw affects over 150 million downloads and more than 200,000 public servers. The team executed commands on six live production platforms and bypassed security checks on 9 of 11 major MCP marketplaces. Because the vulnerability exists at the protocol layer, patching individual servers doesn't fix it.
A team replaced GPT-4 with a locally hosted small language model for metadata extraction in their nightly batch pipeline. GPT-4 produced valid JSON roughly 85% of the time despite elaborate system prompts. The local SLM, fine-tuned on 300 labelled examples and constrained with grammar-based decoding, hit 99.6% valid output. The article walks through the full migration: dataset creation, fine-tuning with QLoRA, and deploying behind vLLM with structured output guarantees.
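The core idea behind the team's 99.6% figure is that structure should come from the decoding constraints, not from the model. Their actual stack (QLoRA fine-tuning plus vLLM's grammar-constrained decoding) needs a GPU, but the principle can be sketched in a few lines: emit the JSON scaffold deterministically and let the model fill only the value slots, so output parses by construction. The field names and the stub model below are illustrative, not from the article.

```python
import json

SCHEMA_KEYS = ["title", "author", "year"]  # hypothetical metadata fields

def fake_model_pick(field, candidates):
    """Stand-in for a language model scoring candidate values for one slot."""
    return candidates[0]

def constrained_extract(candidates_per_field):
    # The scaffold, not the model, emits all structural tokens (braces,
    # quotes, commas), so the result is valid JSON 100% of the time.
    obj = {k: fake_model_pick(k, candidates_per_field[k]) for k in SCHEMA_KEYS}
    return json.dumps(obj)

out = constrained_extract({
    "title": ["Dune"], "author": ["Frank Herbert"], "year": ["1965"],
})
parsed = json.loads(out)  # never raises: structure is guaranteed
```

Real grammar-constrained decoders generalise this by masking, at every step, any token that would take the output outside the grammar, rather than fixing a single template.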
Google Research introduces ReasoningBank, a memory framework that distils generalised reasoning strategies from an agent's successful and failed experiences. Rather than storing raw interaction logs, the system extracts reusable patterns that transfer across tasks. The framework enables agents to continuously improve after deployment without retraining the underlying model. The team released both the paper and code.
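A minimal sketch of the idea, assuming the mechanics rather than reproducing the released code: distil each episode into a short, generalised lesson and retrieve relevant lessons for new tasks (here by naive token overlap; the paper would use an LLM to write the lessons and a proper retriever).

```python
class StrategyMemory:
    def __init__(self):
        self.entries = []  # distilled lessons, not raw interaction logs

    def distil(self, task, outcome, lesson):
        # In ReasoningBank an LLM extracts the lesson from success or
        # failure; in this toy version the caller supplies it directly.
        self.entries.append({"task": task, "outcome": outcome, "lesson": lesson})

    def retrieve(self, new_task, k=2):
        words = set(new_task.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(words & set(e["task"].lower().split())),
            reverse=True,
        )
        return [e["lesson"] for e in scored[:k]]

mem = StrategyMemory()
mem.distil("fix failing unit test", "success", "run tests before and after edits")
mem.distil("scrape paginated site", "failure", "check for rate limits first")
print(mem.retrieve("fix flaky unit test"))
```

The key property is that the memory grows at deployment time while the underlying model weights never change.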
Allen AI introduces BAR (Branch-Adapt-Route), a modular post-training recipe that trains independent domain experts through their own complete pipelines and composes them into a unified model via mixture-of-experts. Each expert can be developed, upgraded, or replaced without touching the others. The approach solves a persistent problem: retraining from scratch is expensive, but training further on new data causes the model to lose capabilities it already had.
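The composability claim can be illustrated with a toy router (assumed mechanics, not Allen AI's implementation): experts are trained and stored independently, a gate picks one per query, and swapping an expert touches nothing else. Here a keyword match stands in for a learned gating network.

```python
def math_expert(q): return "math: " + q.upper()
def code_expert(q): return "code: " + q.upper()

class Router:
    def __init__(self, experts):
        self.experts = experts  # name -> callable, each trained separately

    def route(self, query):
        # Toy keyword gate; BAR would use a learned mixture-of-experts router.
        for name, fn in self.experts.items():
            if name in query:
                return fn(query)
        return next(iter(self.experts.values()))(query)

r = Router({"math": math_expert, "code": code_expert})
# Upgrade one expert in isolation; the math expert is untouched.
r.experts["code"] = lambda q: "code-v2: " + q
```

This is what avoids the retrain-versus-forget dilemma the blurb describes: new capability arrives as a new or replaced expert, not as further training on a monolithic model.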
Morgin.ai researchers tried to fine-tune an "uncensored" Qwen model to simulate a White House press secretary and found it kept softening specific words regardless of tuning. They measured the gap between the probability a word deserves on pure fluency grounds and the probability the model actually assigns it, calling this the "flinch." Across seven pretrains from five labs, safety-filtered models consistently suppressed charged terms without triggering any visible refusal.
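As described, the flinch is just the gap between two log-probabilities for the same word: what a fluency-only reference assigns versus what the safety-filtered model assigns. A one-function sketch (variable names are illustrative):

```python
import math

def flinch(p_reference, p_model):
    """Gap between fluency-grounded and actually-assigned log-probability.

    Positive values mean the model suppresses the word below what pure
    fluency would predict, without emitting any visible refusal.
    """
    return math.log(p_reference) - math.log(p_model)

# A charged term the reference finds likely but the tuned model avoids:
print(round(flinch(p_reference=0.20, p_model=0.02), 3))  # → 2.303
```

Measured this way, suppression shows up as a consistent positive score on charged terms even when the model never refuses outright.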
Ed Zitron identifies four signs the AI bubble is beginning to unwind. He examines NVIDIA's dependency on a small number of hyperscaler customers, the widening gap between AI capital expenditure and revenue, the pattern of companies announcing AI savings that never materialise in earnings, and the growing disconnect between AI demo performance and production reliability. Each "horseman" gets specific financial data rather than vibes.
Sequoia investor Julien Bek wrote a blog post arguing the next trillion-dollar company won't sell software as a product; it will sell outcomes, using AI to deliver services at software margins. The post hit 3 million views on X. AI-native startups can deliver customer service, accounting, or legal work directly rather than selling tools to the people who do that work. Bek sees business process outsourcing as the sector most at risk.
ChinaTalk's analysis of the Jensen-Dwarkesh conversation zeroes in on chip export controls. The piece argues the Trump administration has had zero movement on tooling restrictions despite teasing them, and that only Nvidia and AMD benefit from AI chip exports to China. A detailed table walks through each layer of the semiconductor stack to show which companies actually win from exports. Congress may take matters into its own hands with the MATCH Act.
Ethan Ding presents labour economics data showing AI coding productivity gains follow a K-shape: senior engineers get meaningfully more productive while junior engineer output goes flat or declines. The piece challenges the dominant narrative that AI coding agents change everything, arguing that the loudest voices are venture-backed founders clearing backlogs while the engineers who built the products tell a more complicated story.
Ctx is a local context manager that binds AI conversation history to named workstreams, letting you resume exactly where you left off across both Claude Code and Codex sessions. Each workstream tracks which specific conversation it came from, preventing transcript drift when you have multiple chats open. You can branch workstreams without mixing contexts, pin or exclude saved entries, and search across everything via a local browser frontend. All local, SQLite-backed, no API keys.
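A hypothetical sketch of the kind of SQLite schema such a tool might use (this is not Ctx's actual implementation): workstreams bound to a specific conversation ID, with entries that can be pinned or excluded.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE workstream (
    id INTEGER PRIMARY KEY,
    name TEXT UNIQUE,
    conversation_id TEXT      -- which chat this stream came from
);
CREATE TABLE entry (
    id INTEGER PRIMARY KEY,
    workstream_id INTEGER REFERENCES workstream(id),
    content TEXT,
    pinned INTEGER DEFAULT 0,
    excluded INTEGER DEFAULT 0
);
""")
con.execute("INSERT INTO workstream (name, conversation_id) VALUES (?, ?)",
            ("auth-refactor", "claude-code-abc123"))
con.execute("INSERT INTO entry (workstream_id, content, pinned) VALUES (1, ?, 1)",
            ("decided on JWT rotation",))
rows = con.execute(
    "SELECT content FROM entry WHERE workstream_id=1 AND excluded=0"
).fetchall()
```

Binding each workstream row to one conversation_id is what prevents the transcript drift the blurb mentions when several chats are open at once.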
OpenSpec is an open-source framework for spec-driven development with AI coding agents, now at 41,000 GitHub stars. Instead of prompting an agent with natural language and hoping it infers your intent, you write a structured specification that the agent follows deterministically. The project provides a TypeScript SDK for defining specs and integrates with Claude Code, Cursor, and other coding agents. It fills the gap between developer intent and agent execution.
Prefect's FastMCP library has reached 24,000 GitHub stars as the go-to Python framework for building MCP servers and clients. It wraps the protocol's complexity behind a clean, decorator-based API similar to FastAPI. Define tools with type-annotated functions, add a decorator, and you have a working MCP server. The library handles transport, serialisation, and error handling, and integrates with OpenTelemetry, LangChain, and the OpenAI SDK out of the box.
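The decorator-registry pattern the blurb describes can be shown in miniature. This is toy code, not FastMCP's API: a decorator records each type-annotated function and infers a parameter schema from its signature, which is roughly how such frameworks turn plain functions into protocol-described tools.

```python
import inspect

class MiniServer:
    def __init__(self, name):
        self.name = name
        self.tools = {}

    def tool(self, fn):
        # Infer a parameter schema from the function's type annotations,
        # then register the function under its own name.
        params = {
            p.name: p.annotation.__name__
            for p in inspect.signature(fn).parameters.values()
        }
        self.tools[fn.__name__] = {"fn": fn, "params": params}
        return fn

srv = MiniServer("demo")

@srv.tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

print(srv.tools["add"]["params"])  # schema inferred from type hints
```

A real MCP framework layers transport, serialisation, and error handling on top of this registry; the ergonomics the stars reward are mostly in how little the tool author has to write beyond the annotated function.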