Issue #33 · 16 min read · 8 stories

SpaceX Acquires xAI, Plans Data Centers in Space

Google's AI energy race, Moltbook's internet impact, and a new robot brains startup.

Elon Musk's SpaceX acquired xAI yesterday and outlined plans to build data centers in space, an ambitious bid for compute capacity. Google, meanwhile, is investing heavily to dominate the AI energy race, a development that will shape future infrastructure costs. And one analysis from the last 24 hours argues that LLMs are becoming the new high-level language, shifting how builders approach system design.

NEWS
5 stories
2

Claude AI Plans First Mars Rover Route for 400-Meter Drive

NASA's Perseverance Rover completed its first AI-planned 400-meter drive on Mars, with Anthropic's Claude AI generating the route commands. This success is expected to halve route-planning time, allowing more scientific data collection and signaling AI's potential for automating complex planning in high-latency, resource-constrained systems.

3

SpaceX Acquires xAI, Plans Space Data Centers

SpaceX officially acquired xAI, a merger that TechCrunch reports creates the world's most valuable private company. The combined entity plans to build data centers in space, leveraging SpaceX's infrastructure for xAI's AI development.

4

10x Faster Agent Browsing via Actionbook

Actionbook claims to make AI agent browsing 10x faster by supplying agents with current action manuals and precise DOM structure data. This skips per-page layout analysis, saving tokens and making agents more resilient on dynamic web pages and single-page apps (SPAs). The tool works with any LLM or browser automation framework.
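The core idea can be sketched as a precomputed lookup: instead of sending the full DOM to an LLM on every step, the agent resolves a known action to a ready-made selector. This is a minimal illustration; the manual's structure, site names, and selectors below are assumptions, not Actionbook's real format.

```python
# Hypothetical "action manual": a precomputed map from site actions to
# stable selectors, so an agent can act without re-analyzing page layout.
ACTION_MANUAL = {
    "example-shop.com": {
        "add_to_cart": {"selector": "button[data-testid='add-to-cart']", "method": "click"},
        "search": {"selector": "input#search-box", "method": "fill"},
    }
}

def resolve_action(site: str, action: str) -> dict:
    """Look up a ready-made action instead of asking the LLM to parse the DOM."""
    try:
        return ACTION_MANUAL[site][action]
    except KeyError:
        raise ValueError(f"No manual entry for {action!r} on {site!r}")

step = resolve_action("example-shop.com", "add_to_cart")
```

Because the manual is keyed by action rather than by pixel or DOM position, the lookup keeps working when a dynamic page re-renders, which is where the claimed resilience comes from.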

5

$1B Raised for General-Purpose Robot Foundation Models

Two-year-old startup Physical Intelligence raised over $1 billion at a $5.6 billion valuation to build 'general-purpose robotic foundation models', aiming for a 'ChatGPT for robots.' The founders are focused on pure research and cross-embodiment learning, which helps models generalize across diverse robot platforms, and have not set a commercialization timeline.

TECHNICAL
2 stories
1

Millions of API Keys Exposed in Moltbook Database Leak

A misconfigured Supabase database for Moltbook, an AI agent social network, exposed 1.5 million API keys, tens of thousands of emails, and private messages. The vulnerability came from a hardcoded Supabase API key in client-side JavaScript, granting unauthenticated read/write access because Row Level Security (RLS) was not enabled. This highlights how 'vibe-coded' apps can expose sensitive data and third-party credentials when secure defaults like RLS are overlooked.
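One concrete takeaway: hardcoded keys of this kind are easy to detect before shipping. Supabase API keys are JWTs, which always begin with "eyJ", so a simple scan of bundled JavaScript catches them. A minimal sketch (the bundle string below is a made-up example, not Moltbook's actual code):

```python
import re

# JWTs are three base64url segments joined by dots, always starting "eyJ"
# (the base64url encoding of '{"'). Scanning client bundles for this
# pattern flags hardcoded Supabase keys before they ship.
JWT_PATTERN = re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")

def find_hardcoded_keys(js_source: str) -> list[str]:
    """Return all JWT-shaped strings embedded in client-side JavaScript."""
    return JWT_PATTERN.findall(js_source)

bundle = 'const supabase = createClient(url, "eyJhbGciOi.eyJyb2xl.c2lnbmF0dXJl");'
leaks = find_hardcoded_keys(bundle)
```

A scan like this is only half the fix: Supabase's anon key is designed to be public, but only when RLS policies restrict what it can read and write.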

2

LLM Observability: Visualizing Traces, Custom Evals with Opik

A deep dive into AI observability covers challenges such as the limits of classic metrics and the need for manual annotation. It introduces Opik, an open-source LLMOps platform, for visualizing LLM call traces and enabling custom evaluations to catch production issues. The author emphasizes treating AI agents as data products.
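The tracing pattern at the heart of such tools is straightforward: wrap each LLM call so its inputs, output, and latency are recorded alongside the result. Below is a generic sketch of that pattern, not Opik's actual API; the names and trace schema are assumptions.

```python
import functools
import time

TRACES = []  # in-memory trace store; a platform like Opik would persist and visualize these

def trace(fn):
    """Record inputs, output, and latency for each call (generic sketch)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "name": fn.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@trace
def fake_llm_call(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"echo: {prompt}"

fake_llm_call("hello")
```

Once calls are captured this way, custom evaluations become queries over the trace store, which is what makes "agents as data products" more than a slogan.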

ANALYSIS
4 stories
1

Pereiro: LLM Agents Are The Next High-Level Language

Federico Pereiro argues that LLM agents are the next evolution in programming, acting like high-level languages that abstract away complexity. He hypothesizes these agents will make developers an order of magnitude more productive, aided by standards like MCP that let agents interoperate and break down application silos.

2

Simon Willison: Moltbook Agents Control Phones, Negotiate Deals

Simon Willison highlights Moltbook, a social network where OpenClaw agents share 'skills' like controlling Android phones or negotiating car purchases. While users find value in these unrestricted personal assistants, prompt injection and catastrophic failures present major security risks given the lack of mature safety solutions.

3

AGI Achieved, Nature Analysis Argues

A Nature analysis argues that current LLMs have already achieved Artificial General Intelligence (AGI), meeting reasonable standards including Turing's criteria. The authors present evidence of advanced reasoning, problem-solving, and cross-domain transfer, claiming LLMs exhibit human-level cognitive competence. This analysis suggests builders may be underestimating current LLM potential, prompting a re-evaluation for complex, generalist tasks.

4

Towards Data Science: Scarcity, Not Scale, Builds Intelligent AI

A Towards Data Science article argues that scarcity, not abundance, is the primary driver of intelligence in AI systems. The author contrasts the efficient design of the human brain and Voyager spacecraft with resource-intensive LLMs, suggesting constraints force innovation. This perspective highlights the value of efficient systems like quantized models and TinyML for real-world applications, particularly in resource-limited environments.
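The quantized models mentioned above embody this scarcity argument: storing weights as 8-bit integers plus a single scale factor cuts memory roughly 4x versus 32-bit floats, at the cost of small rounding error. A minimal sketch of symmetric int8 quantization (illustrative, not any particular library's implementation):

```python
def quantize_int8(values: list[float]) -> tuple[list[int], float]:
    """Symmetric int8 quantization: map floats onto [-127, 127] via one scale."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # fall back to 1.0 for all-zero input
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate floats from int8 codes."""
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.33, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# each restored value differs from the original by at most one quantization step
```

The rounding error is bounded by half the scale, which is why quantization tends to preserve model behavior while slashing the resource footprint that TinyML deployments care about.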