Issue #20 · 24 min read · 12 stories

Claude Sonnet 4.6 is here; Apple's AI Wearables

1M-token Sonnet 4.6, Claude Code to Figma, and an evals flywheel that starts with 20 traces.

AI writing goes bland when models sand off high-entropy detail (semantic ablation). Today: Sonnet 4.6 ships 1M context, Composio gives agents 100+ tools, and a pragmatic evals loop starts from your failures.

NEWS
4 stories
2

Apple Prepares AI-Centric Wearables: Glasses, Pendant, and Camera AirPods

Apple is reportedly accelerating development of several new AI-focused hardware products, including smart glasses, a pendant-style device, and AirPods with integrated cameras. These devices are designed around AI assistance, signaling a shift toward more ambient and personalized interactions.

4

Meta bets big on NVIDIA: Blackwell now, Rubin next

Meta is expanding its AI infrastructure by deploying millions of NVIDIA GPUs, Blackwell now and Rubin next, alongside NVIDIA CPUs and Spectrum-X Ethernet switches. This multiyear partnership aims to optimize Meta's data centers for both training and inference, integrating NVIDIA's full platform. Networking and CPUs are now first-class in inference clusters.

TECHNICAL
3 stories
1

Computational smell: ML models that map molecules to odour signatures

Companies like Google and Osmo are using ML to profile scents and search for novel molecules. This digital approach to fragrance design could reduce reliance on resource-intensive natural ingredients and surface synthesized molecules capable of evoking specific brain patterns. Builders could use this for dataset creation or representation learning.

3

Generate agent skills after success (not for one-offs)

Do the task with iteration first, then distil the final successful trajectory into a reusable skill prompt. Generating skills before completion can codify initial mistakes, while generating after success captures emergent knowledge.
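The distil-after-success discipline can be enforced in code. A minimal sketch, assuming a hypothetical `Trajectory` record and `distill_skill_prompt` helper (not from any named framework): the guard refuses to distil anything that hasn't finished successfully, so early mistakes never make it into the skill library.

```python
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    """A recorded agent run: the task plus each step that was taken."""
    task: str
    steps: list = field(default_factory=list)
    succeeded: bool = False

def distill_skill_prompt(traj: Trajectory) -> str:
    """Turn a *successful* trajectory into a reusable skill prompt.

    Raises on failed or in-progress runs, so initial mistakes are
    never codified into the skill library.
    """
    if not traj.succeeded:
        raise ValueError("Only distill completed, successful trajectories")
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(traj.steps, 1))
    return (
        f"Skill: {traj.task}\n"
        "Follow these verified steps, adapting details to the new input:\n"
        f"{numbered}"
    )
```

In practice you would feed the distilled prompt back through a model to generalize it; the guard is the part that matters.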

ANALYSIS
3 stories
1

Why AI prose goes bland: semantic ablation strips high-entropy detail

AI models systematically strip out high-entropy information, leading to generic output. This 'semantic ablation,' partly driven by decoding and alignment pressures, flattens language and structure, creating a 'JPEG of thought' that lacks original data density. Force high-entropy inputs: specific constraints, concrete examples, and your own voice fingerprints.
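The "force high-entropy inputs" advice can be mechanized as a prompt builder. A minimal sketch (the function name and structure are illustrative, not from the article): front-load the specific constraints, concrete examples, and voice samples so the model has less room to regress to generic phrasing.

```python
def high_entropy_prompt(task, constraints, examples, voice_samples):
    """Assemble a prompt that front-loads specific, high-entropy detail:
    hard constraints, concrete examples, and samples of the author's voice."""
    parts = [f"Task: {task}", "Hard constraints (do not relax):"]
    parts += [f"- {c}" for c in constraints]
    parts.append("Concrete examples to match in specificity:")
    parts += [f"- {e}" for e in examples]
    parts.append("Match the voice of these samples:")
    parts += [f"> {v}" for v in voice_samples]
    return "\n".join(parts)
```

The point is less the template than the habit: every slot carries detail the model could not have invented, which is exactly what semantic ablation erodes.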

2

AI Coding Agents Fueling 'Token Anxiety' Slot Machine Effect

AI coding agents may be creating 'token anxiety': constantly prompting and babysitting AI-generated code starts to resemble gambling. Coupled with management pressure for productivity, this could push workers into an addictive cycle that blurs human effort and machine output. Expect work-life balance and ethics to be tested as the trend normalizes.

3

Inference costs are becoming a real budget line (and a comp conversation)

Tunguz says AI inference expenses ballooned from $200/month to over $100k annually. Migrating to an open-source model saved 88%. This shift makes inference costs a major compensation factor. Builders can add token budgets, per-route model selection, caching, and eval-based regression gates.
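Those cost controls compose naturally into a small router. A minimal sketch, assuming made-up model names and a flat per-route token allowance (real systems would meter actual usage and cache responses, not just routing decisions): per-route model selection, a hard monthly token budget that degrades to the cheap default when exceeded, and a cache so repeated prompts spend nothing.

```python
class InferenceRouter:
    """Per-route model selection with a monthly token budget and a cache.
    Model names and the (model, max_tokens) scheme are illustrative."""

    def __init__(self, routes, monthly_token_budget):
        self.routes = routes            # route name -> (model, max_tokens)
        self.budget = monthly_token_budget
        self.spent = 0
        self.cache = {}                 # (route, prompt) -> decision

    def pick(self, route, prompt):
        key = (route, prompt)
        if key in self.cache:           # cache hit: no new spend
            return self.cache[key]
        model, max_tokens = self.routes.get(route, self.routes["default"])
        if self.spent + max_tokens > self.budget:
            # Budget exhausted: degrade to the cheap default route.
            model, max_tokens = self.routes["default"]
        self.spent += max_tokens
        self.cache[key] = (model, max_tokens)
        return self.cache[key]
```

Eval-based regression gates would sit on top: before a cheaper model is allowed on a route, it must pass that route's eval suite.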

TOOLS
2 stories
2

A lightweight "GSD" workflow for AI coding assistants

This GitHub repo offers a lightweight system for meta-prompting, context engineering, and spec-driven development. It's an opinionated wrapper that packages context engineering, subagents, and state, so you can run a few commands and verify the outputs.