Daily TEA – Backup of the Backup, Invisible AI Coworkers, and Agents That Read First
NASA fault tolerance, Ramp AI adoption, heart failure prediction, Sam Altman on AGI, research-driven coding agents
Hello, dear TEA-mates! Happy Monday! It turns out this version is still preferable. :D Let’s go! Here is what you need to know today.
1. 🚀 How NASA Built Artemis II’s Fault-Tolerant Computer
NASA’s Orion capsule for Artemis II carries 2 Vehicle Management Computers, each containing 2 Flight Control Modules, with 2 radiation-hardened BAE Systems RAD750 CPUs per module (8 in total) running flight software in parallel. The system uses triple-redundant memory with automatic single-bit error correction on every read, triple-redundant networking with self-checking switches, and a “fail-silent” design in which a faulty CPU shuts itself down rather than sending a wrong answer. A silenced module can reset, re-synchronize its state with the operating modules, and rejoin mid-flight. As NASA’s Orion Software Integration and Verification Lead Nate Uitenbroek put it: “We can lose three FCMs in 22 seconds and still ride through safely on the last FCM.” Even after a complete power loss, the spacecraft can stabilize its attitude, restart power generation via its solar panels, and re-establish communications using independent Backup Flight Software that runs on dissimilar hardware and a separate operating system. (Read More)
🫖 TEA For Thought: “Backup of the backup of the backup is always the way to go.”
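Two of the patterns above are easy to sketch in a few lines. This is a hypothetical illustration, not NASA flight code: a fail-silent self-checking pair (a module emits an answer only when its two lockstep CPUs agree, and stays silent otherwise), and a majority vote over triple-redundant memory copies that masks a single corrupted copy.

```python
def self_checking_pair(compute_a, compute_b, inputs):
    """Run the same computation on two lockstep CPUs.
    If their outputs disagree, the module goes silent (returns None)
    rather than emitting a possibly wrong answer."""
    a, b = compute_a(inputs), compute_b(inputs)
    return a if a == b else None  # fail-silent on disagreement

def voted_read(copies):
    """Triple-redundant memory: return the majority value of three
    copies, masking a single-copy error on every read."""
    assert len(copies) == 3
    return max(set(copies), key=copies.count)

ok = lambda x: x * 2           # healthy CPU
flipped = lambda x: x * 2 + 1  # CPU with a fault
assert self_checking_pair(ok, ok, 21) == 42         # agreement: answer emitted
assert self_checking_pair(ok, flipped, 21) is None  # disagreement: module silences itself
assert voted_read([7, 7, 99]) == 7                  # corrupted copy outvoted
```

The fail-silent property is what lets the rest of the system stay simple: downstream consumers never have to decide whether an answer is wrong, only whether one arrived.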
2. 🪟 We Built Every Employee at Ramp Their Own AI Coworker
Ramp hit 99% adoption of AI tools across the company, and the secret was not better models. It was removing the setup friction. They built “Glass,” an internal AI productivity suite that auto-configures upon SSO login and connects to all company tools. No terminal windows, no MCP configurations, no onboarding docs. They also built “Dojo,” a marketplace for reusable markdown-based skills with over 350 skills shared company-wide, Git-backed, versioned, and reviewed like code. Dojo includes a built-in AI guide called “Sensei” to help people find the right skills. The key insight: Glass preserved power-user capabilities (multi-window workflows, deep integrations) rather than dumbing down the interface. Every skill shared raises the floor for everyone. (Read More)
🫖 TEA For Thought: “This is absolutely crucial: make it invisible while preserving full capability. This seems to be the thing agentic tools are trying to solve: making everything visible, breaking open the black box of AI agents so it can be seen and programmed by humans. This is another example of the world model that Jack Dorsey proposed. Exciting stuff.”
3. ❤️ New AI Tool Can Predict Heart Failure at Least Five Years Before It Develops
Researchers at the University of Oxford, led by Professor Charalambos Antoniades, have built the first AI tool that can accurately predict heart failure from routine cardiac CT scans. Trained on anonymized data from over 59,000 people across nine NHS Trusts and tested on 13,424 more, the model achieves 86% accuracy in predicting heart failure risk within five years. The highest-risk group is 20 times more likely to develop heart failure than the lowest-risk group. The AI identifies textural changes in the fat around the heart that indicate the heart muscle underneath is inflamed and unhealthy; these changes are invisible to the human eye in any routine medical imaging. About 350,000 patients are referred for cardiac CT scans each year in the UK, and over 1 million people currently live with heart failure. The team is seeking NHS regulatory approval for nationwide rollout and upgrading the tool to work on any chest CT scan, not just cardiac ones. Published in the Journal of the American College of Cardiology. (Read More)
🫖 TEA For Thought: “Isn’t this good and hopeful?”
4. 🧠 Sam Altman: Once You See AGI You Can’t Unsee It
After someone threw a Molotov cocktail at his house at 3:45 AM, Sam Altman published a personal reflection on AI, OpenAI, and himself. He calls AI “the most powerful tool for expanding human capability and potential that anyone has ever seen,” with “essentially uncapped” demand. He says fear and anxiety about AI are “justified” and that society needs resilience beyond just model alignment. He argues AI must be democratized: “It is not right that a few AI labs would make the most consequential decisions about the shape of our future.” On OpenAI’s drama, he attributes it to the “ring of power” dynamic and acknowledges that being conflict-averse “caused great pain for me and OpenAI.” He is proud of resisting demands for “unilateral control” and of delivering on the mission: “A lot of companies say they are going to change the world; we actually did.” (Read More)
🫖 TEA For Thought: “Once you see AGI you can’t unsee it.”
5. 🔬 AI Coding Agents Get Much Better When They Read Before They Code
SkyPilot’s team added a literature search phase to their autoresearch agent loop, pointed it at llama.cpp, and ran it on 4 cloud VMs over 3 hours for about $29 total. The result: 5 optimizations that made flash attention text generation 15% faster on x86 and 5% faster on ARM. The first wave of code-only attempts (SIMD micro-optimizations, loop unrolling, prefetching) yielded 0 to 0.9% gains. The agent’s own postmortem: “Wave 1 results show that micro-optimizations in the compute path give negligible returns because text generation is memory-bandwidth bound, not compute bound.” Only after studying prior work (ik_llama.cpp, llamafile, FlashAttention paper, Intel’s cache-aware thread partitioning) did the agent find real wins like fusing three softmax passes into one and porting a CUDA/Metal graph fusion that was missing from the CPU backend. (Read More)
🫖 TEA For Thought: “AI coding agents get much better results when they have domain knowledge and background research. Ask your agents to do research first before doing anything else.”
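The "fusing three softmax passes into one" win is a classic example of why memory-bandwidth-bound code rewards algorithmic reading over micro-optimization. Here is a hedged sketch (my own toy illustration, not SkyPilot's or llama.cpp's code) contrasting a textbook three-pass softmax with a single-pass "online" variant that maintains a running max and a rescaled running sum, so the input is traversed once instead of three times:

```python
import math

def softmax_three_pass(xs):
    m = max(xs)                           # pass 1: find max (numerical stability)
    exps = [math.exp(x - m) for x in xs]  # pass 2: exponentiate
    s = sum(exps)
    return [e / s for e in exps]          # pass 3: normalize

def softmax_fused(xs):
    # Online softmax: one pass over the input, keeping a running max m
    # and a running sum s of exp(x - m); when a new max appears, the
    # old sum is rescaled instead of being recomputed.
    m, s = float("-inf"), 0.0
    for x in xs:
        if x > m:
            s = s * math.exp(m - x)  # rescale old sum to the new max
            m = x
        s += math.exp(x - m)
    return [math.exp(x - m) / s for x in xs]  # final output pass

xs = [1.0, 2.0, 3.0]
a, b = softmax_three_pass(xs), softmax_fused(xs)
assert all(abs(p - q) < 1e-12 for p, q in zip(a, b))
```

The arithmetic here is slightly more work per element, which is exactly the trade the agent's postmortem identified: on a memory-bandwidth-bound path, extra FLOPs are nearly free while extra passes over memory are not.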
🛠️ Tools of the Day
multica-ai/multica — Open-source managed agents platform that turns coding agents into real teammates with task assignment, progress tracking, and skill compounding. Supports Claude Code, Codex, OpenClaw, and OpenCode with a web dashboard and CLI. Self-hostable via Docker. 9.3k stars this week.
coleam00/Archon — First open-source harness builder for AI coding. Define dev workflows as YAML, get deterministic and repeatable AI-driven development. 17 built-in workflows (issue fixing, feature dev, PR review), isolated git worktrees for parallel runs, fire-and-forget operation. 17k stars.
snarktank/ralph — Autonomous AI agent loop that runs repeatedly until all PRD items are complete. Spawns fresh Claude Code or Amp instances per iteration with quality gates (typecheck + tests) enforced at each loop. Give it a PRD, walk away, come back to completed features. 15.9k stars.
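The loop pattern ralph describes (fresh agent per iteration, quality gates as the accept/reject criterion) is simple enough to sketch. This is a hypothetical toy version with injected stand-ins for the agent and the gates, not the actual ralph implementation:

```python
def agent_loop(prd_items, run_agent, gates_pass, max_iters=50):
    """Repeat: hand the remaining PRD items to a fresh agent instance,
    keep its work only if the quality gates (e.g. typecheck + tests)
    pass, and stop once every item is complete."""
    remaining = list(prd_items)
    for _ in range(max_iters):
        if not remaining:
            return True                   # every PRD item complete
        done = run_agent(remaining)       # fresh instance each iteration
        if gates_pass():
            remaining = [i for i in remaining if i not in done]
        # if the gates fail, this iteration's work is discarded and retried
    return False                          # gave up: items still outstanding

# Toy drivers standing in for a real coding agent and real gates:
finished = agent_loop(
    ["item-1", "item-2"],
    run_agent=lambda items: {items[0]},   # completes one item per run
    gates_pass=lambda: True,
)
assert finished is True
```

The key design choice is statelessness: because each iteration spawns a fresh instance, a bad run can be thrown away wholesale, and the gates are the only arbiter of progress.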
TEAHEE Moment
Stay sharp, stay informed. See you tomorrow!
If you enjoyed this TEA, follow along on social for more:
Twitter/X