Daily TEA – The 12-Month Window
exit windows, hardware wars, agent search, China chips, shared memory
Hello, dear TEA-mates! Here is what you need to know today.
1. ⏳ The 12-Month Window: When AI Founders Should Sell
On a recent No Priors episode, AI investor Elad Gil told co-host Sarah Guo that most companies have roughly a 12-month peak window before value crashes, and the firms that capture generational returns are the ones that catch that moment instead of assuming good times will keep compounding. He cited Lotus, AOL, and Mark Cuban’s Broadcast.com as classic cases of pulling the ripcord at the top. Gil’s practical tip for founders: pre-schedule a board meeting once or twice a year dedicated specifically to discussing exits, which drains emotion out of the decision. He framed this as especially urgent for AI startups, many of which exist only because foundation models have not yet expanded into their category, a gap that Deel CEO Alex Bouaziz and others have openly acknowledged will not last. Gil’s framing question: “Are these next six months when I’m going to be the most valuable I’ll ever be?” (Read More)
🫖 TEA For Thought: “The gap is only getting wider.”
2. ⌚ Fitbit Air: Google Rebrands Health Stack Around the “G”
9to5Google reports that Google’s upcoming screen-less health band, teased at the end of March and already worn by Stephen Curry, will launch as the “Google Fitbit Air.” The “Air” name nods to the wearable’s thinner, all-day-wear design, echoing the 2019 Aria Air scale. More significantly, the Fitbit Premium subscription is being rebranded as “Google Health,” and the personal health coach currently in public preview will be called “Google Health Coach.” Through the testing period, the software has been branded only as “Coach,” with no Fitbit tag. The move pulls wellness features into the core Google brand rather than Fitbit, and it explains why Curry’s teaser ended on a gradient “G” with no Fitbit logo. An official announcement is expected in the coming weeks. (Read More)
🫖 TEA For Thought: “Hardware is the hard problem for big tech. The competition will only get more severe.”
3. 🔎 Exa’s Canon: Composing a Search Engine as a DAG
Exa published a technical post on Canon, the search pipeline orchestrator it built to run billions of search requests for AI agents. The team defines the entire search pipeline as a DAG of typed nodes (retrieve, classify, localize, rank), so the executor can fire each node concurrently the moment its inputs are ready. A single query runs classification, localization, retrieval, and ranking across multiple indexes in parallel, completing in roughly 250ms, with every node observable for debugging. Exa argues this composable approach is essential now that most pipeline code is written by agents: agents produce locally correct code but struggle with global constraints, so the DAG contract forces each node to be something that can be reasoned about independently. The post details how a simple query has grown into a graph of 20-plus node types serving thousands of AI agents, with per-customer branches. (Read More)
🫖 TEA For Thought: “Isn’t this just looking more and more like blockchain? The convergence is happening.”
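Exa hasn't published Canon's source, but the pattern it describes, a DAG executor that fires each typed node the moment its inputs resolve, can be sketched in a few lines of asyncio. All node names and logic below are hypothetical illustrations, not Exa's actual code:

```python
import asyncio

# Toy search DAG: classify and localize have no mutual dependency,
# so they run concurrently; retrieve waits only on classify, and
# rank waits on both retrieve and localize.

async def classify(query):          # label the query's intent
    return {"intent": "navigational" if "github" in query else "informational"}

async def localize(query):          # guess the user's locale
    return {"locale": "en-US"}

async def retrieve(query, cls):     # pull candidates, conditioned on intent
    return [f"doc-{i}-{cls['intent']}" for i in range(3)]

async def rank(docs, loc):          # order candidates for the locale
    return sorted(docs)

async def run_pipeline(query):
    cls_task = asyncio.create_task(classify(query))   # fire immediately
    loc_task = asyncio.create_task(localize(query))   # fire immediately
    cls = await cls_task              # retrieve needs classification only
    docs = await retrieve(query, cls)
    loc = await loc_task              # rank needs localization + docs
    return await rank(docs, loc)

results = asyncio.run(run_pipeline("github exa canon"))
print(results)
```

The point of the contract is visible even at toy scale: each node is a pure function of its typed inputs, so an agent writing or rewriting one node can't break the scheduling logic.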
4. ☠️ Nvidia’s China Share Falls Below 60% as Domestic Chips Surge
IDC data reported by Reuters shows Chinese semiconductor firms captured 41% of the domestic AI server market in 2025, delivering 1.65 million AI GPUs out of roughly 4 million total units. Nvidia still leads at 55% (about 2.2 million cards), a sharp contraction from its pre-sanctions 95% share. Huawei is the biggest domestic winner with 812,000 chips shipped (nearly 20% of the market) and just launched its Atlas 350 accelerator, which it claims delivers roughly three times the performance of Nvidia’s H20. Alibaba’s T-Head placed third at 256,000 units, AMD shipped 160,000 (4%), and Baidu’s Kunlunxin and Cambricon rounded out the top five at 116,000 each. Trump reversed the H20 and MI308 ban in July 2025 and allowed H200 shipments to China in December 2025, but Beijing is actively pushing data centers toward domestic chips to close the five-to-ten-year gap in AI data center silicon. (Read More)
🫖 TEA For Thought: “China is really speeding up the full stack of AI: chips, infra, cloud, models, and apps. The gap is narrower than ever.”
5. 🧠 Memory Transfer Learning: Cross-Domain Memory for Coding Agents
A new arXiv paper by Kangsan Kim, Minki Kang, Taeil Kim, Yanlai Yang, Mengye Ren, and Sung Ju Hwang (submitted April 15, 2026) introduces Memory Transfer Learning (MTL), a framework for coding agents that pools memory across heterogeneous task domains instead of restricting it to a single domain silo. The authors evaluate four memory representations (from concrete execution traces to abstract insights) across six coding benchmarks and find that cross-domain memory improves average performance by 3.7%, primarily by transferring meta-knowledge such as validation routines rather than task-specific code. A key finding: abstraction dictates transferability. High-level insights generalize well, while low-level traces often cause negative transfer due to excessive specificity. Transfer effectiveness also scales with memory pool size, and memories transfer even across different underlying models. Project page at memorytransfer.github.io. (Read More)
🫖 TEA For Thought: “Shared memory across all the agents in an org, consolidation, even ‘dreaming’: that’s what everyone’s working on.”
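The paper's interfaces aren't reproduced here, but its central finding, that abstract insights transfer across domains while concrete traces should stay domain-local, can be sketched as a memory pool with an abstraction gate. Every class and field name below is a hypothetical illustration:

```python
from dataclasses import dataclass

# Toy sketch of cross-domain memory pooling, loosely inspired by MTL's
# reported finding; not the paper's actual implementation.

@dataclass
class MemoryEntry:
    domain: str        # task domain the memory came from
    content: str
    abstract: bool     # True = high-level insight, False = raw execution trace

class MemoryPool:
    def __init__(self):
        self.entries = []

    def add(self, entry):
        self.entries.append(entry)

    def recall(self, domain):
        # Same-domain memories are always usable; cross-domain memories
        # pass the gate only if abstract, since concrete traces are the
        # ones reported to cause negative transfer.
        return [e for e in self.entries
                if e.domain == domain or e.abstract]

pool = MemoryPool()
pool.add(MemoryEntry("web-dev", "Run the full test suite before declaring success", True))
pool.add(MemoryEntry("web-dev", "patched src/app.py with a one-line null guard", False))
pool.add(MemoryEntry("data-eng", "Validate schema assumptions against a sample row", True))

usable = pool.recall("data-eng")
print([e.content for e in usable])
```

Under this gate, a data-engineering agent inherits the web-dev agent's validation habit but never its domain-specific patch, which mirrors the meta-knowledge-over-code result the authors report.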
🛠️ Tools of the Day
lsdefine/GenericAgent — Self-evolving agent that achieves system-level control with reduced token usage. ~3.5K stars this week, GitHub trending.
TEAHEE Moment
Stay sharp, stay informed. See you tomorrow.
If you enjoyed this TEA, follow along on social for more:
Twitter/X