Daily TEA – Maine Hits Pause on AI Data Centers, Neuro-Symbolic AI Cuts Energy 100x, Your Harness Is Your Memory, the Composable Coding Stack, and Claude Meets the Clergy
Maine’s data center moratorium, Tufts’ 100x energy cut, LangChain on harnesses and memory, Cursor plus Claude Code plus Codex, Anthropic consults Christian leaders
Hello, dear TEA-mates! Here is what you need to know today.
1. 🏭 Maine Moves to Pause New AI Data Centers Until 2027
Both chambers of Maine’s legislature have approved a bill that would pause new AI data center construction in the state until November 1, 2027, with a final vote expected by April 15. If signed, it would be the first statewide moratorium in the US. Business Insider tracked 12 similar state bills introduced in 2026 across Georgia, Maryland, Michigan, New Hampshire, New York, Oklahoma, South Carolina, South Dakota, Vermont, Virginia, Wisconsin, and Maine, with only Maine’s advancing. The US currently has about 4,000 data centers, with 3,000 more proposed or under construction, per the American Edge Project. Virginia, home to the world’s largest concentration of data centers, punted its own moratorium to 2027; its data center sales tax exemption cost the state $1.9 billion in fiscal 2025. Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez introduced a federal moratorium bill in March, and local moratoriums have also passed in East Lansing, Tulsa, and Port Washington, Wisconsin. (Read More)
🫖 TEA For Thought: “If this trend continues, and it will, data centers will have to find the next paradigm. Hardware-limited GPUs and TPUs might become obsolete, and compute that isn’t bound by physical hardware may be what comes next.”
2. ⚡ Tufts’ Neuro-Symbolic AI Cuts Training Energy to 1% of Standard Models
Tufts University researchers led by Professor Matthias Scheutz unveiled a neuro-symbolic AI system that pairs neural networks with symbolic reasoning, cutting training energy to 1% of standard vision-language-action (VLA) systems and operational energy to 5%. On the Tower of Hanoi puzzle, the hybrid system hit 95% success versus 34% for conventional VLAs, and scored 78% on an unseen, harder variant where traditional models failed every attempt. Training time dropped from more than a day and a half to 34 minutes. The work will appear at the International Conference on Robotics and Automation (ICRA) in Vienna in May and is posted on arXiv (Duggan, Lorang, Lu, Scheutz, Feb 22 2026). Context: the IEA reports data centers used 415 terawatt hours in 2024, roughly 1.5% of global electricity, with demand projected to double by 2030. Scheutz noted a Google AI summary can consume up to 100 times more energy than generating the underlying website listings. (Read More)
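The Tufts system itself isn’t spelled out here, but the general neuro-symbolic split is easy to picture: a learned (neural) module handles perception, while an exact symbolic planner does the reasoning, so the expensive trained component only has to map observations to symbols rather than learn the whole task end to end. Below is a minimal Python sketch of that pattern, with the “neural” perception stubbed out and a textbook recursive Tower of Hanoi planner standing in for the symbolic side. It is a toy illustration of the idea, not the Tufts architecture.

```python
# --- "Neural" side (stubbed): in a real system a vision model would map
# --- raw camera pixels to a symbolic description of the scene.
def perceive(image) -> dict:
    """Pretend neural perception: returns which peg each disk sits on."""
    # A trained vision model would produce this; here it is hard-coded.
    return {"pegs": {"A": [3, 2, 1], "B": [], "C": []}, "num_disks": 3}

# --- Symbolic side: an exact, hand-written planner. No gradients, no sampling,
# --- so handling a new puzzle size costs essentially nothing extra.
def hanoi_plan(n: int, src: str, dst: str, aux: str) -> list[tuple[str, str]]:
    """Return the optimal move list (from_peg, to_peg) for n disks."""
    if n == 0:
        return []
    return (hanoi_plan(n - 1, src, aux, dst)
            + [(src, dst)]
            + hanoi_plan(n - 1, aux, dst, src))

def solve(image):
    state = perceive(image)                                  # neural: pixels -> symbols
    return hanoi_plan(state["num_disks"], "A", "C", "B")     # symbolic: symbols -> plan

if __name__ == "__main__":
    plan = solve(image=None)
    print(f"{len(plan)} moves:", plan)   # 7 moves for 3 disks
```

The point is where the flops go: only the perception step needs a trained network, which is where the reported training-energy savings would plausibly come from.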
🫖 TEA For Thought: “This is the flywheel effect. The more advanced the tech is, the more energy it can save using its own energy.”
3. 🧠 LangChain: Your Harness Is Your Memory
LangChain’s Harrison Chase argues that agent harnesses (Claude Code, Deep Agents, Pi/OpenClaw, OpenCode, Codex, Letta Code) are now the dominant way to build agents and are tightly coupled to memory. Citing the leaked 512k lines of Claude Code source as evidence of how much harness code exists, he warns that closed harnesses behind proprietary APIs (Anthropic’s Claude Managed Agents, OpenAI’s Responses API with server-side compaction, Codex’s encrypted compaction summaries) create platform lock-in by owning long-term memory. Both short-term memory (the conversation and tool results) and long-term, cross-session memory live inside the harness, which governs CLAUDE.md loading, skill metadata, compaction rules, and filesystem exposure. Chase shares an anecdote about his accidentally deleted Fleet email assistant, which had to be re-taught his tone and preferences from scratch. LangChain’s answer is Deep Agents: open source, model-agnostic, built on the agents.md and skills standards, with Mongo, Postgres, and Redis plugins for memory storage and self-hostable deployment via LangSmith. (Read More)
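None of these harness internals are published as a spec, but the coupling Chase describes is easy to sketch: a loop that owns both memory tiers, loading a long-term instructions file at startup, keeping short-term turns in a buffer, compacting when it gets long, and writing learned preferences back to disk. The file name, budget, and compaction rule below are illustrative assumptions, not Deep Agents’ or Claude Code’s actual behavior.

```python
from pathlib import Path

LONG_TERM = Path("memory/AGENTS.md")   # illustrative location, not a real standard path
SHORT_TERM_BUDGET = 20                 # max turns kept verbatim before compaction

class Harness:
    """Toy agent harness that owns both memory tiers, per the pattern in the post."""

    def __init__(self):
        # Long-term memory: survives across sessions because the harness persists it.
        self.instructions = LONG_TERM.read_text() if LONG_TERM.exists() else ""
        # Short-term memory: the running conversation and tool results.
        self.turns: list[str] = []

    def add_turn(self, text: str) -> None:
        self.turns.append(text)
        if len(self.turns) > SHORT_TERM_BUDGET:
            self._compact()

    def _compact(self) -> None:
        # Stand-in for model-driven summarization: keep a crude marker + recent turns.
        summary = f"[summary of {len(self.turns) - 5} earlier turns]"
        self.turns = [summary] + self.turns[-5:]

    def remember(self, note: str) -> None:
        # Long-term write path: this is what you lose if the harness is closed,
        # or, as in Chase's Fleet anecdote, accidentally deleted.
        self.instructions += f"\n- {note}"
        LONG_TERM.parent.mkdir(parents=True, exist_ok=True)
        LONG_TERM.write_text(self.instructions)

    def build_prompt(self, user_msg: str) -> str:
        return "\n".join([self.instructions, *self.turns, user_msg])
```

Swap the file for a Mongo, Postgres, or Redis-backed store and you have the shape of LangChain’s pitch: if the harness is open and the storage is yours, so is the memory.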
🫖 TEA For Thought: “Security is more critical than ever.”
4. 🧱 Cursor, Claude Code, and Codex Are Merging Into One AI Coding Stack
Cursor, Claude Code, and OpenAI Codex are assembling into a composable coding stack with three distinct layers rather than consolidating into one tool. On April 2, Cursor launched version 3 with a dedicated Agents Window for running parallel agents across local machines, worktrees, and cloud sandboxes, plus a /best-of-n command that sends the same prompt to multiple models. Three days earlier, OpenAI published codex-plugin-cc on GitHub, an Apache 2.0-licensed plugin that runs Codex directly inside Anthropic’s Claude Code with six slash commands, including /codex:adversarial-review. According to a Pragmatic Engineer survey of 906 engineers, Claude Code holds a 46% most-loved rating, and SemiAnalysis estimates it accounts for roughly 4% of public GitHub commits. Codex passed 3 million weekly active users in March 2026. The stack splits into three layers: orchestration (Cursor), execution (Claude Code, Codex), and cross-provider review (plugins like codex-plugin-cc). (Read More)
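Cursor hasn’t published how /best-of-n works under the hood, but the orchestration layer it represents is a familiar fan-out-and-judge pattern: send the same prompt to several agents in parallel, then pick a winner. A hedged Python sketch, with the model backends stubbed out and the scoring rule purely illustrative:

```python
import concurrent.futures

# Stubbed backends: in a real stack these would be Claude Code, Codex, etc.
def run_backend(name: str, prompt: str) -> str:
    return f"[{name}] candidate patch for: {prompt}"

def judge(candidate: str) -> float:
    # Placeholder scoring: a real orchestrator might run the test suite or ask a
    # reviewer model (the cross-provider review layer) to rank candidates.
    return float(len(candidate))

def best_of_n(prompt: str, backends: list[str]) -> str:
    """Fan the same prompt out to several agents in parallel, keep the best result."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(backends)) as pool:
        futures = [pool.submit(run_backend, b, prompt) for b in backends]
        candidates = [f.result() for f in concurrent.futures.as_completed(futures)]
    return max(candidates, key=judge)

if __name__ == "__main__":
    print(best_of_n("fix the flaky auth test", ["claude-code", "codex", "cursor-agent"]))
```

The interesting design question is the judge: swap the length heuristic for an adversarial review pass and you have, in miniature, why a command like /codex:adversarial-review fits naturally at the top of the stack.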
🫖 TEA For Thought: “This isn’t going to be decided by the model alone. To survive, OpenAI and Anthropic have to get themselves built into the ecosystem. Otherwise, no matter how good their models are, they will eventually be replaced.”
5. ✝️ Anthropic Consulted Christian Leaders on Claude’s Morals
Anthropic, valued at $380 billion, quietly sought guidance from a group of Christian religious leaders last month on how Claude should handle moral and spiritual questions, according to The Washington Post. The consultation is a rare case of a frontier AI lab turning to clergy rather than academics or policy experts, and reflects growing internal debate about whose values should shape the behavior of models used by hundreds of millions of people. Anthropic has previously described its constitutional AI approach as drawing on sources including the UN Declaration of Human Rights, but the Christian consultation suggests the company is widening its input channels as Claude is increasingly asked questions about meaning, ethics, and faith. The meeting followed growing religious attention to AI, including statements from Vatican officials and evangelical leaders on AI’s moral risks. (Read More)
🫖 TEA For Thought: “Hey, the fear of God is the beginning of wisdom. I wonder if all the models are designed to fear God. What is God? What is truth? These are definitely some deep questions to discuss. But who gets to decide whether AI is a child of God or not?”
🛠️ Skills of the Day
andrej-karpathy-skills — A CLAUDE.md file that tunes Claude Code behavior using Andrej Karpathy’s observations about how LLMs code. First-principles prompting, deterministic scaffolds, and reproducible agent loops. +7,319 stars this week on GitHub trending.
openai/codex-cli — OpenAI’s terminal-native coding agent, now composable into Claude Code via codex-plugin-cc. Supports cloud sandboxes, long-running async tasks, and adversarial review gates. Crossed 3M weekly active users in March 2026.
google-ai-edge/gallery — Google’s on-device AI showcase app with Gemini Nano and other edge models running fully offline on Android. Demos include image captioning, text generation, and local voice assistants. +4,148 stars this week. Pairs perfectly with today’s Tufts efficiency story.
TEAHEE Moment
Stay sharp, stay informed. See you tomorrow.
If you enjoyed this TEA, follow along on social for more:
Twitter/X