Daily TEA – Gemini Goes Air-Gapped, Noscroll Wants $10
private AI boxes, minimal editing, UK labour data, search that still matters
Hello, dear TEA-mates! Here is what you need to know today.
1. 📰 Noscroll Is an AI Bot That Doomscrolls For You
TechCrunch profiled Noscroll, a newly launched startup from former OpenSea CTO Nadav Hollander that sells an AI agent trained to browse X, Reddit, Hacker News, Substack, news sites, and research papers, then text users a digest via SMS at (415) 718-4828. Users authenticate their X account, tell the bot what topics they care about in natural language, and receive summaries at their preferred cadence (weekly for casual users, several per day for news junkies). The bot uses off-the-shelf models on proprietary infrastructure, can be added to group chats and Telegram, and pushes breaking news alerts. Pricing is $9.99 per month after a 7-day free trial, with variable pricing under consideration. Hollander built it with an open-source developer known as @z0age, and reports early users tracking niche anime news, Kyoto restaurant openings, layoff data, and local politics. The company has already drawn investor interest. (Read More)
🫖 TEA For Thought: “This is pretty much a new scraper. The backend is very simple, and I’m sure anyone who has played with OpenClaw already has this set up. I just wonder how many people would actually spend $10 a month on this when they could set it up for free on their own computer. Interesting product, but easy to replicate.”
2. 🔒 Gemini Can Now Run On A Single Air-Gapped Server
At Google Cloud Next 2026 in Las Vegas, Cirrascale Cloud Services announced it will deliver Google’s full Gemini model on-premises through Google Distributed Cloud, making it the first neocloud provider to offer the model as a fully private, disconnected appliance. The product packages Gemini into a Dell-manufactured, Google-certified box with eight Nvidia GPUs (Cirrascale built the world’s first 8-GPU server in 2012) wrapped in confidential computing protections. Gemini resides entirely in volatile memory: pulling power wipes the model, and tampering triggers a self-destruct that forces the hardware back to Cirrascale, Dell, or Google. CEO Dave Driggers stressed, “It is full blown Gemini. It’s not pulled. Nothing’s missing from it.” Preview is live now, with general availability in June or July. Pricing options include seat-based, per-token, flat all-you-can-eat per appliance, or hardware purchase outright. Targets include financial services, drug discovery, medical data, public sector, and countries without GCP presence. Industry projections cited in the piece estimate 40% of AI training and inference will move outside public cloud by 2027. (Read More)
🫖 TEA For Thought: “More like this will happen. In the end, what people care about most is privacy and security, and as long as data travels through the cloud, however it’s handled, it isn’t truly safe. Hardware is still the hard blocker.”
3. 🔍 Perplexity On Why Search Still Matters In The AI Era
A Perplexity research article outlines the company’s two-stage post-training pipeline for building high-performance search-augmented language models, disentangling deployment constraints from search optimization. The process begins with Supervised Fine-Tuning (SFT) to establish foundational behaviors like guardrails and formatting, followed by Reinforcement Learning (RL) using Group Relative Policy Optimization (GRPO) to enhance factual accuracy and tool-use efficiency. By combining verifiable search-agent QA data with rubric-based general chat data, and by implementing gated reward aggregation with anchored efficiency penalties, Perplexity improved model performance—achieving higher accuracy and better alignment with user preferences—while significantly reducing operational costs and unnecessary tool usage compared to larger models like GPT-5.4. (Read More)
🫖 TEA For Thought: “Who would have thought that in the age of AI, search is still needed? It’s just done in a different way. Instead of us indexing and searching, it’s all powered by AI now, but in search, accuracy is still very, very important. Perplexity wins not only on the model but also on the harness it builds on top of all the models. It’s pretty brilliant.”
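The GRPO step described in the article skips a learned critic and instead scores each sampled answer relative to its own group of rollouts. A minimal sketch of that idea, plus an illustrative "gated" reward in the spirit of the paper (the function names, gating rule, and penalty shape here are my assumptions, not Perplexity's actual code):

```python
from statistics import mean, pstdev

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: normalize each rollout's reward
    against the mean/std of its own sample group (no value network)."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

def gated_reward(correct, format_ok, efficiency_penalty):
    """Illustrative gated aggregation: secondary rewards only count
    when the answer is verifiably correct, and an efficiency penalty
    discourages unnecessary tool calls."""
    if not correct:
        return 0.0
    base = 1.0 if format_ok else 0.5
    return base - efficiency_penalty
```

The gating is the key design choice: a well-formatted but wrong answer earns nothing, so the model cannot trade correctness for style points.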
4. ✂️ The Over-Editing Problem In AI Coding Tools
Researcher nrehiew benchmarked nine frontier models on a minimal-editing task using 400 programmatically corrupted problems from BigCodeBench. Models were asked to fix single-token bugs (off-by-one errors, flipped operators, boolean swaps) and were scored on Pass@1, token-level Levenshtein distance, and added Cognitive Complexity. GPT-5.4 over-edited the most (Levenshtein 0.395 in reasoning mode, 2.31 added complexity) with only 0.723 Pass@1. Claude Opus 4.6 produced the smallest diffs and the highest Pass@1 (0.912 in reasoning mode, Levenshtein 0.060). Reasoning models over-edited more than their non-reasoning counterparts by default, but responded better to explicit prompts to preserve the original code. The author then trained Qwen3 4B variants with SFT, rSFT, DPO, and RL: SFT scored perfectly in-domain but collapsed out-of-domain (Pass@1 dropped to 0.458, LiveCodeBench down 14.9%). RL was the only method that generalized cleanly, improving all three metrics with no catastrophic forgetting. LoRA at rank 64 nearly matched full RL, and the recipe scaled to Qwen3 14B (Pass@1 0.833, Levenshtein 0.059). (Read More)
🫖 TEA For Thought: “This is so interesting. When you don’t know what you don’t know and only AI does, I guess that’s the problem. How do we make sure we only make minimal changes, which Opus 4.6 does very well? I guess that’s why they’re so good at coding and it’s engineers’ favorite. There is a reason for it. Over-engineering is definitely a problem, not only wasting tokens but also making it harder for humans to review. Humans have to stay in the loop. However, I wonder how long this can keep going, because when everybody is shipping so fast, what time is even left for humans to review the code? We don’t even have time to review all the plans in detail sometimes.”
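The benchmark's central metric, token-level Levenshtein distance, is easy to reproduce in spirit. A minimal sketch (whitespace tokenization and length normalization are my assumptions, not necessarily nrehiew's exact harness):

```python
def token_levenshtein(a_tokens, b_tokens):
    """Classic dynamic-programming edit distance over token sequences,
    using a single rolling row of the DP table."""
    m, n = len(a_tokens), len(b_tokens)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,      # deletion
                        dp[j - 1] + 1,  # insertion
                        prev + (a_tokens[i - 1] != b_tokens[j - 1]))  # substitution
            prev = cur
    return dp[n]

def normalized_edit(original, fixed):
    """Distance divided by original length: 0.0 means untouched;
    a clean single-token bug fix should score roughly 1/len."""
    a, b = original.split(), fixed.split()
    return token_levenshtein(a, b) / max(len(a), 1)
```

Under a metric like this, an over-editing model that rewrites a whole function to fix a flipped operator scores far worse than one that changes the single offending token, even if both pass the tests.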
5. 🇬🇧 UK Labour Market: No Evidence AI Has Replaced Jobs At Scale
A British Progress report analyzed Annual Population Survey data across 412 UK occupations and found no difference in employment trends between occupations most and least exposed to AI. Office for National Statistics payroll data shows wages in high-exposure occupations have grown more slowly since 2019, but the trend predates ChatGPT and cannot be cleanly attributed to AI. Hours worked in AI-exposed occupations have risen modestly relative to unexposed ones, consistent with augmentation raising demand. Inside high-exposure occupations, programmer and finance analyst roles have continued to grow while administrative and clerical roles have contracted, indicating the same AI exposure can produce augmentation or displacement depending on task structure. Adoption data shows usage is concentrated in a small set of tasks: roughly one-fifth of tasks account for the vast majority of usage. The authors stress these findings do not rule out larger future effects, but plausible near-term predictions should be constrained by current adoption patterns, not projected capabilities. (Read More)
🫖 TEA For Thought: “Very interesting report. If you just look around and talk to folks on the street, you will see how big the gap is between those working in AI and those who aren’t. But is this AI revolution different from electricity? I think it is. AI is rewriting all of the games we play. It’s a knowledge revolution, a productivity revolution, a change in how humans work. Human intelligence itself is being redefined by AI, whereas electricity only changed how people worked. I would still say there is a difference, even though both are paradigm shifts.”
🛠️ Skill of the Day
openai/privacy-filter — OpenAI’s new bidirectional token-classification model for PII detection and masking. 1.5B parameters (50M active via mixture-of-experts), 128K context, Apache 2.0 license, runs in a browser or laptop. Detects 8 span categories (account numbers, addresses, emails, names, phones, URLs, dates, secrets) in a single forward pass with constrained Viterbi decoding. Fine-tunable via opf train, supports CPU or GPU, pipe-friendly CLI (opf redact), and ships with model weights on Hugging Face.
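Once a model like this emits character-offset spans, the masking step itself is simple. A minimal sketch of span redaction (the `(start, end, category)` span format and `[CATEGORY]` placeholder style are my assumptions for illustration, not the actual `opf` output format):

```python
def redact(text, spans):
    """Replace detected PII spans with [CATEGORY] placeholders.
    `spans` is a list of (start, end, category) character offsets,
    assumed non-overlapping. Applying them right-to-left keeps the
    earlier offsets valid as the string changes length."""
    for start, end, category in sorted(spans, reverse=True):
        text = text[:start] + f"[{category.upper()}]" + text[end:]
    return text
```

Example: `redact("Email bob@example.com now", [(6, 21, "email")])` yields `"Email [EMAIL] now"`. The single-forward-pass detection plus constrained Viterbi decoding is what guarantees the spans are well-formed (non-overlapping, category-consistent) before a step like this runs.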
TEAHEE Moment
Stay sharp, stay informed. See you Sunday!
If you enjoyed this TEA, follow along on social for more:
Twitter/X







