Daily TEA – Agent Templates, Vine Returns, and Talking to 1930
ElevenLabs ships agent kits, Dorsey reboots Vine, and a 13B model from before WWII
Hello, dear TEA-mates! Here is what you need to know today.
1. 🎧 ElevenLabs Ships Agent Templates
ElevenLabs introduced Agent Templates on its ElevenAgents platform, releasing pre-built conversational agent frameworks that target customer support, onboarding, sales, feedback collection, and front desk operations. Each template ships with predefined system prompts, workflows, and integration scaffolding so teams can launch agents without writing from scratch. The release is available to all ElevenLabs users through the ElevenAgents dashboard and includes adaptable conversation flows plus business-tool integrations and pre-built logic for tasks like lead qualification and appointment scheduling. ElevenLabs positions the templates as a way to compress setup time and reduce the trial-and-error phase typically required for production-ready agents, with early enterprise feedback emphasizing flexibility and faster ramp-up. (Read More)
🫖 TEA For Thought: “Agent templates as a service might be something that vertical models do.”
2. 📱 Jack Dorsey-Backed Vine Reboot Divine Launches
Divine, a Vine reboot funded by Jack Dorsey’s nonprofit “and Other Stuff,” launched publicly on the App Store and Google Play with roughly 500,000 restored Vine videos from nearly 100,000 original creators. The project was led by Evan Henshaw-Plath (Rabble), an early Twitter employee, who reconstructed the archive from 40–50 GB of binary backups preserved by the Archive Team. Divine is built on the Nostr open social protocol and is experimenting with the AT Protocol and ActivityPub. To block AI-generated content, the app requires users to either record videos directly in the app or verify uploads via the C2PA content provenance standard. The app has no revenue model, runs as a public benefit corporation, and is rolling out via waitlist invite codes, with early Viners including Lele Pons, JimmyHere, and Jack and Jack already onboard. (Read More)
🫖 TEA For Thought: “A big comeback!”
3. 🧠 A New Type of Neuroplasticity Rewires the Brain
Neuroscientists have described a new form of neuroplasticity called behavioral timescale synaptic plasticity (BTSP), detailed in two recent reviews in The Journal of Neuroscience and Nature Neuroscience. Unlike classic Hebbian plasticity, which strengthens connections when neurons fire within milliseconds of each other, BTSP works across six to eight seconds and can encode a memory after a single experience. Jeffrey Magee’s team at Baylor College of Medicine first observed it in 2014 while recording rodent hippocampal place cells, finding that a single dendritic plateau potential tuned a cell to fire at a specific location 99.5% of the time. Magee named the mechanism in a 2017 Science paper, and it has since faced pushback before gaining traction. Researchers including Attila Losonczy and Anant Jain say BTSP may explain one-shot learning (such as remembering where a predator is) and may help solve the credit assignment problem, with early evidence pointing to eligibility-trace tags and the CaMKII protein as the underlying machinery. (Read More)
🫖 TEA For Thought: “This piece of news should humble all of us who are in the realm of AI. No matter how much we think we know, we are still trying to understand the brain, let alone recreate it. No matter how great AI is, it is artificial intelligence after all. It’s artificial.”
4. 📚 Talkie, a 13B Vintage Language Model from 1930
Nick Levine, David Duvenaud, and Alec Radford introduced talkie-1930-13b, a 13B-parameter language model trained on 260B tokens of pre-1931 English text including books, newspapers, periodicals, scientific journals, patents, and case law. They built a “modern twin” trained on FineWeb (web data) for comparison, and ran HumanEval, finding the vintage model dramatically underperforms on coding but improves with scale. The team uses a document-level n-gram anachronism classifier to filter post-1930 leakage, plus a custom post-training pipeline that fine-tunes on instruction-response pairs generated from etiquette manuals, letter-writing manuals, cookbooks, and encyclopedias, followed by online DPO with Claude Sonnet 4.6 as judge. They are training a GPT-3-level vintage model for release this summer and estimate the corpus can grow past one trillion historical tokens. The project runs a 24/7 live feed of Claude Sonnet 4.6 prompting talkie, with funding and compute from Coefficient Giving and Anthropic. (Read More)
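The team’s actual anachronism classifier is not public, but the idea of document-level n-gram filtering can be sketched in a few lines. Everything below is an illustrative assumption: the blocklist of post-1930 n-grams, the n-gram sizes, and the keep/drop threshold are all hypothetical, not the project’s real configuration.

```python
import re

# Hypothetical blocklist: a tiny sample of n-grams that post-date 1930.
# A real filter would derive a far larger list from dated corpora.
ANACHRONISTIC_NGRAMS = {
    "world war ii", "nuclear weapon", "computer program",
    "internet", "smartphone", "jet engine",
}

def ngrams(tokens, n):
    """Yield space-joined n-grams from a token list."""
    for i in range(len(tokens) - n + 1):
        yield " ".join(tokens[i:i + n])

def anachronism_score(document: str, max_n: int = 3) -> float:
    """Fraction of the document's n-grams (n = 1..max_n) on the blocklist."""
    tokens = re.findall(r"[a-z]+", document.lower())
    hits, total = 0, 0
    for n in range(1, max_n + 1):
        for gram in ngrams(tokens, n):
            total += 1
            if gram in ANACHRONISTIC_NGRAMS:
                hits += 1
    return hits / total if total else 0.0

def keep_document(document: str, threshold: float = 0.001) -> bool:
    """Keep a document in a pre-1931 corpus only if its score stays low."""
    return anachronism_score(document) < threshold

print(keep_document("The steamship arrived in New York harbour."))  # → True
print(keep_document("He watched the internet on his smartphone."))  # → False
```

Scoring at the document level, rather than dropping individual sentences, matches the description above: one clear anachronism is enough evidence that the whole document leaked from after the cutoff.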
🫖 TEA For Thought: “This is so cool. If there’s enough data and books and materials about the people in the past, we can literally recreate them and talk to them. Perspective is a blessing.”
5. 👗 Google Photos Builds the Clueless Closet
Google Photos announced a new AI-powered feature that turns photos of clothes into a digital closet for creating outfit ideas and virtually trying them on, taking direct inspiration from Cher’s wardrobe in Clueless. The feature scans your Google Photos library, identifies tops, bottoms, jewelry, and other categories, and lets you mix and match into outfits that can be shared with friends or saved to a moodboard organized by occasion (travel, events, date nights, work). A virtual try-on layer previews how looks come together. The feature rolls out to Android later this summer and to iOS afterward under “Collections,” and will compete with existing apps like Acloset, Combyne, Pureple, Whering, and Alta. Google did not detail the underlying model but says results improve with well-lit, full-body source photos. (Read More)
🫖 TEA For Thought: “Pretty cool idea. Google is on fire right now, from hardware (chips, phones, computers, TVs) all the way to infra, models, and apps. Unstoppable.”
🛠️ Tools of the Day
AIDC-AI/Pixelle-Video — Fully automated AI short video engine that turns prompts into finished short-form videos end to end. 8.3K stars.
TEAHEE Moment
Stay sharp, stay informed. See you tomorrow!
If you enjoyed this TEA, follow along on social for more:
Twitter/X