Daily TEA: Ethereum Powers Finance, AI Models Evolve
Ethereum, AI Music, Fertility Tech, Open-Source AI
Hello, dear TEA-mates, this is what you need to know today.
1. 🤖 LLMs Show Varied Cooperation in Game Theory Experiments
Three large language models (LLMs)—Llama2, Llama3, and GPT-3.5—were tested in the Iterated Prisoner’s Dilemma to assess their cooperative behavior as social agents. Played against random adversaries with varying hostility over 100 rounds, the models displayed distinct tendencies. Llama2 and GPT-3.5 were more cooperative than typical humans, forgiving opponents whose defection rates stayed below 30%, while Llama3 was less cooperative, exploiting the opponent unless it cooperated unconditionally. The study introduces a methodology to evaluate LLMs’ rule comprehension and decision-making, advancing AI auditing practices. Read More: arXiv
TEA For Thought ☕: Curious why these models behave the way they do. Is it the data, or the algorithm?
2. 💸 Major Companies Leverage Ethereum for Financial Infrastructure
At EthCC in Cannes, Ethereum’s decade of uptime and security was showcased as a foundation for institutional finance. Robinhood tokenized stocks on Arbitrum, Deutsche Bank developed a zkSync-based tokenization platform, and Coinbase and Kraken filed for onchain equities. Companies like BitMine and Bit Digital shifted treasury reserves to ETH, while ether ETFs saw inflows and stablecoin settlements dominated on Ethereum, driving a 6% weekly ETH price increase. Read More: CNBC
TEA For Thought ☕: Ethereum’s decade-long uptime, security, and neutrality are now its core selling points for institutions seeking programmable, reliable finance infrastructure.
3. 🎵 AI Band Velvet Sundown’s Viral Success Sparks Debate
The Velvet Sundown, an AI music project with over 900,000 monthly Spotify listeners, confirmed in a revised bio that their music is AI-generated, guided by human creative direction. Emerging in June, the “band” gained viral fame through Spotify playlists, boosted by paid placements and algorithmic recommendations. A hoaxer, Andrew Frelon, falsely posed as their spokesperson, highlighting media vulnerabilities. The project challenges notions of authorship and creativity in AI-driven music. Read More: Rolling Stone
TEA For Thought ☕: The humans behind the agent might still be the key. After all, AI can create music, but it doesn’t have taste. Creativity, taste, and innovation remain uniquely human capabilities that can’t be taken away.
4. 🩺 AI Tool Boosts Pregnancy Outcomes by Analyzing Sperm
A new AI tool enhances male infertility diagnosis by analyzing sperm motility and morphology with high precision, aiding clinicians in improving pregnancy outcomes. By providing faster and more accurate assessments, the technology supports couples facing fertility challenges, offering hope for better reproductive success rates through targeted treatments. Read More: CNN
TEA For Thought ☕: One of the use cases that makes me super hopeful.
5. 🌐 American DeepSeek Project Targets Open-Source AI Dominance
The American DeepSeek Project, led by Nathan Lambert, aims to build a fully open-source AI model rivaling DeepSeek V3 within two years. With transparent data, code, and logs, it seeks to counter proprietary U.S. models and potentially untrustworthy CCP models, addressing concerns about vulnerabilities and fostering a competitive AI ecosystem to maintain Western leadership. Read More: Interconnects
TEA For Thought ☕: Practically speaking, there will never be proof that CCP models can’t leave vulnerabilities in code or execute tools maliciously, even if that’s unlikely in the near term. And with compute restrictions in place, CCP models could over time anchor a competitive software ecosystem that weakens many of America’s and the West’s strongest companies. Great points. If the U.S. doesn’t act now, the AI future could be dominated by closed U.S. models or less trustworthy CCP ones.
Prompt Tip: Prompt Versioning System
1. Hierarchical Folder Structure
Prompts/
├── Work/
│   ├── Code-Review/
│   ├── Documentation/
│   └── Planning/
├── Personal/
│   ├── Research/
│   ├── Writing/
│   └── Learning/
└── Templates/
    ├── Base-Structures/
    └── Modifiers/
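If you want to spin up this hierarchy in one go rather than clicking folders into existence, the tree above can be sketched as a small standard-library script (folder names taken straight from the tree; the `scaffold` helper name is mine):

```python
# Minimal sketch: create the prompt folder hierarchy from this section.
# Safe to re-run: existing folders are left untouched.
from pathlib import Path

FOLDERS = [
    "Prompts/Work/Code-Review",
    "Prompts/Work/Documentation",
    "Prompts/Work/Planning",
    "Prompts/Personal/Research",
    "Prompts/Personal/Writing",
    "Prompts/Personal/Learning",
    "Prompts/Templates/Base-Structures",
    "Prompts/Templates/Modifiers",
]

def scaffold(root: str = ".") -> None:
    """Create every folder in FOLDERS under root (no-op if it already exists)."""
    for folder in FOLDERS:
        Path(root, folder).mkdir(parents=True, exist_ok=True)
```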
2. Naming Convention That Actually Works
Format: [UseCase]_[Version]_[Date]_[Performance].md
Examples:
CodeReview_v3_12-15-2025_excellent.md
BlogOutline_v1_12-10-2024_needs-work.md
DataAnalysis_v2_12-08-2024_good.md
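A convention like this is only useful if tooling can rely on it, so here's a sketch that validates and splits filenames into the four fields (assuming MM-DD-YYYY dates and no underscores inside the use-case name; the regex and function name are my own):

```python
# Minimal sketch: parse [UseCase]_[Version]_[Date]_[Performance].md filenames.
import re

PATTERN = re.compile(
    r"^(?P<use_case>[^_]+)_(?P<version>v\d+)_"
    r"(?P<date>\d{2}-\d{2}-\d{4})_(?P<performance>[a-z-]+)\.md$"
)

def parse_prompt_filename(name: str) -> dict:
    """Split a prompt filename into its four fields, or raise ValueError."""
    match = PATTERN.match(name)
    if not match:
        raise ValueError(f"does not follow the naming convention: {name}")
    return match.groupdict()
```

Anything that doesn't match the convention raises immediately, which doubles as a lint check when you batch-scan the library.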
3. Template Header for Every Prompt
# [Prompt Title]
**Version:** 3.2
**Created:** 12-15-2025
**Use Case:** Code review assistance
**Performance:** Excellent (95% helpful responses)
**Context:** Works best with Python/JS, struggles with Go
## Prompt:
[actual prompt content]
## Sample Input:
[example of what I feed it]
## Expected Output:
[what I expect back]
## Notes:
- Version 3.1 was too verbose
- Added "be concise" in v3.2
- Next: Test with different code languages
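To avoid hand-copying that header each time, the template above can be stamped out programmatically. A sketch (layout copied from the template in this section; the `new_prompt_file` helper and its default field values are assumptions):

```python
# Minimal sketch: generate the markdown header for a fresh prompt file.
from datetime import date

HEADER = """# {title}
**Version:** {version}
**Created:** {created}
**Use Case:** {use_case}
**Performance:** {performance}
**Context:** {context}

## Prompt:
{prompt}

## Sample Input:
[example of what I feed it]

## Expected Output:
[what I expect back]

## Notes:
-
"""

def new_prompt_file(title: str, use_case: str, prompt: str, version: str = "1.0") -> str:
    """Return the markdown body for a new prompt file, dated today."""
    return HEADER.format(
        title=title,
        version=version,
        created=date.today().strftime("%m-%d-%Y"),
        use_case=use_case,
        performance="Untested",
        context="TBD",
        prompt=prompt,
    )
```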
4. Performance Tracking
I rate each prompt version:
Excellent: 90%+ useful responses
Good: 70-89% useful
Needs Work: <70% useful
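The three bands above reduce to a tiny helper worth keeping next to the library (cutoffs of 90 and 70 come straight from this section; the `rate` function name is mine):

```python
# Minimal sketch: map a useful-response percentage to the rating labels above.
def rate(useful_pct: float) -> str:
    """Return 'excellent' (>=90), 'good' (70-89), or 'needs-work' (<70)."""
    if useful_pct >= 90:
        return "excellent"
    if useful_pct >= 70:
        return "good"
    return "needs-work"
```

The returned labels double as the `[Performance]` slug in the filename convention, so ratings and filenames stay in sync.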
5. The Game Changer: Search Tags
I love me some hash tags! At the bottom of each prompt file: Tags: #code-review #python #concise #technical #work
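Once every file ends with a Tags line, searching the library is a few lines of standard-library Python (assuming a footer line of the exact form `Tags: #code-review #python`; the `find_by_tag` helper is an assumption, not an existing tool):

```python
# Minimal sketch: find prompt files whose "Tags:" line contains a given tag.
from pathlib import Path

def find_by_tag(root: str, tag: str) -> list[Path]:
    """Return every .md file under root whose Tags line mentions #tag."""
    needle = f"#{tag.lstrip('#')}"
    hits = []
    for path in sorted(Path(root).rglob("*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            # Match whole tags only, so "code" won't match "#code-review".
            if line.startswith("Tags:") and needle in line.split():
                hits.append(path)
                break
    return hits
```

For a library of a few hundred files this is instant; no index or database needed.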
Results after 3 months:
Cut prompt creation time by 60% (building on previous versions)
Stopped recreating the same prompts over and over
Can actually find and reuse my best prompts
Built a library of 200+ categorized, tested prompts
TEAHEE Moment
Stay sharp, stay informed. See you tomorrow.
Follow us on Twitter/X: The Era Arc