Café com Dopamina
Automated tech content platform that collects data from GitHub, LinkedIn, and Twitter, then uses AI to generate daily Portuguese-language episode posts for the Brazilian developer community
Role: Creator & Solo Developer
Overview
"What if you could distill the entire week's tech trends into a 5-minute read — every single day, automatically?"
Café com Dopamina is an automated content platform that aggregates data from multiple sources (GitHub trending, LinkedIn engagement, Twitter/X, newsletters), feeds it through an AI-powered pipeline, and publishes daily episode-style blog posts in Brazilian Portuguese.
The platform targets Brazilian developers, tech leads, and cloud engineers who want to stay on top of global tech trends without doom-scrolling multiple feeds.
🎯 Key Objectives
- 📊 Aggregate trending tech data from 5+ sources daily
- 🤖 Use LLMs to generate high-quality, opinionated content in Portuguese
- 🔄 Fully automated pipeline — data collection → content generation → PR → publish
- 🌐 SEO-first Next.js site with server-rendered pages and structured data
- ☕ Authentic voice — like talking to a friend over coffee
🏗️ Technical Architecture
┌─────────────────────────────────────────────────┐
│                 DATA COLLECTION                 │
│                                                 │
│  GitHub API ─→ trending repos & devs            │
│  LinkedIn   ─→ high-engagement posts            │
│  Twitter/X  ─→ trending discussions             │
│  RSS/HN     ─→ newsletter highlights            │
└──────────────────────┬──────────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────────┐
│               CONTENT GENERATION                │
│                                                 │
│  build_episode_digest.py  →  structured JSON    │
│  generate_episode.py      →  LLM (GitHub Models)│
│  episode_prompt.txt       →  Portuguese voice   │
└──────────────────────┬──────────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────────┐
│                   PUBLISHING                    │
│                                                 │
│  GitHub Action opens PR in blog repo            │
│  Human review → merge → Vercel auto-deploy      │
│  Cross-post to portfolio blog                   │
└─────────────────────────────────────────────────┘
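The collection stage above can be sketched as a small aggregator that normalizes items from each source, dedupes them, and ranks by engagement before handing structured JSON to the generation step. This is a minimal illustration with a hypothetical item shape and function name; the real build_episode_digest.py is more involved:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TrendItem:
    source: str      # "github", "linkedin", "twitter", "rss"
    title: str
    url: str
    engagement: int  # stars, reactions, likes, normalized per source

def build_digest(items: list[TrendItem], top_n: int = 10) -> dict:
    """Dedupe by URL, rank by engagement, and emit a JSON-ready digest."""
    seen: dict[str, TrendItem] = {}
    for item in items:
        # Keep the highest-engagement copy of each URL (the same repo
        # often surfaces on GitHub trending and on Twitter/X).
        if item.url not in seen or item.engagement > seen[item.url].engagement:
            seen[item.url] = item
    ranked = sorted(seen.values(), key=lambda i: i.engagement, reverse=True)
    return {"items": [asdict(i) for i in ranked[:top_n]]}

items = [
    TrendItem("github", "cool-repo", "https://github.com/x/cool-repo", 900),
    TrendItem("twitter", "cool-repo thread", "https://github.com/x/cool-repo", 120),
    TrendItem("rss", "HN highlight", "https://example.com/post", 300),
]
digest = build_digest(items)
print(json.dumps(digest, indent=2))
```

Deduping before ranking matters because the same story typically arrives from several sources at once, and the digest should count it once at its strongest signal.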
💻 Data Pipeline
# Daily at 14:00 UTC — GitHub Action
# 1. Collect last 24h of data from all sources
python scripts/build_episode_digest.py --days 1
# 2. Generate episode via GitHub Models API (GPT-4o)
python scripts/generate_episode.py
# 3. Push to blog repo + open PR for review
# 4. Cross-post adapted version to portfolio blog
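Step 2 boils down to pairing the voice prompt with the digest and sending both to the model. The sketch below only shows the payload assembly; the helper name and file layout are hypothetical, and the actual request to the GitHub Models API (authenticated with GITHUB_TOKEN) is omitted:

```python
import json
import tempfile
from pathlib import Path

def build_messages(prompt_path: Path, digest: dict) -> list[dict]:
    """Assemble a chat payload: the Portuguese voice prompt as the
    system message, the structured digest as the user message."""
    system_prompt = prompt_path.read_text(encoding="utf-8")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": json.dumps(digest, ensure_ascii=False)},
    ]

# Illustrative usage with a throwaway prompt file.
prompt_file = Path(tempfile.mkdtemp()) / "episode_prompt.txt"
prompt_file.write_text(
    "Você é o host do Café com Dopamina. Escreva em português brasileiro.",
    encoding="utf-8",
)
messages = build_messages(prompt_file, {"items": [{"title": "cool-repo"}]})
```

Keeping the voice entirely in episode_prompt.txt means the tone can be tuned without touching pipeline code.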
🔑 Technical Highlights
- GitHub Models API — Uses GITHUB_TOKEN for LLM calls, zero extra secrets
- Cross-repo automation — Single workflow pushes content to 2 separate repos
- Anti-detection scraping — LinkedIn data collected via Playwright with full stealth stack
- SEO-first — Server-rendered, JSON-LD structured data, dynamic OG images via Satori
- Markdown-driven — Episodes are plain .md files with YAML frontmatter, parsed at build time
- Human-in-the-loop — AI generates, but every episode is reviewed before publish
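Parsing a frontmatter episode at build time amounts to splitting the "---"-delimited header from the body. The Next.js site presumably uses a library for this; the naive Python sketch below (flat key: value pairs only) just illustrates the file format:

```python
def parse_episode(markdown: str) -> tuple[dict, str]:
    """Split a '---'-delimited YAML frontmatter block from the body.
    Illustration only: handles flat key: value pairs, no nesting."""
    _, header, body = markdown.split("---\n", 2)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    return meta, body.strip()

episode = """---
title: "Episódio 42"
date: 2024-05-01
---
Bom dia! Hoje no Café com Dopamina...
"""
meta, body = parse_episode(episode)
```

Because the metadata lives next to the content in one file, the AI pipeline, the human reviewer, and the build step all read the same artifact.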
📈 Impact
- 70+ data sources monitored daily (GitHub, LinkedIn, RSS, Reddit, HN)
- Episodes generated in ~30 seconds end-to-end
- Zero manual data collection — fully automated pipeline
- Content published in Brazilian Portuguese for an underserved audience