shmlkv's AI Building Pipeline

shmlkv (GitHub) is a fullstack product engineer who builds “beautiful things fast.” His GitHub shows 35 repositories ranging from DNA analysis with Claude to automated video generation pipelines to turning Obsidian into an AI workspace.
The pattern across his projects: chain specialized models together, orchestrate them with simple scripts, ship working tools fast. No frameworks. Just Python glue code connecting APIs.
Multi-model orchestration
Most people use one AI at a time. shmlkv chains them. His ai-slop-pipeline-tiktoker shows the pattern:
- Story generation: Claude or Grok writes the narrative
- Image generation: Flux creates visuals
- Video generation: Veo 3.1 animates the images
- Voice generation: ElevenLabs adds narration
Each model does what it’s best at. The Python script orchestrates the handoffs.
# Install and configure
git clone https://github.com/shmlkv/ai-slop-pipeline-tiktoker
cd ai-slop-pipeline-tiktoker
pip install -r requirements.txt
# Generate a 60-second video
python generate_video.py \
--prompt "explain quantum computing" \
--style educational \
--voice narrator
The pipeline runs models in parallel where possible. Total generation time: 3-6 minutes for a 60-second video. Sequential would take 15-20 minutes.
This is the three-layer workflow applied to content creation: specify (prompt), implement (each AI does its part), verify (watch the output).
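The orchestration logic is simple enough to sketch in plain Python. This is a minimal sketch of the fan-out/fan-in shape with placeholder functions standing in for the real model calls (Claude/Grok, Flux, Veo, ElevenLabs), not the repo's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder stages; in the real pipeline each one calls an external API.
def write_story(prompt):
    return f"story for: {prompt}"

def make_images(story):
    return [f"frame-{i}.png" for i in range(3)]

def make_voiceover(story):
    return "narration.mp3"

def animate(images):
    return "clips.mp4"

def run_pipeline(prompt):
    story = write_story(prompt)          # step 1: sequential, everything depends on it
    with ThreadPoolExecutor() as pool:   # steps 2-3: images and voice are independent
        images_f = pool.submit(make_images, story)
        voice_f = pool.submit(make_voiceover, story)
        images, voice = images_f.result(), voice_f.result()
    video = animate(images)              # step 4: needs the images
    return {"video": video, "voice": voice}

print(run_pipeline("explain quantum computing"))
```

The speedup comes entirely from the middle: image and voice generation don't depend on each other, so they run concurrently while the story and final animation stay sequential.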
Obsidian as AI hub
shmlkv built three plugins that turn Obsidian into an AI workspace:
obsidian-prompt-assistant — custom prompts in the command palette:
{
  "prompts": [
    {
      "name": "Morning Pages",
      "prompt": "Help me process these thoughts using CBT techniques"
    },
    {
      "name": "Code Review",
      "prompt": "Review this code for security issues and suggest improvements"
    }
  ]
}
Each prompt becomes a command. Select text, run command, AI responds. Works with 100+ models via OpenRouter.
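The config-to-command mapping is easy to illustrate. The sketch below shows the pattern, not the plugin's actual source: it turns a named prompt plus selected text into an OpenAI-compatible chat request body of the kind OpenRouter accepts (the model id is a placeholder):

```python
import json

# Same shape as the plugin's prompts config above.
CONFIG = """
{
  "prompts": [
    {"name": "Morning Pages",
     "prompt": "Help me process these thoughts using CBT techniques"},
    {"name": "Code Review",
     "prompt": "Review this code for security issues and suggest improvements"}
  ]
}
"""

def build_request(command_name, selected_text, config=CONFIG):
    """Turn a named prompt + the selected note text into a chat request body."""
    prompts = {p["name"]: p["prompt"] for p in json.loads(config)["prompts"]}
    return {
        "model": "anthropic/claude-sonnet-4",  # placeholder model id
        "messages": [
            {"role": "system", "content": prompts[command_name]},
            {"role": "user", "content": selected_text},
        ],
    }

body = build_request("Code Review", "eval(user_input)")
print(body["messages"][0]["content"])
```

Because OpenRouter speaks the OpenAI chat-completions schema, swapping among its models is just a change to the `model` string.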
obsidian-companion — autocomplete while writing:
Ghost text appears as you type. Tab to accept. Like Copilot but for notes and journal entries.
obsidian-chat-cbt-plugin — CBT-inspired journaling:
Pre-built prompts for cognitive behavioral therapy patterns. Exposure Ladder, Activity Plan, Habit Builder, Avoidance Check. AI responses get summarized into structured tables.
The pattern: AI assistants where you already work. No separate app. The notes are the interface.
Conversational data analysis
dna-claude-analysis takes raw DNA data (23andMe, AncestryDNA, MyHeritage) and makes it explorable through conversation.
Python scripts analyze 16+ categories: cardiovascular risk, pharmacogenomics, ancestry markers, nutrition, sleep, cognitive traits, athletic performance.
# Process raw DNA file
python analyze_dna.py --file raw_data.txt
# Generates JSON results + HTML dashboard
# Then chat with Claude about the findings
python chat.py "What does my data say about caffeine metabolism?"
Claude interprets the genetic markers in natural language. Ask follow-up questions. Explore patterns. The dashboard visualizes everything in a “DNA Terminal” interface.
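The raw exports are simple to parse: 23andMe files are tab-separated rsid/chromosome/position/genotype lines with `#` comment headers. A minimal reader (a sketch of that format, not the repo's analyze_dna.py):

```python
import csv
import io

# Tiny inline sample in the 23andMe raw-export format.
SAMPLE = """\
# This data file generated by 23andMe
rsid\tchromosome\tposition\tgenotype
rs762551\t15\t75041917\tAA
rs4988235\t2\t136608646\tCT
"""

def load_genotypes(text):
    """Map rsid -> genotype, skipping '#' comment lines and the header row."""
    rows = csv.reader(
        (line for line in io.StringIO(text) if not line.startswith("#")),
        delimiter="\t",
    )
    next(rows)  # skip header: rsid, chromosome, position, genotype
    return {row[0]: row[3] for row in rows}

genos = load_genotypes(SAMPLE)
print(genos["rs762551"])  # rs762551 is a commonly cited caffeine-metabolism marker
```

Once the file is a rsid-to-genotype dict, the "chat about it" step is just handing relevant markers to the model as context.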
This is the same pattern as Simon Willison’s LLM logging: transform complex data into something queryable through conversation.
Feed-to-AI pipelines
telegram-rss-parser-web converts Telegram channels into RSS feeds and JSON.
Why? Because LLMs can’t read Telegram directly. But they can consume RSS and JSON.
# Deploy parser
git clone https://github.com/shmlkv/telegram-rss-parser-web
docker-compose up
# Subscribe to any public Telegram channel
# Get RSS: /feed/channel_name
# Get JSON: /api/channel_name
The pattern: restructure information into AI-friendly formats. Then build tools that consume those formats.
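The restructuring step itself is a few lines. A standard-library sketch of RSS-to-JSON (illustrating the pattern with inline sample data, not the parser's actual endpoints):

```python
import json
import xml.etree.ElementTree as ET

# Minimal RSS 2.0 sample of the kind the parser would emit for a channel.
RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>demo_channel</title>
  <item><title>Post one</title><link>https://t.me/demo/1</link>
        <description>First post</description></item>
  <item><title>Post two</title><link>https://t.me/demo/2</link>
        <description>Second post</description></item>
</channel></rss>"""

def rss_to_json(xml_text):
    """Flatten an RSS feed into the kind of JSON an LLM can consume directly."""
    channel = ET.fromstring(xml_text).find("channel")
    return json.dumps({
        "channel": channel.findtext("title"),
        "items": [
            {"title": item.findtext("title"),
             "link": item.findtext("link"),
             "text": item.findtext("description")}
            for item in channel.findall("item")
        ],
    }, indent=2)

print(rss_to_json(RSS))
```

The same flattening works for any source the model can't reach natively: scrape or subscribe once, emit JSON, and every downstream AI tool gets a clean feed.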
Voice-first interfaces
Two projects show the voice pattern:
voice-input-steamdeck — voice input on Steam Deck using Groq API for transcription. Trigger with Numlock.
selftalker — voice journaling web app. Record thoughts, AI transcribes and structures them.
Voice as input, text as output, AI as the bridge. See voice-first content guide for more patterns.
The mini-app generator
tg-vibecoding-platform generates mini-apps inside Telegram.
Describe what you want in text. AI generates the UI. Deploy to Telegram. Built with TON blockchain integration for payments.
This is vibe coding in a group chat: describe, generate, deploy, iterate with friends.
The automation mindset
shmlkv’s most starred repo is polymarket-copy-trading-bot — mirrors trades from top Polymarket traders automatically.
Not AI, but shows the pattern: watch signals, execute actions, no human in the loop. Same mindset as his AI pipelines.
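The watch-signals/execute-actions skeleton is the same whether the signal is a trade or a model output. A generic sketch with stub functions (not the bot's code):

```python
def fetch_signals():
    """Stub: in the real bot this would poll the traders being mirrored."""
    return [{"market": "demo", "side": "YES", "size": 10}]

def execute(order):
    """Stub: in the real bot this would place the mirrored trade."""
    return {"status": "filled", **order}

def run(iterations=1):
    """Watch signals, execute actions, no human in between.
    A production loop would add a poll interval and error handling."""
    fills = []
    for _ in range(iterations):
        fills.extend(execute(signal) for signal in fetch_signals())
    return fills

print(run())
```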
What patterns stick
Chain specialized models — don’t make Claude do everything. Use the best model for each step. Flux for images, Veo for video, ElevenLabs for voice, Claude for reasoning.
Parallel execution — run independent steps at the same time. In the video pipeline, that cuts 15-20 minutes of sequential generation to 3-6.
AI where you work — Obsidian plugins, not separate apps. Voice input on the device you’re already using.
Data-to-dialogue — transform complex datasets into conversational interfaces. DNA, RSS feeds, whatever. Make it queryable through natural language.
Simple orchestration — Python scripts calling APIs. No frameworks. Easy to debug when models change or APIs break.
What I adopted
The multi-model pipeline pattern changed how I think about AI projects. I used to ask Claude to do everything. Now I use specialized models and chain them.
The Obsidian plugins are immediate value. obsidian-prompt-assistant turns any prompt into a reusable command. obsidian-companion gives you autocomplete everywhere.
Start with one pipeline project. Pick something with clear steps: generate text, create an image, transform a format. Chain the right model for each step. You'll see why specialized models beat a general-purpose one for production work.
The code is readable Python. Clone a repo, read the scripts, understand the pattern. Then apply it to your own projects.
Next: Three-Layer Workflow