Ever wondered how random video chat apps actually work? I built Omiro - a modern, open-source Omegle alternative using Go, WebRTC, and Redis. Here's the honest story of building a real-time video chat platform from scratch, including all the "oh crap" moments.
Look, I'm not gonna lie - I was bored one weekend and thought, "How hard can it be to build one of those random video chat things?"
Spoiler alert: It's harder than it looks. But also way more fun than I expected.
So I set out to build Omiro - a random video chat app where strangers can connect, chat, and hit "Next" if things get awkward (let's be real, they always do). Think Omegle, but modern, open-source, and actually maintainable.
I went with Go for the backend because goroutines make juggling thousands of concurrent WebSocket connections painless, and it compiles down to a single small binary that's trivial to containerize.
I used Echo as the web framework because it's lightweight and doesn't try to do too much. Perfect for WebSocket handling.
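To give a feel for the entry point, here's a minimal sketch (not Omiro's actual handler - the /ws route, the echo-back loop, and the wide-open CheckOrigin are all illustrative), pairing Echo with gorilla/websocket for the upgrade, which is what the WriteMessage/PingMessage calls later in this post point to:

package main

import (
    "net/http"

    "github.com/gorilla/websocket"
    "github.com/labstack/echo/v4"
)

// upgrader turns a plain HTTP request into a WebSocket connection.
var upgrader = websocket.Upgrader{
    CheckOrigin: func(r *http.Request) bool { return true }, // tightened later via an origin allowlist
}

func main() {
    e := echo.New()

    // "/ws" is a placeholder route name.
    e.GET("/ws", func(c echo.Context) error {
        conn, err := upgrader.Upgrade(c.Response(), c.Request(), nil)
        if err != nil {
            return err
        }
        defer conn.Close()

        // Echo messages back; the real handler would enqueue the user
        // for matchmaking and relay signaling messages instead.
        for {
            msgType, msg, err := conn.ReadMessage()
            if err != nil {
                return nil
            }
            if err := conn.WriteMessage(msgType, msg); err != nil {
                return nil
            }
        }
    })

    e.Logger.Fatal(e.Start(":8080"))
}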
Redis does the heavy lifting here: the matchmaking queue, pub/sub so signaling reaches users across server instances, and TTL-backed bans.
Redis is basically the glue that holds everything together. Could I have done this without it? Sure. Would I want to? Hell no.
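To make that concrete, here's roughly the shape of it with go-redis - the key and channel names are invented, and a real version has to make the check-then-enqueue step atomic (otherwise two users hitting an empty queue at the same moment would both end up waiting forever):

package matchmaking

import (
    "context"

    "github.com/redis/go-redis/v9"
)

// Key and channel names below are illustrative, not Omiro's actual schema.
const waitingQueue = "omiro:waiting"

// FindPartner pops a waiting user off the queue, or enqueues us if nobody
// is waiting. Returns the partner's ID, or "" if we were enqueued.
// NOTE: production code should do this atomically (e.g. via a Lua script).
func FindPartner(ctx context.Context, rdb *redis.Client, userID string) (string, error) {
    partner, err := rdb.LPop(ctx, waitingQueue).Result()
    if err == redis.Nil {
        // Queue is empty: wait our turn.
        return "", rdb.RPush(ctx, waitingQueue, userID).Err()
    }
    if err != nil {
        return "", err
    }
    return partner, nil
}

// Notify publishes a signaling payload to a user's personal channel, so
// whichever server instance holds that user's WebSocket can forward it.
func Notify(ctx context.Context, rdb *redis.Client, userID, payload string) error {
    return rdb.Publish(ctx, "omiro:user:"+userID, payload).Err()
}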
WebRTC is amazing and terrible at the same time.
The good: Once it's working, you get peer-to-peer video with no media server needed. Your server just does the matchmaking - the actual video goes directly between users.
The bad: all the signaling is on you, and the offer → answer → ICE candidate dance has to happen in exactly the right order, every single time.
I spent an entire evening debugging why both users thought they were the "callee" and neither would initiate the connection. Turns out, trusting string comparison as a fallback was a terrible idea.
The app is fully containerized and available on GitHub Container Registry:
docker run -d \
-p 8080:8080 \
-e REDIS_HOST=redis \
-e REDIS_PORT=6379 \
ghcr.io/r0ld3x/omiro:latest
The Docker image went from 50MB (Alpine-based) to 9.8MB using a distroless base image. That's 80% smaller! No shell, no package manager, just the binary and what it needs to run.
Here's how it actually works:
User A                          User B
   │                               │
   ├─────────── WebSocket ─────────┤
   │                               │
   └─────────► Go Backend ◄────────┘
                   │
                   ├── Redis (Queue + Pub/Sub)
                   │
                   └── WebRTC Signaling
                           │
                           └── Direct P2P Video/Audio
Simple, right? Took me way longer than it should have.
Problem: Connections were dying after 60 seconds.
Solution: Implement ping/pong keepalive. Server pings every 30 seconds, browser responds automatically. Easy fix once I figured it out.
ticker := time.NewTicker(30 * time.Second)
defer ticker.Stop()
// inside the select of the connection's write loop:
case <-ticker.C:
    conn.WriteMessage(websocket.PingMessage, nil)
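The read-side wiring is an assumption on my part (the post only says the browser answers automatically), but with gorilla/websocket it would look roughly like this: bump the read deadline every time a pong comes back, so a quiet but healthy connection never gets reaped.

package ws

import (
    "time"

    "github.com/gorilla/websocket"
)

// Illustrative value; it must comfortably exceed the 30-second ping interval.
const pongWait = 60 * time.Second

// keepAlive extends the read deadline whenever the browser's automatic
// pong arrives; if pongs stop, the next read times out and the
// connection gets cleaned up.
func keepAlive(conn *websocket.Conn) {
    conn.SetReadDeadline(time.Now().Add(pongWait))
    conn.SetPongHandler(func(string) error {
        return conn.SetReadDeadline(time.Now().Add(pongWait))
    })
}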
Problem: After clicking "Next", users could still send messages to their old partner.
Solution: Proper mutex locking and clearing partner references immediately. Race conditions are sneaky.
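In sketch form it looks like this - the struct is trimmed down to just what matters for the race, so treat the names as placeholders rather than Omiro's real data model:

package hub

import "sync"

// Client holds per-connection state; only the bits relevant to the race
// are shown here.
type Client struct {
    mu      sync.Mutex
    partner *Client
}

// Next atomically detaches both sides of a pairing, so a chat message
// racing in right after "Next" finds no partner to be delivered to.
func (c *Client) Next() {
    c.mu.Lock()
    old := c.partner
    c.partner = nil
    c.mu.Unlock()

    if old != nil {
        old.mu.Lock()
        old.partner = nil
        old.mu.Unlock()
    }
}

// Send forwards a chat message only if the partner reference is still set.
func (c *Client) Send(msg string) {
    c.mu.Lock()
    p := c.partner
    c.mu.Unlock()
    if p != nil {
        // hand msg to p's WebSocket writer here
    }
}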
Problem: Sometimes neither user would initiate the WebRTC connection.
Solution: Server explicitly tells one user should_call: true and the other should_call: false. No more "you go first" standoffs.
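The match message the server sends looks something like this - should_call is from the fix above, the other field names are my guesses:

package hub

import "encoding/json"

// matchMessage is what each user receives when the server pairs them up.
// Only should_call is described above; the other fields are illustrative.
type matchMessage struct {
    Type       string `json:"type"`
    PartnerID  string `json:"partner_id"`
    ShouldCall bool   `json:"should_call"`
}

// pairMessages builds one payload per user: exactly one side is told to
// create the WebRTC offer, the other just waits for it.
func pairMessages(userA, userB string) (toA, toB []byte, err error) {
    toA, err = json.Marshal(matchMessage{Type: "matched", PartnerID: userB, ShouldCall: true})
    if err != nil {
        return nil, nil, err
    }
    toB, err = json.Marshal(matchMessage{Type: "matched", PartnerID: userA, ShouldCall: false})
    if err != nil {
        return nil, nil, err
    }
    return toA, toB, nil
}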
Problem: ICE candidates showing up before remote description is set = errors everywhere.
Solution: Queue the ICE candidates and process them after setting the remote description. Simple but effective.
I could've slapped together a basic HTML page, but where's the fun in that?
The UI features:
And yes, I mirrored the local video so you see yourself like in a mirror. Because that's what users expect, even if they don't realize it.
Rate limiting: per-IP limits on WebSocket connections. Default: 60 connections per minute. Adjustable.
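The shape of it is roughly a fixed-window counter per IP. This is a simplified sketch, not the real limiter - among other things, it never evicts old IPs:

package limiter

import (
    "sync"
    "time"
)

// IPLimiter allows at most `limit` new connections per IP per minute,
// using a single fixed window that resets once a minute.
type IPLimiter struct {
    mu     sync.Mutex
    limit  int
    counts map[string]int
    window time.Time
}

func New(limit int) *IPLimiter {
    return &IPLimiter{limit: limit, counts: make(map[string]int), window: time.Now()}
}

// Allow reports whether this IP may open another WebSocket connection.
func (l *IPLimiter) Allow(ip string) bool {
    l.mu.Lock()
    defer l.mu.Unlock()
    if time.Since(l.window) > time.Minute {
        l.counts = make(map[string]int) // start a fresh window
        l.window = time.Now()
    }
    l.counts[ip]++
    return l.counts[ip] <= l.limit
}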
Signed tokens: HMAC-SHA256 with timestamps. Format: uuid:timestamp:signature
Can't be forged. Can't be replayed after expiration.
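The uuid:timestamp:signature format is simple enough to sketch end to end; how the secret is loaded and what expiry window is used are assumptions here:

package auth

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "strconv"
    "strings"
    "time"
)

// Sign produces a uuid:timestamp:signature token.
func Sign(secret []byte, uuid string) string {
    ts := strconv.FormatInt(time.Now().Unix(), 10)
    mac := hmac.New(sha256.New, secret)
    mac.Write([]byte(uuid + ":" + ts))
    return fmt.Sprintf("%s:%s:%s", uuid, ts, hex.EncodeToString(mac.Sum(nil)))
}

// Verify rejects malformed, forged, or expired tokens.
func Verify(secret []byte, token string, maxAge time.Duration) bool {
    parts := strings.Split(token, ":")
    if len(parts) != 3 {
        return false
    }
    ts, err := strconv.ParseInt(parts[1], 10, 64)
    if err != nil || time.Since(time.Unix(ts, 0)) > maxAge {
        return false // expired (or the timestamp isn't even a number)
    }
    mac := hmac.New(sha256.New, secret)
    mac.Write([]byte(parts[0] + ":" + parts[1]))
    expected := hex.EncodeToString(mac.Sum(nil))
    // hmac.Equal is a constant-time comparison, so timing attacks don't help.
    return hmac.Equal([]byte(expected), []byte(parts[2]))
}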
Bans: Redis-backed with TTL. Ban duration is configurable. Admin can ban/unban via API (when I build that part).
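Which boils down to a key with a TTL - something like this, with an invented key prefix:

package moderation

import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

// Ban sets a key that expires on its own; when the TTL lapses, so does the ban.
func Ban(ctx context.Context, rdb *redis.Client, ip string, d time.Duration) error {
    return rdb.Set(ctx, "omiro:ban:"+ip, "1", d).Err()
}

// IsBanned is a single EXISTS check on every new connection.
func IsBanned(ctx context.Context, rdb *redis.Client, ip string) (bool, error) {
    n, err := rdb.Exists(ctx, "omiro:ban:"+ip).Result()
    return n > 0, err
}

// Unban deletes the key early (the future admin API would call this).
func Unban(ctx context.Context, rdb *redis.Client, ip string) error {
    return rdb.Del(ctx, "omiro:ban:"+ip).Err()
}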
Origin checks: WebSocket upgrades are accepted only from allowed origins. No cross-site WebSocket hijacking for you.
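With gorilla/websocket that's the CheckOrigin hook from the earlier upgrader sketch, pointed at an allowlist instead of returning true. The domains here are placeholders:

package ws

import (
    "net/http"

    "github.com/gorilla/websocket"
)

// Placeholder domains; in practice this would come from configuration.
var allowedOrigins = map[string]bool{
    "https://your-domain.example": true,
}

var strictUpgrader = websocket.Upgrader{
    // Reject the upgrade outright when the Origin header isn't on the list.
    CheckOrigin: func(r *http.Request) bool {
        return allowedOrigins[r.Header.Get("Origin")]
    },
}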
With proper configuration, it scales horizontally. And the P2P nature of WebRTC means video doesn't touch my servers at all - I'm just doing matchmaking and signaling.
# Clone and run
git clone https://github.com/r0ld3x/omiro.git
cd omiro
docker-compose up
Done. Redis + App + Network all configured.
# Just pull and run
docker pull ghcr.io/r0ld3x/omiro:latest
docker run -d -p 8080:8080 -e REDIS_HOST=redis ghcr.io/r0ld3x/omiro:latest
GitHub Actions automatically builds multi-platform images (amd64 + arm64) and pushes to GitHub Container Registry. CI/CD done right.
WebRTC signaling is hard. But not impossible. The key is understanding the flow: offer → answer → ICE candidates. In that order. Always.
Everyone talks about Redis as a cache. It's so much more. Pub/sub alone makes multi-server setups trivial.
Go's goroutines: handling thousands of WebSocket connections concurrently without thinking about thread pools or async/await? Yes please.
From "works on my machine" to "works everywhere" in one Dockerfile.
9.8MB vs 50MB. No shell. No package manager. Just the binary. Secure and tiny.
The entire thing is open source and ready to run:
# Quick start
docker run -d -p 8080:8080 ghcr.io/r0ld3x/omiro:latest
# Open browser
http://localhost:8080
Or check out the code: github.com/r0ld3x/omiro
Things I might add:
But honestly? It works pretty well as-is. Sometimes done is better than perfect.
Should you build your own random video chat app?
Maybe! If you want to learn WebRTC signaling, WebSockets, Go concurrency, Redis beyond caching, and Docker, go for it.
It's a fun project that touches a lot of modern web technologies.
Should you use this in production?
I mean... you could. It's solid. Has security features. Scales horizontally.
But you'll want to beef up moderation and the admin tooling first. Random video chat apps attract... interesting people. You've been warned.
Building Omiro taught me more about real-time communication than any tutorial ever could. The "oh shit" moments, the late-night debugging sessions, the satisfaction when WebRTC finally connected - it was all worth it.
Is it perfect? No. But it works, it's fast, and it's open source. You can run it in 30 seconds with Docker, or dive into the code and make it your own.
And honestly? That's pretty cool.
Built with: Go, WebRTC, Redis, Echo Framework, Docker, and way too much coffee ☕
Try it: github.com/r0ld3x/omiro
Docker: docker pull ghcr.io/r0ld3x/omiro:latest
License: MIT (do whatever you want with it)
P.S. - If you build something cool with this, let me know. Or don't. I'm not your boss.
P.P.S. - Yes, the local video is mirrored. That's intentional. Trust me.