ShyFaceTime
A video call where you can only see the other person when you're not looking at the screen.
Built to explore presence, vulnerability, and the tension of eye contact in digital space.
Overview
What if looking at someone made them disappear?
ShyFaceTime is a WebRTC video call app that uses real-time gaze detection to blur the other person's video when you look directly at the screen. The only way to see them clearly is to look away.
The concept plays with how eye contact works in real life — how we look away when things feel too intimate, how we steal glances, how presence requires a kind of vulnerability. This app takes that dynamic and makes it the entire interface.
Built with Node.js, Socket.IO, WebRTC, and ml5.js FaceMesh for real-time iris and head tracking. Deployed live at shyfacetime.live.
The Experience
From landing page to call — every screen is designed around the blur
The app has a full shell with multiple views, each reinforcing the theme of shyness and indirect attention. A blur lens follows your cursor across every page, and all interactive elements blur on tap.
Land on the intro
Spline 3D background, blur lens active
The landing page introduces the concept with a short story. A Spline 3D scene plays behind the content, and a blur lens follows your mouse — setting the tone before you've even signed in.
Spline + backdrop-filter lens
Sign in with Google
Firebase Authentication
One-tap Google sign-in via Firebase Auth. The transition is instant — no loading screens, no delays. You're in the waiting room immediately.
Firebase Auth (Google)
Browse the waiting room
Nearby and familiar faces
The waiting room shows other online users as cards with their vibe status and time elapsed. Two tabs — nearby (strangers) and familiar faces (people you've called before). Tap a card to initiate a call.
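The server-side roster behind these cards could be as simple as a map keyed by socket ID. A minimal sketch, assuming the real app keeps something similar and broadcasts it over Socket.IO — the function and field names here are hypothetical, not the app's actual code:

```javascript
// Hypothetical sketch of the waiting-room presence roster.
// Each entry carries the vibe status and join time shown on the cards.
function createRoster() {
  const online = new Map(); // socketId -> { name, vibe, joinedAt }

  return {
    join(socketId, user, now = Date.now()) {
      online.set(socketId, { ...user, joinedAt: now });
    },
    leave(socketId) {
      online.delete(socketId);
    },
    // Data for the waiting-room cards: vibe status plus elapsed time.
    cards(now = Date.now()) {
      return [...online.values()].map((u) => ({
        name: u.name,
        vibe: u.vibe,
        elapsedMs: now - u.joinedAt,
      }));
    },
  };
}
```

On `connection` and `disconnect`, the server would call `join`/`leave` and re-broadcast `cards()` to everyone; persistent history (familiar faces) would go to SQLite separately.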
Socket.IO presence + SQLite
Connecting animation
Glass circles with blur-breathe effect
While the WebRTC peer connection is established, both users see circles with their names, pulsing with a blur-breathe animation. It sets the emotional tone for the call.
WebRTC / SimplePeer
The call — see only when you look away
Gaze-driven blur on both sides
Two video feeds sit side by side. Your partner's video blurs based on your gaze — look at the screen and they disappear; look away and they come into focus. Meanwhile, your video blurs based on their gaze. Both of you are always negotiating attention.
ml5.js FaceMesh + canvas blur
Design Decisions
Why it looks and feels this way
Every design choice reinforces the theme — shyness, indirectness, and the beauty of not-quite-seeing.
Why does a blur lens follow your cursor?
The cursor-following blur introduces the core mechanic before you even enter a call. It trains you to notice how your attention changes what you see, making the gaze-driven video blur feel like a natural extension.
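The cursor-following lens can be built with `backdrop-filter` alone. A minimal sketch, assuming a single fixed-position element repositioned on `mousemove` — the `.blur-lens` class name is an assumption:

```css
/* Hypothetical sketch: a circular "lens" that blurs whatever sits
   behind it. A mousemove handler would update its left/top so it
   tracks the cursor. */
.blur-lens {
  position: fixed;
  width: 160px;
  height: 160px;
  border-radius: 50%;
  pointer-events: none;        /* let clicks pass through the lens */
  backdrop-filter: blur(12px); /* blur the content behind the lens */
  transform: translate(-50%, -50%);
}
```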
Why the warm, muted aesthetic?
The off-white (#F6F4EF) and sage green (#8FB59A) are inspired by Rinko Kawauchi's photography — soft, intimate, and slightly veiled. A clinical interface would undermine the emotional core of the project.
Why track familiar faces separately?
Calling a stranger once is an experiment. Calling someone again means something. The familiar faces tab and call history (Echoes) give weight to repeated connections, turning the app from a novelty into something with memory.
Why does every interaction blur?
Hover a button, click a link, transition between pages — everything blurs. It makes the blur feel like the material of the interface, not just a gimmick on the call screen. By the time you're in a video call, the blur already feels like second nature.
Why are the camera windows so small?
Most video call apps fill the screen with your face. But ShyFaceTime users are shy — a big window of yourself staring back feels confrontational. Keeping the video feeds small (38% width) makes the experience feel less exposed and more like glancing than staring.
Why the flowy background animation?
A static background would make the app feel rigid and utilitarian. The slow, organic Spline animation behind the interface gives the space a living, breathing quality — like the app itself is shy too, shifting and moving while you're not paying attention.
How the Gaze Works
Iris tracking, head pose, and smoothed blur — all in the browser
The gaze detection runs entirely client-side using ml5.js FaceMesh. It tracks 478 facial landmarks per frame and combines iris position (85% weight, with the iris offset amplified by a multiplier of 8) and head orientation (15% weight) into a single look score.
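The weighting described above can be sketched as a small pure function. The weights and the iris multiplier come from the write-up; how the iris offset and head deviation are normalized is an assumption here:

```javascript
// Weights and multiplier as described; input normalization is assumed.
const IRIS_WEIGHT = 0.85;
const HEAD_WEIGHT = 0.15;
const IRIS_MULTIPLIER = 8;

// irisOffset: distance of the iris center from "looking at camera" (0..1)
// headDeviation: normalized head yaw/pitch away from the screen (0..1)
// Returns a score in [0, 1]; higher means "looking at the screen".
function lookScore(irisOffset, headDeviation) {
  const iris = Math.min(1, irisOffset * IRIS_MULTIPLIER); // amplify small offsets
  const combined = IRIS_WEIGHT * (1 - iris) + HEAD_WEIGHT * (1 - headDeviation);
  return Math.max(0, Math.min(1, combined));
}
```

The multiplier makes tiny iris movements count: even a 5% offset from center already registers as a substantial look-away.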
When no face is detected, the video defaults to full blur. The raw gaze data is smoothed with a lerp factor of 0.08 before being sent to the partner via Socket.IO every 50ms. On the receiving end, the blur is applied to the canvas (desktop) or as a CSS filter fallback (mobile).
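The smoothing step described above might look like the following. The lerp factor of 0.08, the no-face fallback to full blur, and the 50ms emit cadence are from the write-up; the maximum blur radius is a hypothetical value:

```javascript
const LERP_FACTOR = 0.08;
const MAX_BLUR_PX = 24; // hypothetical maximum blur radius

// Ease the current value a fraction of the way toward the target.
function lerp(current, target, t = LERP_FACTOR) {
  return current + (target - current) * t;
}

// One per-frame step: no face detected means full blur.
function nextBlur(currentBlur, lookScore, faceDetected) {
  const target = faceDetected ? lookScore * MAX_BLUR_PX : MAX_BLUR_PX;
  return lerp(currentBlur, target);
}

// The emit loop then sends the smoothed value, not the raw score:
// setInterval(() => socket.emit('gaze', { blur: currentBlur }), 50);
```

Because the lerp runs every frame but the network send only every 50ms, the partner sees a value that was already eased, so jitter in the detector never reaches the other side.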
A watchdog monitors the FaceMesh detector and auto-restarts it if it gets stuck — a common issue with ml5's detectStart in long-running sessions. The system sends the smoothed blur amount (not raw scores) to ensure the partner's preview of your video feels natural and continuous.
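The watchdog can be reduced to a timestamp check. A minimal sketch, assuming the app records the time of the last detection callback — the timeout value and variable names are assumptions; only the restart-on-stall idea is from the text:

```javascript
const STALL_TIMEOUT_MS = 3000; // hypothetical stall threshold

// True if no detection callback has fired within the timeout window.
function isStalled(lastDetectionAt, now, timeoutMs = STALL_TIMEOUT_MS) {
  return now - lastDetectionAt > timeoutMs;
}

// In the app this would run on an interval alongside ml5's detectStart:
// let lastDetectionAt = Date.now();
// faceMesh.detectStart(video, () => { lastDetectionAt = Date.now(); /* ... */ });
// setInterval(() => {
//   if (isStalled(lastDetectionAt, Date.now())) {
//     faceMesh.detectStop();
//     faceMesh.detectStart(video, onResults);
//   }
// }, 1000);
```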
Reflection
Designing around a contradiction
The core of ShyFaceTime is a paradox: you want to see someone, but looking makes them disappear. That tension is the whole point. It forces you to be present in a way that normal video calls never do — you can't multitask, you can't zone out, because your attention is literally the interface.
Building this taught me how much of "presence" in digital communication is performative. We stare at screens pretending to make eye contact. This app strips that pretense away and replaces it with something more honest — a negotiation between wanting to see and being willing to look away.