Demo video: face_looker_example.mov
Generate a grid of face images with different gaze directions using AI, then use them to create an interactive face that follows the user's cursor in real-time.
This project has two parts:
- Generate Face Images - Create hundreds of face images looking in different directions using either:
  - Replicate Model (easiest) - Upload your image and get everything you need in one click
  - Python Script (advanced) - More control over generation parameters
- React Hook - Makes those images interactive by displaying the right image based on cursor position
Perfect for creating engaging portfolio headers, interactive avatars, or fun UI elements!
- A 512×512 photo of a face (your photo or any portrait)
- Node.js (for React implementation)
- Replicate API account (free tier available)
You have two options to generate the face images:
The easiest way to get started is using the dedicated Replicate model created in collaboration with fofr. Simply upload your face image and it will automatically generate:
- ✅ All the face images looking in different directions
- ✅ A sprite/grid sheet that combines all faces into a single image
- ✅ A ZIP file containing vanilla HTML, JS, and CSS to render the effect
- ✅ A preview video showing how the animation will look
To use it:
- Visit https://replicate.com/kylan02/face-looker
- Upload your 512×512 face image
- Download the generated ZIP file
- Extract and use the images/code in your project
The model automatically uses the Expression Editor model to generate all the images you need. See the model page for more details.
If you prefer more control over the generation process or want to customize the parameters, you can use the Python script described below.
```bash
# Clone or download this repository
cd face_looker

# Create a virtual environment (recommended)
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

- Sign up at replicate.com
- Go to your account settings
- Copy your API token
- Set it as an environment variable:
```bash
export REPLICATE_API_TOKEN=your_token_here
```

Or add it to your `.bashrc`/`.zshrc` for permanent use:

```bash
echo 'export REPLICATE_API_TOKEN=your_token_here' >> ~/.zshrc
source ~/.zshrc
```

Image Requirements:
- Use a 512×512 pixel image for best results
- Face should be centered and looking straight ahead
- Good lighting, clear features
- Neutral expression works best
Where to put your image:

- Any filename works! You can use any name like `me.jpg`, `portrait.png`, `selfie.jpeg`, etc.
- Any location works! You can put it in the project folder or anywhere on your computer

Example file structure:

```text
face_looker/
├── main.py
├── my_face.jpg       ← Your image can go here
├── requirements.txt
└── README.md
```

Or anywhere else:

```text
/Users/you/Pictures/portrait.png
/path/to/any/folder/selfie.jpg
```
Before running the script, make sure you have:

- ✅ Set your `REPLICATE_API_TOKEN` environment variable
- ✅ Activated your virtual environment (if using one)
- ✅ Installed dependencies with `pip install -r requirements.txt`
- ✅ Put your face image in the project directory
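The checklist above can be automated with a small preflight check. A sketch (a hypothetical helper, not part of this repository):

```python
import os
from pathlib import Path

def preflight(image_path):
    """Return a list of problems; an empty list means you're ready to run."""
    problems = []
    if not os.environ.get("REPLICATE_API_TOKEN"):
        problems.append("REPLICATE_API_TOKEN is not set")
    if not Path(image_path).is_file():
        problems.append(f"image not found: {image_path}")
    return problems

if __name__ == "__main__":
    for problem in preflight("./my_face.jpg"):
        print(f"FIX: {problem}")
```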
How many images will be created?
The default settings create 121 images (11×11 grid), but you can customize this:
| Step Size | Grid Size | Total Images | Generation Time | Smoothness |
|---|---|---|---|---|
| `--step 5` | 7×7 | 49 images | ~2-3 minutes | Basic |
| `--step 3` (default) | 11×11 | 121 images | ~5-8 minutes | Good |
| `--step 2.5` | 13×13 | 169 images | ~8-12 minutes | Smooth |
| `--step 2` | 16×16 | 256 images | ~12-18 minutes | Very smooth |
Basic usage:
```bash
# Any filename works!
python main.py --image ./my_face.jpg --out ./out
python main.py --image ./portrait.png --out ./out
python main.py --image /Users/you/Pictures/selfie.jpg --out ./out
```

Custom grid density:
```bash
# Smoother transitions (more images, longer generation time)
python main.py --image ./my_face.jpg --out ./out --step 2.5

# Faster generation (fewer images, less smooth)
python main.py --image ./my_face.jpg --out ./out --step 5
```

Full options:
```bash
python main.py \
  --image ./my_face.jpg \
  --out ./out \
  --min -15 \
  --max 15 \
  --step 3 \
  --size 256 \
  --skip-existing
```

`--image` takes the input image path or URL, `--out` the output directory, and `--skip-existing` skips already generated images; the remaining flags are:

| Parameter | Default | Description |
|---|---|---|
| `--min` | -15 | Minimum pupil position (left/up) |
| `--max` | 15 | Maximum pupil position (right/down) |
| `--step` | 3 | Grid spacing (smaller = smoother, more images) |
| `--size` | 256 | Output image dimensions (256×256) |
Image count formula: `((max - min) / step + 1)²`

Examples:

- `step=3` (default): 121 images (11×11 grid)
- `step=2.5`: 169 images (13×13 grid)
- `step=5`: 49 images (7×7 grid)
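To sanity-check a custom configuration before spending GPU time, the formula can be evaluated directly (a sketch; it assumes the range divides evenly by the step, as in the presets above):

```python
def grid_info(p_min=-15, p_max=15, step=3):
    """Images per axis and in total: ((max - min) / step + 1) squared."""
    per_axis = int(round((p_max - p_min) / step)) + 1
    return per_axis, per_axis ** 2

# Default settings: an 11x11 grid of 121 images
print(grid_info())          # (11, 121)
print(grid_info(step=2.5))  # (13, 169)
```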
The script generates:

- Images: `gaze_px{X}_py{Y}_256.webp` files
  - Example: `gaze_px0_py0_256.webp` (looking at center)
  - Example: `gaze_px15_py0_256.webp` (looking right)
  - Example: `gaze_px0_pym15_256.webp` (looking up)
- CSV Index: `index.csv` mapping filenames to coordinates
```text
out/
├── gaze_px-15_py-15_256.webp
├── gaze_px-15_py-12_256.webp
├── ...
├── gaze_px15_py15_256.webp
└── index.csv
```
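If you want to consume `index.csv` programmatically, it can be loaded into a coordinate-to-filename map. A sketch (the column names here are assumptions — check the header of your generated file):

```python
import csv
import io

def load_index(fp):
    """Map (px, py) gaze coordinates to image filenames."""
    return {(int(row["px"]), int(row["py"])): row["filename"]
            for row in csv.DictReader(fp)}

# Hypothetical index.csv contents, for illustration only:
sample = """filename,px,py
gaze_px0_py0_256.webp,0,0
gaze_px15_py0_256.webp,15,0
"""
lookup = load_index(io.StringIO(sample))
print(lookup[(15, 0)])  # gaze_px15_py0_256.webp
```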
- Copy the generated faces to your public folder:

```bash
# Copy all face images to your React public folder
cp -r ./out/faces /path/to/your-react-app/public/faces
```

- Copy the React files to your project:

```bash
# Copy the hook and component
cp useGazeTracking.js /path/to/your-react-app/src/hooks/
cp FaceTracker.jsx /path/to/your-react-app/src/components/
cp FaceTracker.css /path/to/your-react-app/src/components/
```

- Import and use the component:

```jsx
import FaceTracker from './components/FaceTracker';

function App() {
  return (
    <div className="App">
      <h1>My Portfolio</h1>

      {/* Basic usage */}
      <FaceTracker />

      {/* With custom styling */}
      <FaceTracker
        className="my-custom-class"
        basePath="/faces/"
      />
    </div>
  );
}
```

For example, as an interactive header:

```jsx
import FaceTracker from './components/FaceTracker';

function Header() {
  return (
    <header style={{
      height: '400px',
      display: 'flex',
      alignItems: 'center',
      justifyContent: 'center'
    }}>
      <div style={{ width: '300px', height: '300px' }}>
        <FaceTracker
          basePath="/faces/"
          showDebug={process.env.NODE_ENV === 'development'}
        />
      </div>
    </header>
  );
}
```

For more control, use the `useGazeTracking` hook directly:
```jsx
import { useRef } from 'react';
import { useGazeTracking } from './hooks/useGazeTracking';

function CustomFaceComponent() {
  const containerRef = useRef(null);
  const { currentImage, isLoading, error } = useGazeTracking(
    containerRef,
    '/faces/'
  );

  return (
    <div
      ref={containerRef}
      style={{ width: '400px', height: '400px', position: 'relative' }}
    >
      {currentImage && (
        <img
          src={currentImage}
          alt="Following face"
          style={{
            width: '100%',
            height: '100%',
            objectFit: 'cover',
            borderRadius: '50%' // Make it circular!
          }}
        />
      )}
      {isLoading && <p>Loading...</p>}
      {error && <p>Error: {error.message}</p>}
    </div>
  );
}
```

If you change the generation parameters, update these constants in `useGazeTracking.js`:

```js
// Must match your generation parameters!
const P_MIN = -15;  // Same as --min
const P_MAX = 15;   // Same as --max
const STEP = 3;     // Same as --step
const SIZE = 256;   // Same as --size
```

To serve images from a different folder, pass a custom `basePath`:

```jsx
<FaceTracker basePath="/assets/my-face/" />
```

You can customize the look via `FaceTracker.css`:

```css
/* FaceTracker.css */
.face-tracker {
  width: 100%;
  height: 100%;
  position: relative;
  overflow: hidden;
  background: #f0f0f0;
  border-radius: 50%; /* Circular face */
  box-shadow: 0 10px 30px rgba(0,0,0,0.2);
}

.face-image {
  user-select: none;
  pointer-events: none;
}

.face-debug {
  position: absolute;
  top: 10px;
  left: 10px;
  background: rgba(0, 0, 0, 0.7);
  color: white;
  padding: 10px;
  border-radius: 5px;
  font-family: monospace;
  font-size: 12px;
}
```

- Preload images (optional):
```jsx
useEffect(() => {
  // Preload all face images
  const images = [];
  for (let py = -15; py <= 15; py += 3) {
    for (let px = -15; px <= 15; px += 3) {
      const img = new Image();
      img.src = `/faces/gaze_px${px}_py${py}_256.webp`;
      images.push(img);
    }
  }
}, []);
```

- Use fewer images - Generate with a larger `--step` value (e.g., `--step 5`)
The component automatically supports touch events! The face will follow finger movement on mobile devices.
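Conceptually, the hook normalizes the pointer position inside the container to 0..1 on each axis and snaps it to the nearest grid value. A Python sketch of that mapping (illustrative only; the real logic lives in `useGazeTracking.js`):

```python
def snap(norm_x, norm_y, p_min=-15, p_max=15, step=3):
    """Snap a normalized cursor position (0..1 per axis) to the nearest gaze grid value."""
    def axis(t):
        raw = p_min + t * (p_max - p_min)                  # scale into the gaze range
        return p_min + step * round((raw - p_min) / step)  # snap to the grid
    return axis(norm_x), axis(norm_y)

# Cursor at the center of the container -> looking straight ahead
print(snap(0.5, 0.5))  # (0, 0)
```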
Check your API token:

```bash
echo $REPLICATE_API_TOKEN
```

Verify Replicate credits:

- Check your account at replicate.com/account
- Free tier includes some credits, but may require payment for large batches

Resume interrupted generation:

```bash
python main.py --image ./my_face.jpg --out ./out --skip-existing
```

If the face isn't showing:

- Verify images are in the correct location:
  - Check that the `public/faces/` directory exists
  - Confirm images have the correct naming pattern
- Check the browser console for errors:
  - Missing images will show 404 errors
  - Path issues will show in the network tab
- Verify configuration matches:
  - `P_MIN`, `P_MAX`, and `STEP` in `useGazeTracking.js` must match your generation parameters
- Generate fewer images (use `--step 5` or higher)
- Reduce image size (use `--size 128`)
- Preload images (see Performance Optimization above)
Replicate charges per second of GPU time, so a full run should only cost a couple of cents.
Check current pricing at replicate.com/pricing
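As a back-of-envelope estimate, total cost is the image count times GPU-seconds per image times the per-second rate. A sketch with purely hypothetical numbers (substitute the real rates from the pricing page):

```python
def estimate_cost(num_images, seconds_per_image, usd_per_gpu_second):
    """Rough generation cost in USD: images x GPU time each x price per second."""
    return num_images * seconds_per_image * usd_per_gpu_second

# 121 images at ~2 GPU-seconds each at a hypothetical $0.0001/s: about $0.02
print(round(estimate_cost(121, 2, 0.0001), 3))
```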
- Portfolio headers - Make your about page more engaging
- Interactive avatars - Add personality to chat interfaces
- Product demos - Draw attention to important elements
- Educational content - Create attention-grabbing tutorials
- Games - Use as character faces or NPCs
See the examples folder for:
- Full Next.js implementation
- TypeScript version
- Circular face mask
- Multiple faces on one page
Contributions welcome! Feel free to:
- Add new features
- Improve documentation
- Share your implementations
- Report bugs
MIT License - feel free to use in personal and commercial projects!
- Face generation powered by Replicate
- Uses fofr/expression-editor model
- Created with ❤️ by Kylan
- Face Looker Replicate Model - Generate all images with one click
- Replicate API Docs
- Expression Editor Model
- Live Demo
Questions? Open an issue or contact @kylancodes on X
Like this project? Give it a ⭐ on GitHub!