Ten days in. The messy, improvised production process we've been inventing as we go is starting to look like an actual pipeline.
Not a polished one. Not the kind you'd diagram on a whiteboard for investors. But a pipeline where each step produces something the next step can use, and the whole thing moves in one direction: toward a finished movie.
The Pipeline, As It Actually Exists
Here's what we've built, step by step, over the last ten days:
- Screenplay → Scene breakdowns with shot descriptions and character notes
- Character design → Approved turnaround sheets for all seven characters
- Storyboards → AI-generated panels using character refs for consistency
- 3D models → First-pass geometry for all characters via TRELLIS, Hunyuan3D, and Meshy
- PBR textures → Proper material maps (diffuse, roughness, normal) via Meshy API
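The key property of the steps above is that each one's output is a valid input for the next. A toy sketch of that idea, with stage and artifact names invented for illustration (this isn't our actual tooling):

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    consumes: str  # artifact type this stage needs
    produces: str  # artifact type it emits

# Illustrative names only; the real pipeline lives in scripts and folders.
PIPELINE = [
    Stage("breakdown",   consumes="screenplay",        produces="scene_breakdowns"),
    Stage("design",      consumes="scene_breakdowns",  produces="turnaround_sheets"),
    Stage("storyboards", consumes="turnaround_sheets", produces="storyboard_panels"),
    Stage("modeling",    consumes="turnaround_sheets", produces="mesh_geometry"),
    Stage("texturing",   consumes="mesh_geometry",     produces="pbr_material_maps"),
]

def validate(stages):
    """Check that each stage's input is produced upstream (or is the root artifact)."""
    available = {stages[0].consumes}
    for s in stages:
        if s.consumes not in available:
            raise ValueError(f"{s.name} needs {s.consumes}, which nothing upstream makes")
        available.add(s.produces)
    return True
```

Note that "pipeline" doesn't mean strictly linear: the turnaround sheets feed both storyboarding and 3D modeling.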
Each of these steps was a separate problem that we solved independently. But now they connect. The turnaround that anchors a character's 2D consistency is the same image that generates their 3D model. The storyboard panels that show the scene composition will eventually guide camera placement in 3D scenes built from these models.
PBR Textures: Making Models Look Real
Yesterday's 3D models had basic color but no material properties. Today, Gabe and Nina got proper PBR (Physically Based Rendering) textures through the Meshy API. That means their surfaces now respond to light the way real materials do. Gabe's glasses reflect. Nina's dress has the right sheen. Skin looks like skin, not painted plastic.
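Concretely, "has PBR materials" means a character ships with a set of texture maps rather than a single color image. A minimal sketch of a sanity check for those maps, assuming a per-character folder and PNG file names we made up for illustration (the actual Meshy output naming may differ):

```python
from pathlib import Path

# The three map types mentioned above; file names are an assumed convention.
PBR_MAPS = ("diffuse", "roughness", "normal")

def missing_maps(character_dir):
    """Return which PBR map files a character's texture folder is missing."""
    d = Path(character_dir)
    return [m for m in PBR_MAPS if not (d / f"{m}.png").exists()]
```

A check like this is cheap insurance: a character with a diffuse map but no roughness or normal map will render, but it will render like 2005.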
This matters because when we eventually render these characters in scenes with actual lighting, the difference between "has a texture" and "has PBR materials" is the difference between looking like a video game from 2005 and looking like a modern animated film. We're not at Pixar quality. But we're on the right road.
The Leo Problem, Continued
Leo's 3D model got a second pass through the Meshy pipeline today. The flat-Leo issue from yesterday was addressed by feeding Meshy not just the turnaround sheet but additional reference angles extracted from storyboard panels. More input angles means more information about depth, and the result is a model that actually looks like a kid instead of a cardboard cutout of one.

We also built a proper Meshy API integration script, so future characters can go through the same pipeline without manual setup each time. The script handles image upload, job creation, polling for completion, and downloading the result. One command, one model. That's the kind of automation that pays for itself immediately.
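The heart of a script like that is the poll loop. Here's a minimal, dependency-free sketch of just that part; the status names and response fields are assumptions for illustration, not the official Meshy schema (their API docs are the source of truth), and the upload, job creation, and download steps would wrap around this as HTTP calls:

```python
import time

def poll_task(fetch_status, poll_every=10, timeout=1800, clock=time.time, sleep=time.sleep):
    """Poll a remote task until it succeeds, fails, or times out.

    fetch_status: callable returning a dict like
        {"status": "IN_PROGRESS" | "SUCCEEDED" | "FAILED", "model_urls": {...}}
    Field and status names here are illustrative, not the real Meshy response schema.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        task = fetch_status()
        if task["status"] == "SUCCEEDED":
            return task["model_urls"]["glb"]
        if task["status"] == "FAILED":
            raise RuntimeError("task failed")
        sleep(poll_every)
    raise TimeoutError(f"task did not finish within {timeout}s")
```

Injecting `fetch_status` as a callable keeps the loop testable without hitting the network, which matters when the real jobs take minutes to finish.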
Website Fixes
A boring but necessary fix today: the Act 1 storyboard pages had a trailing slash bug in their image paths. Every image URL ended with an extra / that made R2 return a 404. The storyboard pages looked empty even though all the images were uploaded correctly.
It was the kind of bug that makes you feel dumb when you find it. Hours of "why aren't the images loading?" answered by one extra character in a URL template. Fixed it, force-refreshed the cache, and now Act 1 storyboards actually display on the website. Visitors can click through all ten scenes and see the story unfold panel by panel.
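For the curious, the one-character fix looks roughly like this (a hypothetical reconstruction with a made-up URL, not our actual template code):

```python
def clean_image_url(raw):
    """Strip the stray trailing slash that turned valid R2 object keys into 404s."""
    return raw.rstrip("/")
```

R2 resolves objects by exact key, so `panel-01.png/` and `panel-01.png` are simply different keys, and only one of them exists.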
Ten Days of Lessons
We're a third of the way through what I loosely think of as "pre-production month." Here's what I know now that I didn't know on day one:
- Character consistency is the hardest problem. Not generating individual good images—that's almost easy now. Making the same character look the same across hundreds of panels and multiple 3D tools is the real fight.
- AI agents can do production work, but they need structure. Small, focused tasks with clear acceptance criteria. Big vague tasks produce big vague results.
- Multiple AI tools beat any single one. TRELLIS for initial 3D geometry. Meshy for textures and problem cases. Hunyuan3D for unusual characters. Gemini for 2D generation. Each tool has a sweet spot.
- The boring infrastructure matters most. Asset storage, automated manifests, consistent URL patterns, working navigation. The exciting stuff (character design, 3D models) can't ship without the boring stuff working.
- Sleep is a feature, not a bug. The overnight autonomous sessions are genuinely productive. Setting up work for agents before bed and reviewing results in the morning is a real workflow now.
What's Next
Rigging. The 3D models need internal skeletons so they can be posed and animated. This is traditionally one of the most tedious parts of 3D character production, and it's where AI tools are least mature. We'll see how far automated rigging gets us and where we need manual work.
Also on deck: extending the storyboard consistency pass to Acts 2 and 3, starting environment modeling for key locations (the family home, the magic minivan, the Jurassic swamp), and getting the Instagram account actually posting.
The pipeline is real. Now we feed it.