What’s your current process when working on design projects that incorporate AI tools?
Every project starts with the same foundation: I map the brief to the toolchain. What exactly do I need? Image generation? Motion? Sound? VO? Compositing? Once I have that mapped out, I lock the visual language – brand colors, lighting mood, and what I call “camera grammar.”
If the project includes a real-world location or product hero, I generate the keyframes first: the hero shots, close-ups, and the most expressive angles. These are the anchor points, and everything else is built around them.
Prompts aren’t just lines of text – they’re mini director’s notes. I include lens type, movement direction, lighting temperature, wardrobe cues, product placement… everything. Each AI model has a different strength, so I assign tasks accordingly. Once generation is done, I treat the assets like live-action footage: I color-grade, match cuts, drop in SFX and VO, QC for brand fidelity – and then deliver.
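To make that concrete, here’s a minimal sketch of how a shot’s director’s notes could live as structured data instead of a loose string. The Python class, field names, and `render` helper are hypothetical illustration, not any tool’s API; only the cues themselves (lens, movement, lighting, wardrobe, product placement) come from the workflow described above.

```python
from dataclasses import dataclass

@dataclass
class ShotPrompt:
    """One shot's director's notes, flattened into a single prompt string."""
    subject: str            # what's in frame
    lens: str               # e.g. "85mm lens, shallow depth of field"
    movement: str           # e.g. "low-angle dolly-in"
    lighting: str           # e.g. "warm tungsten practicals"
    wardrobe: str = ""      # optional wardrobe cues
    product_note: str = ""  # optional product placement note

    def render(self) -> str:
        # Keep only the cues that are filled in, comma-separated.
        parts = [self.subject, self.lens, self.movement,
                 self.lighting, self.wardrobe, self.product_note]
        return ", ".join(p for p in parts if p)

print(ShotPrompt(
    subject="hero shot of a cast-iron pan on a marble bench",
    lens="85mm lens, shallow depth of field",
    movement="low-angle dolly-in",
    lighting="warm tungsten practicals",
    product_note="logo on the handle facing camera",
).render())
```

Keeping the cues in named fields means nothing gets silently dropped between shots, which is the whole point of treating prompts like director’s notes.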
One campaign I’m proud of is House’s EOFY (end-of-financial-year) video. I generated cookware frames that looked exactly like their products, right down to the material finish. The video felt branded and premium, not like AI placeholders.
Are there any specific tools or platforms you rely on for AI-assisted design?
Absolutely – and the list evolves every month. Keeping up is part of the job.
For images, I rely on:
- Google Imagen 4 – the most consistent for photorealism
- Flux – great for fast reference and styleframes
For video, my go-tos are:
- Google Flow / VEO 3 – handles text-to-video and even includes sound and VO
- Higgsfield – excellent for controllable motion and keeping character consistency
- Kling 2.1 Master – incredibly sharp fidelity, but it doesn’t do audio, so I handle VO and scoring separately
It’s not about finding one perfect tool. It’s about knowing which tool solves what problem, and building a modular, efficient pipeline from there.
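As a rough sketch of what “modular” means in practice, the routing below maps task types to the tools named above. The tool assignments mirror that list; the task keys and dispatch function are hypothetical.

```python
# Route each task type to the tool that handles it best.
# Tool names come from the lists above; the task keys and
# dispatch function are hypothetical.
PIPELINE = {
    "photoreal_stills":    "Google Imagen 4",
    "styleframes":         "Flux",
    "text_to_video":       "Google Flow / VEO 3",  # includes sound and VO
    "controlled_motion":   "Higgsfield",
    "high_fidelity_video": "Kling 2.1 Master",     # no audio: VO and scoring handled separately
}

def route(task: str) -> str:
    if task not in PIPELINE:
        raise ValueError(f"No tool assigned for task: {task}")
    return PIPELINE[task]

print(route("controlled_motion"))  # -> Higgsfield
```

Swapping a tool out when a better one ships is a one-line change, which is what keeps the pipeline current month to month.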
How do you maintain brand consistency and creative integrity when using generative tools?
You don’t hope for consistency – you enforce it.
I embed brand DNA directly into the prompts: color palettes, typography cues, emotional tone, product names. I also maintain a prompt library for each client so I can replicate brand fidelity across campaigns.
Today, continuity is still one of AI’s weakest spots. That means your prompts have to be painfully specific – like explaining your shot to a five-year-old. I use real camera language in my prompts: “35mm lens, low-angle dolly-in, tungsten practicals.” That kind of specificity gives the model the context it needs.
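A per-client prompt library like that can be as simple as a keyed collection of brand DNA that gets injected into every shot prompt. The sketch below is a hypothetical illustration (the client entry, fields, and helper are mine, and the brand values are placeholders); the idea of storing palette, tone, and camera grammar per client is the one described above.

```python
# Hypothetical per-client prompt library: brand DNA lives in one
# place and is injected into every shot prompt, so fidelity
# carries across campaigns. Values below are placeholders.
BRAND_LIBRARY = {
    "house": {  # illustrative entry for the House campaign mentioned earlier
        "palette": "deep navy with warm copper accents",
        "tone": "premium, warm, domestic",
        "camera_grammar": "35mm lens, low-angle dolly-in, tungsten practicals",
    },
}

def brand_prompt(client: str, shot: str) -> str:
    dna = BRAND_LIBRARY[client]
    return (f"{shot}, {dna['camera_grammar']}, "
            f"color palette: {dna['palette']}, mood: {dna['tone']}")

print(brand_prompt("house", "close-up of a stainless saucepan on a gas hob"))
```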
And of course, I QC every single shot. Color, shape, geometry, logo placement, emotional tone – it all has to align. If I wouldn’t approve it on a real set, I won’t approve it here.
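That QC pass can be tracked as a simple per-shot checklist. The criteria below are the ones just listed; the tracking structure itself is a hypothetical sketch, since the actual sign-off is a human judgment call.

```python
# Per-shot QC checklist. The criteria come from the answer above;
# the pass/fail tracking is a hypothetical sketch: the real
# sign-off is a human call, not an automated check.
QC_CRITERIA = ["color", "shape", "geometry", "logo placement", "emotional tone"]

def qc_shot(shot_id: str, results: dict) -> bool:
    failures = [c for c in QC_CRITERIA if not results.get(c, False)]
    if failures:
        print(f"{shot_id}: REJECT ({', '.join(failures)})")
        return False
    print(f"{shot_id}: APPROVED")
    return True

qc_shot("eofy_hero_01", {c: True for c in QC_CRITERIA})
```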