
    Motion Design in the Age of AI: What Still Needs a Human Touch?


    AI didn’t replace my job as a motion designer – it gave me a superpower.

    Today, I’m able to generate high-quality visuals, storyboards, and even sound design in a fraction of the time it used to take. But that speed only matters if the final product still feels intentional, on-brand, and emotionally sharp. That’s where the human touch still matters. A lot.

    What’s your current process when working on design projects that incorporate AI tools?

    Every project starts with the same foundation: I map the brief to the toolchain. What exactly do I need? Image generation? Motion? Sound? VO? Compositing? Once I have that mapped out, I lock the visual language – brand colors, lighting mood, and what I call “camera grammar.”

    If the project includes a real-world location or product hero, I generate the keyframes first: the hero shots, close-ups, and the most expressive angles. These are the anchor points, and everything else is built around them.

    Prompts aren’t just lines of text – they’re mini director notes. I include lens type, movement direction, lighting temperature, wardrobe cues, product placement… everything. Each AI model has a different strength, so I assign tasks accordingly. Once generation is done, I treat the assets like live-action footage: I color-grade, match cuts, drop in SFX and VO, QC for brand fidelity – and then deliver.
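
    To make that concrete, here’s a minimal sketch of what a prompt-as-director-notes could look like as a reusable template. Everything here is hypothetical and purely illustrative – the field names, the ShotPrompt class, and the example shot are not part of any real tool or workflow; the point is just the level of detail a prompt carries.

    ```python
    # Hypothetical "director notes" prompt template (illustration only).
    from dataclasses import dataclass

    @dataclass
    class ShotPrompt:
        subject: str        # what the shot is about
        lens: str           # e.g. "35mm lens"
        movement: str       # e.g. "low-angle dolly-in"
        lighting: str       # e.g. "tungsten practicals, warm key"
        wardrobe: str = ""  # wardrobe cues, if people are in frame
        product: str = ""   # product placement notes

        def build(self) -> str:
            """Flatten the director notes into one text prompt."""
            parts = [self.subject, self.lens, self.movement, self.lighting]
            parts += [p for p in (self.wardrobe, self.product) if p]
            return ", ".join(parts)

    # Example: a product hero shot briefed the way a DP would brief it.
    hero = ShotPrompt(
        subject="stainless-steel saucepan on a marble counter",
        lens="35mm lens",
        movement="slow low-angle dolly-in",
        lighting="tungsten practicals, soft key from camera left",
        product="logo facing camera, brushed-metal finish visible",
    )
    print(hero.build())
    ```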

    One campaign I’m proud of is House’s EOFY video. I generated cookware frames that looked exactly like their products, right down to the material finish. The video felt branded and premium, not like AI placeholders.


    Are there any specific tools or platforms you rely on for AI-assisted design?

    Absolutely – and the list evolves every month. Keeping up is part of the job.

    For images, I rely on:

    • Google Imagen 4 – the most consistent for photorealism
    • Flux – great for fast reference and styleframes

    For video, my go-tos are:

    • Google Flow / VEO 3 – handles text-to-video and even includes sound and VO
    • Higgsfield – excellent for controllable motion and character consistency
    • Kling 2.1 Master – incredibly sharp fidelity, but it doesn’t do audio, so I handle VO and scoring separately

    It’s not about finding one perfect tool. It’s about knowing which tool solves what problem, and building a modular, efficient pipeline from there.
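
    As a rough illustration of that modular idea, here’s a hypothetical task-to-tool routing table. The mapping mirrors the strengths listed above, but the code itself is just a sketch – the task names and the route function are invented for this example, not part of any real pipeline.

    ```python
    # Hypothetical routing table for a modular AI pipeline: each task type
    # goes to the tool that handles it best (per the strengths above).
    TOOL_FOR_TASK = {
        "photoreal_image": "Google Imagen 4",
        "styleframe": "Flux",
        "text_to_video_with_audio": "Google Flow / VEO 3",
        "controllable_motion": "Higgsfield",
        "high_fidelity_video": "Kling 2.1 Master",  # audio handled separately
    }

    def route(task: str) -> str:
        """Pick a tool for a task, failing loudly if the brief maps to nothing."""
        try:
            return TOOL_FOR_TASK[task]
        except KeyError:
            raise ValueError(f"No tool assigned for task: {task!r}")

    print(route("controllable_motion"))  # -> Higgsfield
    ```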


    How do you maintain brand consistency and creative integrity when using generative tools?

    You don’t hope for consistency – you enforce it.

    I embed brand DNA directly into the prompts: color palettes, typography cues, emotional tone, product names. I also maintain a prompt library for each client so I can replicate brand fidelity across campaigns.
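
    One way to picture such a prompt library – a hypothetical structure, not a description of any specific tool or client: brand DNA is stored once per client and merged into every shot prompt. The client key, palette, and product names below are all invented for illustration.

    ```python
    # Hypothetical per-client prompt library: brand DNA is stored once and
    # prepended to every shot prompt so fidelity carries across campaigns.
    BRAND_LIBRARY = {
        "acme_cookware": {  # illustrative client key
            "palette": "deep navy and brushed steel, warm highlights",
            "typography": "clean geometric sans-serif overlays",
            "tone": "premium, calm, confident",
        },
    }

    def branded_prompt(client: str, shot_description: str) -> str:
        """Prepend a client's brand DNA to a shot-level prompt."""
        dna = BRAND_LIBRARY[client]
        return (
            f"{shot_description}. Palette: {dna['palette']}. "
            f"Type: {dna['typography']}. Tone: {dna['tone']}."
        )

    print(branded_prompt("acme_cookware", "close-up of a pan searing butter"))
    ```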

    Today, continuity is still one of AI’s weakest spots. That means your prompts have to be painfully specific – like explaining your shot to a five-year-old. I use real camera language in my prompts: “35mm lens, low-angle dolly-in, tungsten practicals.” That kind of specificity gives the model the context it needs.

    And of course, I QC every single shot. Color, shape, geometry, logo placement, emotional tone – it all has to align. If I wouldn’t approve it on a real set, I won’t approve it here.


    Are there any tasks you now completely offload to AI – and ones you never would?

    Yes – and the line between human and AI tasks is getting clearer every day.

    I fully offload:

    • B‑roll style inserts
    • Atmospheric cutaways
    • Quick previs/storyboards
    • Background extras
    • Cleanup passes
    • “What-if” style explorations

    I never offload (and maybe never will):

    • Seamless motion graphics
    • Tight data-driven UI
    • Precise product hero shots
    • Musical timing
    • Intentional imperfections
    • Human creativity

    AI can generate options. I make the calls. It accelerates – but I direct. The original idea is always human.

    Has working with AI changed how you approach ideation or storyboarding?

    Yes, without a doubt.

    The bottleneck used to be visualization. Now, I can show clients the vibe in a few hours instead of days. That means we focus on the idea, not whether they can picture it in their head. Plus, it speeds up iteration – bad ideas get killed faster, good ones evolve quicker.

    But AI is still just a tool. You have to learn how to use it, or it’ll end up using you.

    What’s the biggest mistake designers make when incorporating AI into their workflows?

    Treating AI like a gimmick instead of footage.

    Too many people drop in “cool” AI shots that don’t match the sequence or the brand. You have to ask: if this was from a real shoot, would I approve it? If not, why is it okay just because it’s AI?

    And then there’s the issue of lazy prompting. If you’re not briefing the model like a Director of Photography (DP) and an art director, you’re going to get generic sludge. You can’t half-ass it and expect magic.

    What can AI do really well when it comes to motion design – and where does it still fall short?

    Right now, AI is crushing speed and variety. Look development, visual gap-filling, background shots, rapid iterations – it’s a brute-force machine.

    Where it stumbles:

    • Narrative intent. It doesn’t understand why a shot exists.
    • Scene-to-scene continuity. Same character, same outfit, same lighting? Still very hit-or-miss.
    • Taste. It doesn’t know when not to use the glossy version. Sometimes the best choice has a little grit or imperfection.

    We’re heading toward near-perfect visual fidelity. But without a human’s POV, everything starts to look the same. If you don’t have a creative compass, your work just blends into the AI noise.

    Do you think AI has made you a better designer? If so, how?

    Definitely. Because I stopped fearing it.

    AI helps me visualize scenes and storyboards I could never shoot – or even find in stock libraries. It pushes me to clarify my ideas earlier. And most importantly, it frees up my time so I can focus on the stuff that really matters: timing, emotion, precision, and creativity.

    It’s less grunt work and more brain work. And that’s the real win.

    Bonus Quick-Fire Section

    1. Q. Favorite AI tool right now?

      A. Google Flow (VEO 3 – Imagen 4)

    2. Q. Biggest myth about AI in design?

      A. “That it will replace us.” It won’t – unless you let yourself be generic. AI just raised the standard. Creativity, taste, and brand literacy are now the differentiators.

    3. Q. Most underrated creative skill AI will never replace?

      A. Making things feel alive – emotional timing, intentional imperfection. Humans connect to flaws; AI loves sterile perfection.

    4. Q. One word to describe your creative process with AI?

      A. Superpower. 

    Final Thoughts: Creativity Is the Differentiator

    AI made me a better designer – not because it replaced me, but because it freed me. I spend less time on grunt work and more time on what makes the work actually good: timing, emotional payoff, design clarity.

    It forced me to get clear about ideas earlier. It helps me visualize shots I could never afford to shoot. And it sharpened the line between what’s generative and what’s genuinely creative.

    Creativity, taste, and brand literacy are now the differentiators – not technical ability. AI raised the bar. It’s up to us to rise above it.
