Published April 2026
There's a lot of noise around AI right now. Every platform, every demo, every LinkedIn post makes it look like the whole industry is about to be replaced overnight. And if you're a brand or a marketing team trying to make sense of it, it's hard to know what's real and what's just a good sales pitch.
I was testing one of the major AI platforms on fashion visuals, and the results were solid. A few odd outputs, a fair bit of token spend, but overall it looked promising.
One issue stood out. The crowds still looked off.
The platform responded that it was a temporary problem and would be solved within six months. That's the line you hear a lot: "it'll be fixed soon," "just wait for the next update," "this is just the early version." And to be fair, AI is improving. But "it'll be better soon" isn't a production strategy. You can't launch a campaign on a promise.
It wasn't fixed then, and the fundamental issue hasn't gone away, because AI doesn't really understand the world volumetrically. It's predicting based on training data, making a best guess that leans heavily on what it's seen before. That's why certain things like crowds, movement, or anything spatially complex still break down.
The exception is taking an AI-first approach and accepting what it gives you. That can work, but it's a completely different way of working.
AI can produce strong results in static work, especially when nothing needs to hold up under close inspection. For animation, where things need to behave properly over time, it's still not there yet.
AI works best early on. It's fast for testing ideas, exploring styles, and building rough directions. You can try things quickly and move forward with what works, without committing production time to something that might not land.
The problems show up when things need to be right. If the product needs to be accurate, if consistency matters across multiple shots, or if visuals need to hold up commercially, AI starts to fall apart. One strong image is easy. A full campaign is harder.
Even when AI generates 3D assets directly, the output isn't something you can use. Messy geometry, overlapping faces, broken topology that falls apart when you animate or light it. Materials baked in rather than controllable. Nothing built to a consistent standard you can drop into a real pipeline.
And if you want to get really technical, AI can't generate 16-bit data. That doesn't matter until you need to grade the footage or you've got dark areas in the shot, and then banding shows up fast with very little you can do about it in post. In a proper CGI pipeline you're rendering 16-bit or 32-bit linear, which gives you all the headroom you need. The only people in a position to train AI on 32-bit linear data are the likes of James Cameron and ILM, and I'm sure they're working on it, but you won't be getting access to those tools or that IP anytime soon.
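To make the banding point concrete, here's a rough illustrative sketch (not from any real pipeline) using Python and NumPy: quantize a smooth dark gradient at 8-bit and 16-bit depth, apply a ×10 shadow lift as a stand-in for a grade, and count how many distinct levels survive. The fewer the levels, the more visible the banding.

```python
import numpy as np

# A smooth dark gradient in linear light: 0.0 to 0.05 across 10,000 samples
gradient = np.linspace(0.0, 0.05, 10000)

def quantize(values, bits):
    """Round each value to the nearest level representable at this bit depth."""
    levels = 2 ** bits - 1
    return np.round(values * levels) / levels

# Simulate a heavy shadow lift in the grade (x10 gain) after quantization
graded_8 = quantize(gradient, 8) * 10
graded_16 = quantize(gradient, 16) * 10

# Count distinct output values: few steps = visible banding
print(len(np.unique(graded_8)))   # 8-bit: only 14 distinct levels in the shadows
print(len(np.unique(graded_16)))  # 16-bit: 3278 distinct levels
```

Fourteen steps stretched across a dark region of frame is exactly the staircase pattern you see in banded footage, and no amount of post work puts the missing levels back.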
A pattern we keep seeing: a client or agency generates something that looks great as a concept (a still image, a rough 3D model, a visual direction) and then realises it can't actually be used. It can't be animated, it can't be lit properly, it doesn't hold up at the resolution they need, or it doesn't match the product accurately enough to use commercially.
So the work ends up being rebuilt properly in Cinema 4D and Redshift, keeping the creative intent but making it actually function. That's becoming a bigger part of the workflow. The strong concepts come from AI, and those ideas get rebuilt in 3D with proper geometry, materials that behave under lighting, and assets you can reuse across a whole campaign without things falling apart.
Take our Checkpoint Digital Product Passport film. The product, environment, lighting, and camera work were all built properly in 3D, everything controllable down to the smallest detail. The background crowds, though, were AI-generated. This is where the crowd limitation from earlier works in your favour: in the background, they didn't need to hold up under close inspection. They just needed to add life to the scene without adding weeks to production.
That's where AI works well. Supporting the pipeline, not replacing it. View the project.
If speed matters, you're early in the process, or the asset isn't the hero, AI can help. If the work needs to be accurate, consistent, and hold up commercially, CGI is still the better option.
Used well, AI adds speed as part of a hybrid workflow. Used badly, it creates more problems than it solves.
Use AI to explore. Use CGI to deliver.
If you're weighing up AI, CGI, or a mix of both, tell us what you're working on. Most clients hear back the same day.
Weasel Creative is a UK-based 3D animation studio specialising in 3D product animation, motion design, and CGI video production. Based in Farnham, Surrey, we work with B2B and B2C brands across manufacturing, retail technology, FMCG, and electronics to create clear, high-quality visual content. Our services include CGI product films, FOOH campaigns, ecommerce CGI, trade show content, industrial animation, product visualisation, CGI explainer videos, and motion graphics for marketing, product launches, and technical communication. We serve brands and agencies across the UK including London, Surrey, Manchester, Birmingham, Bristol, Leeds, Edinburgh, Brighton, Cambridge, Glasgow, Nottingham, Sheffield, Oxford, and Cardiff. Clients include Disney, Diageo, Focusrite, Novation, Checkpoint Systems, and Amazon. Built in Cinema 4D and rendered in Redshift — cinematic 3D animation for marketers and brands that sell.