McArthurGlen · 2024 — 2025

AI Photo Production.

One studio shoot trains a per-talent diffusion model. From there, a stylist art-directs every new campaign through a structured prompt schema — stills first, then animated — without bringing the talent back.

Still Generation · Motion Generation · Fine-Tuning · Fashion / Retail

12 actors captured · one Amsterdam studio shoot
20+ LoRAs trained · one reusable likeness per talent
500 images per dataset · pose, wardrobe, lighting variation
Flux.1 dev base model · fine-tuned per talent
From studio day to digital twin
01

One studio day. Every campaign that follows.

McArthurGlen runs a roster of named talent across seasonal campaign cycles. The status quo was a fresh shoot per talent per campaign — booking the talent, the location, the wardrobe, and the crew, every time. We replaced that with a single annotated studio day per talent, captured for training.

From the trained likeness, a stylist art-directs every new campaign — pose, wardrobe, location, mood — without bringing the talent back. Stills first, then animated through an image-to-video pass. Production economics shift from cost-per-shoot to cost-per-prompt; cadence shifts from one campaign per booking to dozens per week.

Studio shoot — diverse poses, outfits, lighting stages
02

The talent doesn’t come back. The likeness does.

The first campaign earns the studio day. Every campaign after that is a digital twin operation — same likeness, new pose, new wardrobe, new setting, no re-booking.

Per-talent LoRAs trained on a structured dataset give the team a controllable version of every model on the roster. The brand keeps full art direction; production economics shift from cost-per-shoot to cost-per-prompt.

Original studio reference · real shoot
LoRA generation
03
01

Preparation

A comprehensive Amsterdam studio shoot captures each talent across diverse poses, outfits, and lighting stages. An annotation pass produces a clean, schema-aligned training dataset.
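The annotation taxonomy itself isn't spelled out here; as an illustration, each captured frame might carry a record along these lines (field names and values are assumptions, not the production schema):

```python
# Hypothetical annotation record for one captured frame -- the field
# names are illustrative, not the production taxonomy.
import json

record = {
    "talent_id": "talent_07",     # maps to this talent's LoRA
    "file": "stage2/look_04/frame_0113.jpg",
    "pose": "three-quarter, weight on back leg",
    "wardrobe": "camel overcoat, white sneakers",
    "lighting": "stage 2, large softbox, low fill",
    "caption": "photo of talent_07 in a camel overcoat, studio lighting",
}

# One JSON line per frame keeps a 500-image dataset streamable.
line = json.dumps(record)
```

Keeping pose, wardrobe, and lighting as separate fields (rather than one free-text caption) is what makes the dataset "schema-aligned": the same axes the stylist later controls at prompt time.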

02

Generation

A per-talent LoRA fine-tuned on the Flux.1 dev base model gives each model their own trigger token and a controllable likeness. Two modes: prompt-only for blue-sky exploration, prompt + control image for tight layout matching.
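The two modes can be sketched as request builders — a minimal illustration of the shape of a generation call, not the production API (function and field names are assumptions):

```python
# Sketch of the two generation modes as request configs.
# Field names ("prompt", "control_image") are illustrative.
def prompt_only(trigger: str, prompt: str) -> dict:
    """Blue-sky exploration: only the trigger token anchors identity."""
    return {"prompt": f"{trigger}, {prompt}", "control_image": None}

def prompt_with_control(trigger: str, prompt: str, control_image: str) -> dict:
    """Tight layout matching: a control image constrains composition."""
    return {"prompt": f"{trigger}, {prompt}", "control_image": control_image}

req = prompt_with_control(
    "tok_talent07",
    "red trench coat, rooftop at dusk",
    "layouts/campaign_layout.png",
)
```

Either way the trigger token leads the prompt, so identity stays fixed while everything after it is art direction.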

03

Adding life

An image-to-video pass animates the chosen stills. The same digital twin moves from print into motion without re-shooting a frame.

04
  • Dataset preparation & LoRA training at scale

    Owned the full pipeline from studio capture to trained model — annotation taxonomy, dataset curation, and the training run itself. Trained every per-talent LoRA on 8 GPUs in parallel, taking the roster from one trained likeness at a time to a full set in the same window.
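Fanning the roster across eight GPUs is, at its simplest, a round-robin over a fixed worker pool. A minimal sketch of that scheduling logic — the actual training command is stubbed out, and names here are illustrative:

```python
# Sketch: round-robin per-talent training runs over an 8-GPU pool.
# `launch_training` stands in for the real training command, which
# would shell out with CUDA_VISIBLE_DEVICES pinned per job.
import itertools
from concurrent.futures import ThreadPoolExecutor

GPUS = list(range(8))
talents = [f"talent_{i:02d}" for i in range(12)]

def launch_training(talent: str, gpu: int) -> str:
    # Real pipeline: subprocess.run(["train_lora", "--talent", talent],
    #                               env={"CUDA_VISIBLE_DEVICES": str(gpu)})
    return f"{talent} -> gpu{gpu}"

gpu_cycle = itertools.cycle(GPUS)
with ThreadPoolExecutor(max_workers=len(GPUS)) as pool:
    results = list(pool.map(launch_training, talents,
                            (next(gpu_cycle) for _ in talents)))
```

With 12 talents over 8 GPUs, the first eight runs start immediately and the remainder queue behind them — one wall-clock window instead of twelve sequential runs.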

  • ComfyUI workflows + cloud GPU environments

    Built the ComfyUI workflows the wider team used for generation, and stood up VM-based environments on L40S and H100 GPUs so artists could run them at scale without local hardware. Workflows and nodes auto-updated across the fleet whenever I shipped changes — every artist always running the latest version, no manual sync.

  • Custom ComfyUI nodes

    Built bespoke nodes the team needed but ComfyUI didn’t ship — including masking helpers and agentic-workflow alternatives to Griptape. Lower friction for stylists, no waiting on upstream.
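The masking and agentic nodes themselves aren't reproduced here, but the ComfyUI extension pattern they follow is standard: a class declaring its inputs, outputs, and entry point, registered in a module-level mapping. A minimal sketch with an illustrative string node:

```python
# Shape of a ComfyUI custom node. The node itself (prefixing a trigger
# token onto a prompt) is illustrative, not one of the production nodes.
class PrefixTriggerToken:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "trigger": ("STRING", {"default": ""}),
            "prompt": ("STRING", {"multiline": True}),
        }}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "run"            # method ComfyUI invokes
    CATEGORY = "studio/custom"  # where the node appears in the graph menu

    def run(self, trigger, prompt):
        # ComfyUI passes the wired-in inputs; outputs are always tuples.
        return (f"{trigger}, {prompt}",)

# Registering the class makes the node available in the graph editor.
NODE_CLASS_MAPPINGS = {"PrefixTriggerToken": PrefixTriggerToken}
```

Because a node is just a Python class in a custom_nodes package, team-specific helpers ship on the team's schedule rather than waiting on upstream ComfyUI releases.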

  • Structured prompt schema

    Authored the schema stylists fill in instead of writing prompts from scratch — identity, pose, clothing, setting, aesthetics, anchored to a per-talent trigger token. The reusable IP: once it works for one talent, every new digital twin reuses it.
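The exact schema isn't published here; as a sketch of the idea, the fields named above could be modeled like this (field names beyond those listed, and the rendering order, are assumptions):

```python
# Illustrative version of the structured prompt schema: stylists fill
# fields instead of free-writing prompts. Names are assumptions.
from dataclasses import dataclass

@dataclass
class CampaignPrompt:
    trigger: str     # per-talent trigger token from the LoRA
    identity: str
    pose: str
    clothing: str
    setting: str
    aesthetics: str

    def render(self) -> str:
        # Fixed field order keeps results comparable across talents:
        # swap the trigger token and the same brief restyles a new twin.
        return ", ".join([self.trigger, self.identity, self.pose,
                          self.clothing, self.setting, self.aesthetics])

p = CampaignPrompt(
    trigger="tok_talent07",
    identity="woman, mid-30s",
    pose="walking toward camera",
    clothing="navy wool suit",
    setting="glass atrium, morning light",
    aesthetics="editorial, 85mm, shallow depth of field",
)
```

That swap-the-trigger property is what makes the schema the reusable IP: the brief is written once per campaign, not once per talent.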
