Your organisation probably has an AI policy. Your suppliers – photographers, illustrators, stock libraries – almost certainly have terms governing how their work can be used. Your designer is almost certainly using some AI tools as a routine part of their process. The chances of all three being aligned are low. And that can feel like a big grey area, especially for organisations such as charities, public sector bodies, educational institutions and purpose-led brands.
It isn’t usually a disaster, and there is definitely no need to panic. But as AI becomes more embedded in standard design tools, it’s worth understanding where the gaps are before they become a problem on your next design project.
Your suppliers’ terms
Assets come into design projects from a range of sources. Photography you commissioned, illustration created for a report, stock images licensed for a specific use, brand assets supplied by a partner organisation. The agreements governing them may have been written before generative AI was a practical tool. In many cases, they won’t mention AI at all.
Where they do mention it, the language is often broad. A flat ‘No’ on AI use sounds clear until you try to apply it to specific situations. Did the photographer mean their images can’t be processed by any AI tool? Does that include standard retouching? Or were the terms designed to prevent something more serious, such as their work being fed into a training dataset, or used to generate imitations of their style? The gap between what was intended, what the agreement says, and what production often requires is worth understanding, because your designer is probably working in that gap right now.
Your own AI policy
Most organisations’ AI policies are written to govern how staff use tools like Copilot or ChatGPT: what they can and can’t ask them to do, what data can be shared with them, what outputs can be used and how. That’s an understandable place to start, but it leaves a blind spot: what external suppliers are doing with AI on your behalf.
If your policy says your organisation doesn’t use AI to produce creative outputs, does that cover what your designer does in their software? If it restricts the use of personal data with AI tools, does anyone know whether images are being processed by generative tools as part of a normal design or development workflow?
Here’s an example that came up during a recent project.
Generative Expand is an AI tool built into Adobe Photoshop that most designers use regularly as part of their workflow. It can extend an image in one click, generating new pixels that are seamless and indistinguishable from the original. The concept isn’t new: it’s the kind of thing designers have been doing manually for decades to get images to sit better on pages, or to create space for a headline or text over an image. But now AI is generating that new content within or around a supplied asset, and that feels different somehow. It’s also where an agreement or policy that predates AI, or is silent on it, starts to fall apart.
Adobe has positioned it as ‘commercially safe’ – the underlying model is trained on licensed Adobe Stock images and public domain content to avoid copyright issues – but that only covers Adobe’s liability. It doesn’t resolve the question of what your photographer’s contract says about their images being processed by AI, or what your organisation’s own AI policy covers.
A less obvious example: Adobe InDesign now auto-generates alt text for placed images using AI. This is useful for accessible PDF production, but the default output includes an “AI-generated” tag that gets embedded in the document and read out by screen readers. If nobody notices and disables it, anyone using assistive technology to access your publication will hear that tag. For an organisation with a policy about AI-generated content, or one that has made commitments about how AI is used in its communications, that’s a problem. Though I’m not sure anyone wants that read out in their documents, really!
Before your next project
It goes without saying that all of this is worth reviewing with a qualified lawyer – and I am not one. But there are a few things worth thinking about and looking into before a project starts:
- What assets are being supplied, and do you know what the terms covering them say about AI?
- Does your organisation’s AI policy account for what external designers and suppliers do on your behalf, or only what your own team does directly?
- Have you told your designer about any restrictions – in your own policy or your suppliers’ terms – that might affect how they work?
- Do you know whether, and how, your designer is using AI in their work, so you can check that against your policies and contracts?
- Are there assets in this project involving identifiable people, sensitive contexts, or partner brand guidelines that need particular care?
- If your agreements are silent on AI, is that a conversation worth having with your supplier before work starts rather than after?
This is an area where there aren’t clean universal answers yet. If you’re commissioning a project and any of this feels relevant or unresolved, raise it early. Or get in touch, and we can work out what makes sense for your specific situation.
