Generative AI: The next creative revolution has already begun
First published on WARC
Oliver Feldwick of The&Partnership explores the recent history of AI and creativity, addresses its risks and lays out some rules for an AI-assisted advertising industry. If last year was defined by Web 3, NFTs and the metaverse, 2023 is shaping up to be the year of Generative AI. This article examines how the main areas of Generative AI can be worked into the creative process.
Why it matters
Generative AI has the potential to be the biggest disruption specifically to creative industries and roles in decades. It is not an option to ignore the coming changes. Great disruption brings conflict, opportunity, and challenge.
Takeaways
- There are several ways to combine Generative AI with workflows and create automated creative management platforms – connecting the whole process, from brief and inspiration, through creation and delivery, to distribution.
- But Generative AI is not perfect. AI systems don’t understand context, nuance or intent outside of the prompt and the learning data they are operating from, and so there’s a risk of creating convincing-looking nonsense.
- Other risks include IP/trademark issues and the real danger of algorithmic bias. The industry needs to embrace AI ethics to properly interrogate how these new tools work, the data they are using and the implications for the industry and society.
- Generative AI isn’t a plug-and-play miracle cure. However, it can be helpful across agencies and creative briefs right now. Experimenting today is the only way to get ready for bigger changes and evolutions tomorrow.
The revolution is near…
While Web 3 has seen a boom and quite a spectacular bust, there are reasons to think that Generative AI will have a more immediate, wider-reaching and deeper impact on our industry and our lives.
Generative AI is a relatively new term encompassing a set of key, rapidly improving technologies: AI models specifically designed to generate content from a prompt, typically trained on large datasets of images or text. The models and deep learning processes are technically complex, but the end-user experience is seductively simple. Users give a prompt to the AI model, usually via text, and it then generates multiple outputs for the user to choose from.
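To make that interaction concrete, below is a minimal sketch of the prompt-in, options-out loop using OpenAI’s Python client. The model name, prompt and parameters are illustrative assumptions rather than recommendations, and the same pattern applies to image tools such as DALL-E 2 or Midjourney.

```python
# A minimal sketch of the prompt-in, options-out loop, assuming OpenAI's
# Python client and a GPT-3-family completion model. The model name, prompt
# and parameters are illustrative, not recommendations.
import openai

openai.api_key = "YOUR_API_KEY"  # keep real keys out of source control

response = openai.Completion.create(
    model="text-davinci-003",   # GPT-3-family text model
    prompt="Write three playful taglines for a reusable coffee cup brand.",
    n=3,                        # ask for multiple outputs to choose from
    max_tokens=60,
    temperature=0.9,            # higher temperature gives more varied options
)

# The human's job is then curation: pick, combine or refine the candidates.
for i, choice in enumerate(response.choices, start=1):
    print(f"Option {i}: {choice.text.strip()}")
```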
The invention of computer graphics transformed our industry. Try to find an agency that doesn’t use Adobe tools, visual effects or digital editing.
The next generation of tools is being created right now. Fortune will favour those who learn how to embrace and use them.
So what is Generative AI anyway?
The main tools cover ‘text-to-image’ generation – with Midjourney, Stable Diffusion and DALL-E 2 as the most widely used – along with ‘generative text’ tools like ChatGPT and other GPT-3-based models. There are also tools for AI video manipulation, such as those from Synthesia or D-ID, and for synthetic video generation techniques (AKA deepfakes), which have huge practical application outside their more controversial uses.
The key underlying criterion is that they produce novel outputs, based on the input, that are qualitatively ‘good’, in a somewhat predictable and repeatable way. It is not enough to churn out random text, images and noise; the system must create something new rather than simply regurgitate other people’s work.
It’s all just a little bit of history repeating
Looking at previous creative revolutions, we can see similar patterns and draw important lessons – most recently, the revolutions of computer graphics and 3D animation.
As chronicled in Ed Catmull’s Creativity, Inc., or in more detail in Alvy Ray Smith’s A Biography of the Pixel, it is easy to forget how radical and how transformative this was.
Dire Straits’ Money for Nothing was one of the first music videos to use computer animation, and a precursor of what was still to come. Looking at it now, it’s impressive both for how good it looked in 1985 and for how far the technology has moved on in the generations since.
Dire Straits, Money For Nothing, 1985
Now photorealistic computer graphics are commonplace and an integral part of our creative culture:
Lightyear, 2022
As new technologies come in, they bring change, they disrupt the status quo, they require rethinking workflows, practices, and businesses.
When computer animation and graphics were being developed, those most hostile to them were traditional illustrators. 2D illustrators often dismissed the technology as a gimmick or distraction, insisting that a movie made by a computer could never convey true emotion.
Turns out, they were right to be worried: Disney pioneered hand-drawn 2D animation but has since announced it will no longer do conventional 2D animation, moving entirely to computer-generated 3D animation.
The reasons behind Disney’s shift are ones we’ll see repeated with Generative AI. It can create great results, faster and cheaper, and allow for more time and resources to be dedicated to compelling storytelling.
Creative industries get ready
When new creative tools and technologies are developed, they go through a cycle – denial, mocking, hubris, adoption. These stages are playing out around Generative AI in the space of creativity.
Denial: flat-out rejecting that AI creative tools can live up to the task – either relegating them to other industries or arguing for exceptions and exclusions.
Mocking: making fun of early attempts and outputs and constructing examples where the AI creativity is least likely to succeed.
Hubris: a phase of hype, wonder and experimentation with a new tool to see what it can do, as creative practitioners learn and explore the new potential.
Adoption: as the magic gives way to practicality, looking at how it can be utilised and incorporated into existing workflows.
All of these behaviours have been on display in recent months and years, from defensive op-eds to over-hyped Twitter threads of enthusiastic experimentation, and even Nick Cave calling out Generative AI songwriting efforts as “a grotesque mockery of what it is to be human”. It’s clear that the creative industry is paying attention.
How the main areas of Generative AI can be worked into the creative process
These tools can map onto every stage of the creative process, playing a wide range of complementary roles.
Using text or image generation at each of these stages can open up a new model for how we can work.
Inspire: accelerating desk research, shaping the strategy, developing a creative springboard.
- Using a generative text model to synthesise an array of research and generate some inspiring or interesting initial propositions, insights or language.
- Alternative uses could include: an editor, a research assistant, a moodboard visualiser, or a workshop participant.
Create: a creative partner or assistant who can either inspire new thinking, or refine and visualise existing thinking.
- Getting inspiration and assistance from generative AI to come up with visual moodboards, language and phrasing, write-ups and initial ideas.
- Alternative uses could include: a superpowered Thesaurus, a visualiser, or an editor/writing assistant.
Deliver: creating and refining assets at scale, from SEO blogs and web copy to stock images.
- Using generative AI tools to create long-form content based on simple prompts and inputs for SEO and web copy.
- Alternative uses: design assistant, music & effects generator, post-production assistance.
Distribute: versioning, checking, optimising and learning based on performance.
- Taking pre-approved assets and messages and combining them into live A/B test variants across media, then tweaking the assets based on the performance feedback.
- Alternative uses: automated Brand Guardian, asset versioning for media formats or analytics support.
Below are illustrative images generated in minutes using Midjourney – from UX inspiration, to interior design moodboards, to product concepts and packaging mock-ups.
Bringing this all together, there are several ways to combine Generative AI with workflows and create automated creative management platforms – connecting the whole process, from brief and inspiration, through creation and delivery, to distribution.
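As a sketch of what connecting that process might look like in code, the snippet below combines pre-approved messages and images into A/B test variants, the kind of mechanical versioning described under Distribute. The brand assets and the generate_headline_variants helper are hypothetical placeholders; in a real pipeline the helper would call a generative text model, as in the earlier sketch.

```python
# A sketch of automated asset versioning for A/B testing (the "Distribute" stage).
# Brand assets and the generate_headline_variants helper are hypothetical.
from itertools import product

def generate_headline_variants(brief: str, n: int = 3) -> list[str]:
    """Placeholder for a call to a generative text model (see earlier sketch)."""
    return [f"{brief} - variant {i + 1}" for i in range(n)]

# Pre-approved building blocks signed off by the brand team.
approved_images = ["hero_shot.png", "lifestyle_shot.png"]
approved_ctas = ["Shop now", "Learn more"]
headlines = generate_headline_variants("Reusable coffee cup launch")

# Combine approved elements into every permitted variant for the media plan.
variants = [
    {"headline": h, "image": img, "cta": cta}
    for h, img, cta in product(headlines, approved_images, approved_ctas)
]

for v in variants:
    print(v)
# Each variant would then be trafficked into an A/B test, with the winners
# fed back to shape the next round of headlines.
```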
Not if, but when
This isn’t just hypothetical wishful thinking. Although there are still complications and hurdles to overcome, there are many precedents.
The Economist already used it to create a front cover in June:
AI-generated artwork has been entered into competitions, and won prizes:
Jason Allen, First prize, Colorado State Fair, Manipulated Digital Imagery category, 2022
- The techniques are widely used in virtual movie production, music creation and design across the globe.
- Most of the tools we have just explored haven’t been designed with advertising creativity in mind. We are at the tip of the iceberg of how we can tailor these tools to our tasks. With dedicated and specially built AI models, we can go much further. Current AI models don’t worry about things like brand guidelines or media specifications – but it wouldn’t be difficult to include these within an AI model.
Confronting the challenges of Generative AI
However, these changes bring disruption, challenges and controversy. We don’t yet have the regulatory or legal frameworks in place, there are real risks of algorithmic bias and problematic usage, and it is not a magic, holy-grail solution.
We can map the risks and impacts, ranging from clear examples of where generative AI is less successful, to cases where the tools work as intended but still have unintended or undesirable impacts on society and the industry.
Internal challenges are those that primarily hurt the creative industries themselves, whereas external challenges are negative impacts on the wider world:
Not working well, internal = Convincing-looking nonsense
Firstly, Generative AI shows an enormous amount of promise, but it is not perfect. The ‘uncanny valley’ is a concept from robotics describing how robots that look almost, but not quite, human end up feeling eerie rather than familiar. Many AI artworks still sit in that valley, with tell-tale artefacts left behind, such as people with extra fingers.
AI-generated text can read well but be factually incorrect. A real risk is ending up generating convincing-sounding or convincing-looking nonsense.
AI systems don’t understand context, nuance or intent outside of the prompt and the learning data they are operating from. They will, for the moment, require oversight, input and editing to make the outcomes work. It can often be quicker, easier and cheaper to do something manually, especially if you have a very clear idea of what you want.
Lastly, they are good at creating ‘good enough’, but in the subjective world of creativity that will not always be ‘enough’. We may unleash an explosion of rubbish.
To overcome this, we need to ensure that we work with the machine, we understand how it works, and we refine and elevate our role as overall creative director of the process.
Not working well, external = Off-brand, biased, illegal or discriminatory work
Generative AI is also built on large datasets. This works very well in most cases. However, without proper scrutiny, understanding or oversight, it introduces a host of new problems.
Algorithmic bias is a real problem. Biased or limited datasets lead to biased outputs.
As we make progress as an industry in getting representation right, AI could undo some of that through unintended choices in language or visual representation. For example, while it is relatively easy to ensure AI isn’t overtly racist or sexist, it can still be so in more accidental and insidious ways: AI can easily default to a narrow aesthetic, or make unintentionally coded or biased language choices.
Another challenge is that the learning data is often constructed from other artists’ work, or from protected and trademarked material. If your AI generates something that breaks copyright, or replicates a protected artistic style, this introduces a new legal challenge.
To overcome this, we need to start taking AI ethics seriously across the board, and to properly interrogate how these new tools work, the data they are using and the implications for the industry. Experimenting is great, but ignorance is not an excuse as we start to apply these tools more broadly.
Working well, internal impact = Damaging a fragile creative ecosystem
Assuming we can avoid the above problems, we end up with a powerful, disruptive, almost limitless creative engine to plug into our industry.
Creativity is difficult. We can’t simply mine it out of the earth or intensively rear it. Margins are thin. The conditions for compelling creativity that works are delicate and complex, and have already seen a huge amount of disruption.
We could end up outsourcing the wrong things, automating too fast and too blindly, and, along the way, losing the magic and the craft. If all agencies are simply typing prompts into the same AI tools, we lose our competitive advantage. Tight budgets, split across ever more SaaS fees, will be stretched even thinner.
If Generative AI is not well implemented, we will damage what works today while also failing to realise the potential of AI.
It is not a magic, silver-bullet solution, and it will take time and effort to tap into this potential. To overcome this, care and effort must be taken in understanding how generative AI fits into our process, starting with a fresh look at the creative process and an understanding of the complementary strengths of human and AI.
Working well, external impact = Unintended and unexamined impact on culture
Lastly, assuming we overcome these hurdles, we are still unleashing a new force on the world. Advertising and creativity have an often overlooked role and responsibility in shaping culture.
Dangerous beauty standards, the concept of anti-aging, the diet industry, rampant consumerism, and conspicuous consumption have, in part, been constructed by advertising.
One potential danger of AI is that it takes unusual and possibly undesirable approaches to achieve its goal. Tasked to maximise sales, an AI might manipulate people, or shape culture in a way that we would not want.
Safeguards have been built into most of the Generative AI tools to avoid the most egregious misuse – GPT shouldn’t write you racist text, and DALL-E 2 and Midjourney block the creation of sexual imagery and prompts containing terms like ‘gore’. However, more complex societal issues require more thought and more care.
To overcome this, we need to create a charter for ethical AI in advertising, but also build explainability and governance into any tools we use. It’s not good enough to use something just because it works – we need to understand how and why it works to ensure it won’t have unintended consequences. It also means we can’t expect to leave the machines to themselves – we must work with them to ensure we use them in a way that is right for citizens and the industry, as well as our brands and our businesses.
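One practical way to start building that governance in is to screen generated copy with an automated content check before it reaches human review. Below is a minimal sketch assuming OpenAI’s moderation endpoint; the surrounding workflow is purely illustrative, and such a filter complements rather than replaces human and legal review.

```python
# A minimal governance sketch: screen generated copy with an automated content
# check before it enters the review workflow. Assumes OpenAI's Python client
# and moderation endpoint; the surrounding workflow is illustrative.
import openai

openai.api_key = "YOUR_API_KEY"

def passes_automated_check(text: str) -> bool:
    """Return True if the moderation endpoint does not flag the text."""
    result = openai.Moderation.create(input=text)
    return not result["results"][0]["flagged"]

draft_copy = "Generated ad copy goes here."
if passes_automated_check(draft_copy):
    print("Passed automated check: route to human and legal review.")
else:
    print("Flagged: log the output, review the prompt, do not publish.")
```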
So what should agencies and creatives be doing right now?
This isn’t a plug-and-play miracle cure. However, it can be helpful across agencies and creative briefs right now. More importantly, learning and experimenting today is the only way to get ready for bigger changes and evolutions tomorrow.
Four key things to start today:
Promote: sharing education, encouragement, and skills across the organisation. Familiarity and understanding are key for different individuals to learn how to use the platforms. Identify and connect those who are enthusiastic into a creative community, and set up monthly sharing sessions.
Play: get everyone exploring and playing with the tools, by creating a supportive space for it, carving out additional time and leading by example. Run an internal AI art competition. Use it to design some internal art or comms. Showcase what individuals have done in your organisation. It also means earmarking some budget for training and licences. Many of the platforms are free or low cost to start experimenting with, but it’s worth encouraging adoption by supporting or funding this.
Pilot: more structured programmes to integrate generative AI tools into workflow. Audit and identify the different creative tasks and briefs for the year and design a testing and learning plan. This will come with some resource cost but delivers potential learnings and efficiencies in the long term. Define what you hope to learn with each initiative and manage expectations internally and with clients.
Plan: focus on how we can scale and implement learnings more widely across the business. Whether a business investment and roll-out plan, or a strategic approach for how to mitigate risk or develop capability. There will be an inevitable rush to use these tools for an award or headline-grabbing idea (and there is nothing inherently wrong with this). But a scattergun approach like this will ultimately fail to achieve the wider reaching longer-term benefits.
The revolution is near. Get ready
Generative AI has the potential to be the biggest disruption specifically to creative industries and roles in decades. It is not an option to ignore the coming changes. Great disruption brings conflict, opportunity, and challenge. Knowing yourself, the landscape and the changes will give you the best possible tools to surf this next wave of disruption.
Written by Oliver Feldwick Head of Innovation, The&Partnership