AI as a collaborator in the design industry
Contents
01 Introduction
02 Different AIs, and their roles in our world
03 What can we expect from AI's role in the coming decades?
04 Conclusion
Introduction
The term AI, or Artificial Intelligence, has multiple connotations in the modern age. For many people, humanoid, sentient robots, such as Ava from the film Ex Machina, come to mind. Other high-tech developments have been well highlighted in the media, such as self-driving vehicles and innovations in healthcare like the Google-developed AI that can identify cancer better than trained radiologists.

However, the reality is that AI is now ubiquitous, enhancing many small, diverse aspects of our day-to-day living. It can find us the quickest route to drive to work by monitoring and evaluating traffic patterns. The major tech brands utilise it to tailor advertisements, films, articles and much of what we consume, by perfecting detailed profiles of each user. While we may be aware that AI is all around us, there is such a proliferation of applications in our everyday lives that it can be hard to know whether each one has a positive effect on our lives.
So, how do we define AI? To a degree, this is subjective, as there are so many definitions in circulation. For many, AI is
artificial intelligence
noun
When a computer has developed, or been given the ability to not just think, but learn from its experiences.
This is the definition Google presents when 'artificial intelligence definition' is searched. However, I do not consider it to be entirely accurate, as there are numerous instances where a computer can both think and learn without being considered AI.
Random number generation is a computer process that far predates AI, and it underpins many of the computer services we use, such as encryption and videogames. RNG is used by a computer to generate numerical values that are either seemingly random, known as
pseudorandom
adjective
(of a number, a sequence of numbers, or any digital data) satisfying one or more statistical tests for randomness but produced by a definite mathematical procedure.
or genuinely random, known as
true random
noun
any event or process that occurs without a cause or source.
This is just one of the actions a computer performs that could qualify as 'thinking'. Similarly, the concept of a computer learning is an even older construct, and by its broadest definition could include the simplest computer storage system, such as that found in a calculator. Yet even the more abstract technology used in true-random RNG, a computational process that can theoretically generate values in an entirely unpredictable fashion, isn't considered AI.
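To make the distinction concrete, here is a minimal sketch of a pseudorandom number generator: a linear congruential generator written in Python. The constants are illustrative rather than taken from any particular library. Every value is produced by a fixed mathematical rule, yet the output can pass basic statistical tests for randomness, which is exactly why this kind of 'thinking' falls short of intelligence.

```python
# A minimal sketch of a pseudorandom number generator: a linear congruential
# generator (LCG). The constants are illustrative, not tied to any particular
# library implementation.

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield an endless stream of pseudorandom integers in [0, m)."""
    state = seed
    while True:
        # Each value follows deterministically from the last, yet the stream
        # looks statistically random.
        state = (a * state + c) % m
        yield state

gen = lcg(seed=42)
print([next(gen) % 100 for _ in range(5)])  # five 'random' values between 0 and 99
```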
To keep this dissertation consistent, I am going to use a definition of artificial intelligence found in Jon Rogers and Andrew Prescott's short essay, ‘AI Rules?’. They define this as
“Computational methods to mimic intelligence and creativity”
(Rogers and Prescott, 2018)

For me, creativity is the crucial word here. Creativity is what distinguishes a computer being smart from a computer being powerful. To label any instance of a computer doing something better than a human as Artificial Intelligence is almost as inappropriate as comparing a boxer to a writer. Great boxing is like traditional computing: it achieves more only by performing more efficiently, with harder hits or faster CPUs. Great writing, like true AI, can only be achieved by adopting an unconventional methodology, such as rethinking the task entirely and approaching it from a completely new angle.
This idea of creative computing inevitably raises the issue of AI's relationship with human creativity, and specifically, for me as a graphic design student, the type of human creativity I am most familiar with: design. It raises many questions which, as the technology evolves, lie at the heart of what it means to be a graphic designer.

– How and when do we use AI in Design?

– Do we use it because it can assist us to be more creative, or just because it can make things easier for us, even making us lazy?

– Are there applications in the design industry which should be reserved solely for the human mind?

– Should we credit AI's use in works, as if it is another member of a team, or maintain that AI is just another tool for designers to use?

I aim to address these questions in this dissertation. Chapter One examines the manifestations and applications of Artificial Intelligence in the world around us, outlining instances of AI being utilised to both positive and negative effect in the design industry and observing how our relationship with it is starting to form; Chapter Two explores what we can expect from AI's role in the coming decades. For this speculation I will draw comparisons with the much older technological advancement of automation, in the hope of predicting how AI's future may pan out. I will also discuss some of the concerns that many people share regarding AI, alongside some of the moral and ethical issues that surround the subject.
Different AIs, and their roles in our world
It is important to understand that the term Artificial Intelligence is broad, covering a multitude of differing programs that work in very diverse ways and present an array of results. In 2016, Arend Hintze, an AI researcher at Michigan State University, wrote a short article in response to the 'White House report on Artificial Intelligence', published that same year. Hintze outlined four categories of AI that he feels are essential to understand before overcoming
"the barriers that separate machines from us – and us from them."
(Hintze, Theconversation.com)

The four types of AI Hintze defines are:
Type 1 - Reactive Machines
This category of AI is what most non-AI researchers will be familiar with, as it's by far the most closely related to how computers traditionally work, and consequently simple and easy to understand. Reactive machines essentially run off a set of parameters and base their actions on them. Unlike the other three categories, this type of AI does not reference its past to inform future decisions. It can only think forward and base its decisions on what it has always known.
Type 2 - Limited Memory
In contrast, this form utilises its own history, as well as considering others, to make increasingly sophisticated decisions. An example of this is Google DeepMind, a model-free AI.
model-free AI
noun

systems that are versatile enough to use in a range of tasks without adaptation.
DeepMind doesn't need to know the rules of chess to play. Instead, it learns through reinforcement learning, carrying out its function in a trial-and-improvement approach: it takes numerous, sometimes hundreds of thousands of attempts at a task, playing against itself internally while constantly noting what did and didn't work. It then carries the successful elements of previous iterations forward to improve future attempts.
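The toy sketch below illustrates the trial-and-improvement loop in its simplest form: tabular Q-learning on a five-cell corridor. It is only an illustration of the underlying idea of attempting, noting what worked, and biasing future attempts; DeepMind's actual systems layer self-play and deep neural networks on top of this, and none of the parameters below come from their work.

```python
import random

# A toy illustration of trial-and-improvement learning: tabular Q-learning on
# a 5-cell corridor. The agent earns a reward only when it reaches the last
# cell, and over many attempts it learns which action works in each cell.

N_STATES = 5            # cells 0..4; the reward waits at cell 4
ACTIONS = [-1, +1]      # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):              # many cheap attempts at the task
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit what has worked before, occasionally explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Note what did and didn't work, and fold it into future attempts.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy is simply "step right" in every cell.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```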
Type 3 - Theory of Mind.
Currently we have not developed any AIs that fully qualify in this class, as it is defined by an AI's ability to truly consider the world around it and so develop its own opinions. For this type of AI to exist, it would require access to more types of information than we have ever come close to giving an AI. For an AI to truly make a decision that is not random or predetermined in any way, it must be allowed a similar quantity of information and experience as a human brain accumulates throughout its life. At this point, that is a truly incomprehensible volume of factors and experiences.
Type 4 - Self-awareness.
This supposedly dangerous form of Artificial Intelligence is probably the one we are most familiar with in a fictional context. Every time a rogue AI, such as HAL 9000, the computer in Kubrick's 2001: A Space Odyssey, starts questioning its purpose, it could be considered to have a degree of self-awareness.
It is too early to say if we will ever be able to develop Artificial Intelligence as sophisticated as this. But if we do, it could potentially be the first true form of Artificial Intelligence to satisfy Alan Turing's famous test, described in his seminal 1950 Mind article, 'Computing Machinery and Intelligence'. The purpose of the test
"Is to determine if a machine has human-level verbal behaviour."
(Shiber, 2007)

Passing the Turing Test would be the Holy Grail: proof that true Artificial Intelligence is achievable. Any modern version of the test would also have to take into account the developments in physical robotics over the past fifty years.
The choice of the word collaborator in the title of this dissertation is entirely intentional. It is often speculated that AI will replace human workers in certain industries. However, what is discussed less often is how AI is entering the role of collaborator rather than acting as a replacement. I feel this concept of AI assisting human workers is far more prevalent in today's design industry, especially in systems built on the Type 1 model of AI.
A particularly relevant example of this type of collaboration can be found in the 2018 article, "AVA: The Art and Science of Image Discovery at Netflix", written by a team that includes several Netflix software engineers. The article explores how a group of Type 1 AI systems, known as Aesthetic Visual Analysis, or AVA, is implemented by Netflix as part of a thumbnail production pipeline. With a proposed budget of 15 billion USD for content in 2019, it is undeniable that Netflix outputs a vast amount of visual content, all of which needs thumbnails generated for it. Without AVA the process would start with someone sitting through the entirety of a given TV show or film, picking out frames suitable for a thumbnail. Of course, no single person could realistically watch all this content, so the task would have to be divided up among many thousands of people. As well as the immense cost involved, the human factor would inevitably result in inconsistency in the images selected.
Instead, Netflix opts to use AVA, which analyses the media to pick out a selection of suitable frames for publicity. It does this automatically and efficiently, basing its selection on numerous factors, such as motion blur, face recognition and compositional traits. At this point the collaboration takes place: the remainder of the pipeline is carried out by a human designer, who double-checks the suitability of the chosen frames, adds text and applies minor tweaks, such as cropping, to complete the thumbnail.
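The sketch below suggests how such a frame-scoring stage might look in principle. The Frame structure, features and weights are invented for illustration and are not Netflix's actual pipeline; the point is simply that measurable traits like sharpness, face coverage and composition can be combined into a score that shortlists frames for a human designer to finish.

```python
from dataclasses import dataclass

# A toy sketch of an AVA-style frame-scoring heuristic. The features and
# weights are invented for illustration; a real pipeline would compute them
# with computer-vision models over every frame of an episode.

@dataclass
class Frame:
    timestamp: float       # seconds into the episode
    sharpness: float       # 0-1, higher means less motion blur
    face_area: float       # 0-1, fraction of the frame covered by detected faces
    rule_of_thirds: float  # 0-1, how well subjects sit on compositional lines

def score(frame):
    # Weighted sum of the factors mentioned above: motion blur, face
    # recognition and compositional traits.
    return 0.4 * frame.sharpness + 0.35 * frame.face_area + 0.25 * frame.rule_of_thirds

def shortlist(frames, top_n=5):
    """Return the top_n candidate frames for a human designer to finish off."""
    return sorted(frames, key=score, reverse=True)[:top_n]

candidates = [
    Frame(12.0, 0.9, 0.6, 0.8),
    Frame(47.5, 0.3, 0.9, 0.4),   # blurry, so it scores lower
    Frame(90.2, 0.8, 0.2, 0.9),
]
for f in shortlist(candidates, top_n=2):
    print(f.timestamp, round(score(f), 2))
```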
I admire this combination of computational efficiency and human attention to detail: It leads to an optimised and more efficient process. It seems sensible to allocate the time-consuming and repetitive tasks to machines and keep human workers for more creative and abstract jobs which benefit from the personality that a human touch can lend.
A likely, and not so unintentional, side-effect of this approach is that it opens up the option of variable thumbnails for Netflix. With a faster way to generate thumbnails, Netflix can tailor them to a specific user by applying A/B testing.
A/B testing
noun

the practice of showing two variants of the same web page to different segments of visitors at the same time, and comparing which variant drives more conversions.
(VWO.com)

This would lead to a more personal, fulfilling experience on the platform, and consequently result in subscribers spending more time on the site.
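A minimal simulation of the idea, with invented click rates, might look like this: traffic is split evenly between two thumbnail variants, and the variant that drives more clicks wins.

```python
import random

# A minimal sketch of A/B testing two thumbnail variants. Each visitor is
# randomly assigned a variant; the click rates below are invented purely to
# drive the simulation and are unknown to the 'experimenter'.

TRUE_CLICK_RATE = {"A": 0.10, "B": 0.13}

results = {"A": {"shown": 0, "clicked": 0}, "B": {"shown": 0, "clicked": 0}}

for visitor in range(10_000):
    variant = random.choice(["A", "B"])             # split traffic evenly
    results[variant]["shown"] += 1
    if random.random() < TRUE_CLICK_RATE[variant]:  # simulate the visitor's choice
        results[variant]["clicked"] += 1

# Compare observed conversion rates; the better thumbnail surfaces on its own.
for variant, r in results.items():
    print(variant, round(r["clicked"] / r["shown"], 3))
```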
Netflix clearly wishes to have more viewers, and for those viewers to have a better experience. As their chief executive officer, Reed Hastings, states,
"We’d like to have everyone on the internet using Netflix."
(Hastings, Bloomberg.com)

This optimised thumbnail production process is just one of the ways they hope to achieve that.
AVA is just one of dozens of AI systems through which software collaborates with designers. Procedural Generation is another Type 1 AI process, frequently used in videogame design. Videogames are for the most part narrative-based, even if the narrative is as simple as level one leading to level two, then three, and so on. However, with recent advances in both ambition for the medium and the other technology used in videogames, these narratives have started to take on a non-linear structure.
Late Shift is a videogame by Tobias Weber, released in 2016. The game consists of a story told entirely through live-action cinematic sequences, as in a traditional film. Unlike a film, though, there is a gameplay aspect: every critical decision the protagonist makes in the narrative is made by the player, who chooses from one of two options on the screen. For the crew designing this production, this meant that for every decision made by a player, two separate outcomes, rather than a single linear path, had to be written, acted, filmed and edited. The result is a story that gives the player the sense that their decisions matter, but it entailed a huge number of additional production hours, and therefore an increased budget. There is currently no way Procedural Generation could be implemented in a live-action production such as Late Shift, but with new technologies such as digital faces emerging, this might not always be the case.
A more traditional videogame, Cuzzillo's Ape Out, released in 2019, attempts to achieve the same effect of presenting a non-linear narrative. However, by employing a PG approach, it took a fraction of the resources and labour that would have been required without PG.
The premise of Ape Out is straightforward. The player is an ape escaping from various states of captivity, negotiating its way through levels filled with obstacles and enemies to attain a goal. These levels are intentionally tough and require the player to make multiple attempts. To maintain a sense of unfamiliarity as the player progresses through the many stages, Procedural Generation is implemented to shuffle the components of each level. Consequently, the conceptual goal of feeling like an ape escaping the unknown is maintained throughout the inevitable failures and reattempts. The genius of the game lies in the variables Cuzzillo permitted the AI to change. Many videogames had previously used Procedural Generation to develop their levels, but balancing content and variables rarely resulted in a fair or satisfying experience: the developers assigned too much responsibility to the PG, resulting in levels lacking convincing diversity in their design.
Ape Out, however, allows the PG systems to alter numerous factors, such as wall, door and enemy placement, to keep each level feeling fresh, while maintaining several constants, such as overall level length, density and composition. This leads to unlimited AI mutations of the human designer's base level, so a player's good memory offers no advantage, and there is still satisfaction in defeating each level.
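A toy version of that balance might look like the sketch below, in which wall, door and enemy placement are shuffled on every attempt while the level length and enemy count stay exactly as the designer fixed them. The parameters are invented and bear no relation to Ape Out's real generator.

```python
import random

# A toy procedural generator: placement is shuffled each attempt, but the
# level length, wall count, enemy count and door count are constants chosen
# by the human designer.

def generate_level(length=20, n_walls=6, n_enemies=4, n_doors=2, seed=None):
    rng = random.Random(seed)
    cells = ["."] * length                       # "." = open floor
    slots = rng.sample(range(1, length - 1), n_walls + n_enemies + n_doors)
    for i in slots[:n_walls]:
        cells[i] = "#"                           # wall
    for i in slots[n_walls:n_walls + n_enemies]:
        cells[i] = "E"                           # enemy
    for i in slots[n_walls + n_enemies:]:
        cells[i] = "D"                           # door
    return "".join(cells)

# Every attempt feels unfamiliar, yet the difficulty envelope stays constant.
for attempt in range(3):
    print(generate_level(seed=attempt))
```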
In 2018, as part of a university trip to Amsterdam, we visited creative studios working across a broad spectrum of design. The studio that particularly impressed me was MX3D, a relatively young company, founded in 2014, which aims to
"introduce the advantages of 3D metal printing to new, high impact industries"
(MX3D.com)

MX3D's 3D printing process evolved from combining an industrial robot arm, of the kind used in large-scale factories such as car manufacturers, with a standard MIG-style welding machine. This simple form of welding works in a similar way to a hot glue gun, but bonds steel. With only small modifications, MX3D attaches the welding nozzle to the robot arm, then programs it to 3D print the client's approved artwork in steel.
With the aspiration of becoming an industry leader in this innovative 3D metal printing, MX3D wanted to create a proof of concept to show the new process's potential. Being based in Amsterdam, a canal bridge seemed an obvious choice. Joris Laarman, one of the lead designers on the project, explained to us one of the dozens of logistical challenges they faced during production. The challenge was a common one in bridge design: reconciling high strength with low weight. In this type of mono-material construction, adding strength inevitably increases the weight in correlation. The only solution was to add material to the bridge only in the specific areas where it increases the strength by a proportionately significant amount.
It was solved using a form of Type 2 AI known as machine learning, which Tom Mitchell defines in his book Machine Learning:
"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance in tasks T, as measured by P, improves with experience E."
(Mitchell, 1997)

The team was thus able to test their initial designs in software which simulated the structure to reveal its weak and strong points. Instead of having to go back and forth making tests manually, the machine learning program significantly sped up the process.
The Artificial Intelligence program cycled through the process constantly, tweaking minute details of Laarman and his team's designs and running strength simulations to see the effects of the changes. It would note the tweaks that positively or negatively affected the strength-to-weight ratio of the design and factor them into future iterations, eventually producing a theoretical bridge that could physically exist at this size, in this material. Moreover, the team monitored the cycles and set parameters for the AI to follow where needed, ensuring the bridge kept certain aesthetic traits as well as working as a functioning bridge. The AI, being Type 2, was not able to understand the context and purpose of a bridge, only how the strength-to-weight ratio could be improved.
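A heavily simplified sketch of that simulate-tweak-keep loop is shown below. The 'design' is just a list of material thicknesses, the strength function is a stand-in for the team's real structural simulation, and none of this is MX3D's actual software; it only illustrates how repeated small tweaks, kept whenever they improve the strength-to-weight ratio, concentrate material where it matters most.

```python
import random

# A highly simplified simulate-tweak-keep loop. Each bridge segment carries a
# different load, and the 'simulation' says the most overloaded segment limits
# the whole structure. Everything here is a stand-in for illustration only.

LOADS = [1.0, 2.0, 4.0, 6.0, 4.0, 2.0, 1.0]      # heavier load towards mid-span

def weight(thicknesses):
    return sum(thicknesses)

def strength(thicknesses):
    return min(t / load for t, load in zip(thicknesses, LOADS))

def score(thicknesses):
    return strength(thicknesses) / weight(thicknesses)   # the ratio to improve

design = [3.0] * len(LOADS)                      # the designers' uniform starting point
best = score(design)

for iteration in range(20000):
    candidate = design[:]
    i = random.randrange(len(candidate))
    # Tweak a minute detail: add or remove a little material in one place,
    # within a minimum thickness the human team could constrain.
    candidate[i] = max(0.5, candidate[i] + random.uniform(-0.1, 0.1))
    if score(candidate) > best:                  # keep only what improves the ratio
        design, best = candidate, score(candidate)

# Material ends up concentrated where it raises strength the most.
print([round(t, 1) for t in design], round(best, 4))
```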
This, I believe, is a fine example of how human designers and AIs can collaborate, lending their respective strengths to a design project to ensure an efficient design process without compromising the creative outcome.
What can we expect from AI's role in the coming decades?
The realisation that collaboration, rather than replacement, is likely to be both the most efficient and effective way forward has gained traction in recent years. As with AI's great-grandfather, automation, it is in many ways the logical progression for the technology. In the late 1920s and early 1930s, automation-anxiety was developing, as it was forecast that the then-new and rapidly developing technology of automation would start to take over the jobs people were trained to do in factories, such as sewing or car production. In a 1928 New York Times article headlined 'March of the Machine Makes Idle Hands', Evan Clark wrote,
"In concrete construction, building materials are mixed like dough in a machine, and literally poured into place without the touch of a human hand."
(Clark, 1928)

This puts into perspective how today we think nothing of tasks previously undertaken by people being carried out by machines.
It's easy to see parallels between the fears society then had about automation and our contemporary fears about AI. Clark's scepticism wasn't baseless, with millions of jobs lost to automation during the 20th century. However, the numbers show these fears were for the most part unjustified: automation increased its presence across almost all industries, but created as many employment opportunities as it ended. In large part, this is due to the simple fact that although automated tools, from sewing machines to the latest version of Photoshop, can complete certain tasks better than humans, they will always require a human operator in some capacity. The same could well apply to AI in the coming decades. If we do end up relying primarily on the Type 1 and Type 2 formats of Artificial Intelligence, it's likely the collaboration will continue, and human workers and designers will maintain a place, working together, in the creative industries.
Professor Gil Weinberg, founding director of the Georgia Tech Center for Music Technology, mirrors this view. In the 2019 documentary The Age of AI, he states,
"People are concerned about AI replacing humans, and I think it's not going to replace them, it's going to enhance humans."
(Weinberg, 2019)

Weinberg is known for creating the first robotic musician, Shimon, the marimba playing robot. Shimon can take inspiration from human marimba players and uses machine learning to compose and play its own music, which has been performed globally.
Weinberg's team then created a prosthetic arm for an amputee drummer, consisting of two drumsticks. One performs the typical action of a prosthetic, enabling him to play again; the other is AI-directed, using the same systems as Shimon, enabling the drummer to play what is known as polyrhythm, which is impossible for a traditional drummer and supposedly sounds even better.
What could potentially set Artificial Intelligence apart from automation is the next significant step: the creation of Type 3 and 4 AIs, ones that can select their own parameters, set their own goals, and imitate human consciousness flawlessly. These next levels might not be so willing to collaborate with humans. It is speculated that such AIs may be able to develop themselves exponentially and possibly even bring about the end of humanity. While this may appear to exist solely in the realms of science fiction, many of the world's brightest, most forward-thinking minds feel this possibility to be a genuine threat, with a potential to impact our lives similar to that of climate change. Author, neuroscientist and philosopher Sam Harris states in his 2016 TED Talk,
“Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead”
(Harris, Ted Talk, 2016)

Harris proves his point numerous times throughout the talk by making jokes which are in fact predictions about AI's future. It seems that decades' worth of fictional AIs have desensitised us to the possibility that one day these developments may really occur.
The priority should lie in planning what rules and standards we should set for ourselves and in examining our ongoing relationship with machines. Before AI Types 3 and 4 become a reality and part of our lives, we need to be ready with a new form of law that protects both parties. At some point in the future, these programs will pass the Turing Test and be indistinguishable from humans. Surely then we need to address whether this qualifies them to have rights? We could even use the Three Laws of Robotics from Isaac Asimov's 1950 fiction collection I, Robot as a starting point.
“First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws”
(Asimov, 1950)
In their previously mentioned essay, 'AI Rules?', Jon Rogers and Andrew Prescott ask,
"Is Alexa a company slave? How ethical is your relationship with her?"
(Rogers and Prescott, 2018)

The question highlights how it would be considered unethical to keep an employee, friend or any other person in the corner of your room 24/7, then demand they order your toilet roll, sing you a song, or provide you with the weather forecast.
Perhaps as interesting is the authors' use of the pronoun "her" to refer to Alexa. This is common practice for users of personal assistants such as Siri and Alexa: the machine has a female voice, so we subconsciously use female pronouns despite the AI lacking gender. Interestingly, I found that Siri, Apple's AI personal assistant, will respond,
"I exist beyond your human concept of gender”
when asked what her/his/its gender is. This is, of course, a scripted response that likely came from many hours of debate and PR research at Apple. It would be more interesting to project forward, to wonder how a Type 3, or even Type 4 AI, might respond, and whether it would identify with a specific gender.
A further, equally fascinating debate surrounds the question of responsibility in Artificial Intelligence. Should AI be held responsible for its mistakes? If a self-driving car crashes, does the blame fall on the programmer? Similarly, should Artificial Intelligence be credited for the work it generates? Or should a creative work be credited entirely to the designer who implemented the AI, or should the AI's developer also be acknowledged?
There are many stances one can take on these questions, but for me it comes down to whether one considers Artificial Intelligence to be a tool or an entity. A hammer can't be blamed if you hit your thumb while banging in a nail, but you can blame an entity, such as another person, for hitting your thumb, as they can be considered responsible for their actions, however unintentional.
With regard to accreditation, I think the distinction holds up. Filmmakers recognise their cinematographer, not their cameras, in the end credits: the creative act was not that of the camera, but that of the operator. There is a grey area, however, which may become more significant as we develop from Type 2 to Type 3 AIs. There may be a transitionary phase, which we might call Type 2.5, that could make this tool/entity distinction more difficult to appraise.
In the 2011 'Monkey Selfie copyright dispute', nature photographer David Slater's camera was used to take a portrait of a macaque monkey. The complication came from the fact that the macaque pressed the remote shutter button multiple times, taking a series of pictures. One of the resulting selfies was remarkable (and saleable), and Slater published it as his own. This sparked a debate which lasted several years as to who owned the rights to the photo: the photographer, whose camera was used and who set up the shot, or the monkey who triggered the shutter. The animal rights group PETA filed a lawsuit in 2015 in favour of the macaque's ownership. However, the judge dismissed the case, as there was
"...no indication that the U.S. Copyright Act extended to animals."
(NPR.org)

Many believe that this verdict was only reached due to the law’s inability to foresee that it should include animals. Will the same happen when AIs start taking photos, and they are not credited as the authors, just because the laws have failed to adapt?
A significant percentage of developing AIs will theoretically pose no threat to the current human-dominated industry, as their purpose is to do things humans never did, or are unable to do efficiently. It seems counterproductive to worry about robots taking creative jobs when they are creating these same jobs to begin with, and are in many cases the only viable entity capable of carrying them out.
The concept of upscaling was theorised in fiction, such as the famous 'zoom and enhance' trope popularised by the TV show CSI, but was not possible until AI programs such as Gigapixel were developed by Topaz Labs. Comparable to human practices such as film colourising, where an artist manually fills in absent colour on pre-existing black-and-white footage, upscaling is the practice of filling in absent pixels. In our rapidly developing world, where camera images just a decade old can start to look dated, the options with this new technology are limitless, and no human jobs will be lost. It is also extremely unlikely there will ever be a manual, non-AI method for this process. Upscaling is just one of dozens of predominantly Type 1 AI processes that are either impossible or impractical for humans to carry out.
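For contrast, the sketch below shows the classical, non-AI baseline: nearest-neighbour interpolation, which can only reuse the pixels it already has. AI upscalers such as Gigapixel instead predict plausible new detail learned from large collections of photographs; nothing here reflects Topaz Labs' actual method.

```python
# A minimal illustration of what 'filling in absent pixels' means. Classical
# nearest-neighbour upscaling simply copies existing pixel values into the new,
# larger grid; an AI upscaler would predict plausible detail instead.

def upscale_nearest(image, factor):
    """Return a grayscale image (list of rows) enlarged by an integer factor."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in image
        for _ in range(factor)
    ]

tiny = [
    [  0,  64],
    [128, 255],
]
for row in upscale_nearest(tiny, factor=2):
    print(row)
```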
I believe AI's inherent problem is that it is too broad a term, embracing a multitude of definitions and taking hundreds of forms. How can one acronym, covering an entire field of computer science, be debated so widely when few understand the difference between it and the often referenced, but mislabelled, term supercomputing? To effectively comprehend the 'Artificial Intelligence uprising', which often refers only to Types 3 and 4, we must first understand where we are now.
When will these Type 3 AIs arrive? Given the exponential development within the technology, it is hard to predict, and using automation's century-long history as a reference does not prove accurate either. In a 2004 paper in The Journal of the Acoustical Society of America (volume 116), Raj Reddy, referring to fifty years of research, stated that
"Human level speech recognition has proved to be an elusive goal"
(Reddy, 2004)

However, just over a decade later, the technology is near flawless, and used in hundreds of types of computer systems.
In the modern age, it is very difficult to reliably predict technological advancements even a decade ahead, and with Artificial Intelligence it is made harder by periods known as 'AI winters'. The name is inspired by the term 'nuclear winter', and describes a period in which AI research, development and progress slow significantly, thanks to reduced interest and funding.
This was evident in the late 1980s, as IT developments fell short of what AI researchers had promised. The scepticism had been crystallised earlier in the landmark, but in hindsight short-sighted, Lighthill Report: in 1973, the UK Parliament asked Professor Sir James Lighthill to evaluate the state of Artificial Intelligence research.
Lighthill concluded that AI would never develop to a point where it would serve any real purpose for society. Moreover, he asserted,
"The general purpose of a robot is a mirage, an illusion of something that may be strongly desired."
(James Lighthill, 1973)

This contributed to the attitude that Artificial Intelligence simply doesn't work, an idea that remained prevalent until the turn of the 21st century, as few developments proved otherwise. Who is to say that history won't repeat itself, and that another AI winter won't occur, preventing the next significant leap forward entirely?
Conclusion
So, looking again at the questions I posed at the outset: how and when do we use AI? And do we use it because it assists us to be more creative, or just because we are lazy?
I believe we are slowly but surely approaching the tipping point which will lead us to Type 3 AI. The designer's vocabulary should therefore be adjusted to ask: how should we collaborate with AI? It will soon play a much greater role in our society than the tool we use today, and the importance of being prepared for that cannot be overstated.
Addressing the second part of that question, I would say AI's rise is born not out of laziness but out of curiosity, the same curiosity that drove Apollo 11 to the Moon. And we have been rewarded for this technological push, benefiting from the technologies developed as a result. As Douglas A. Comstock wrote in his paper 'NASA's Legacy of Technology Transfer and Prospects for Future Benefits',
"From the mundane to the sublime, these technologies have become part of the fabric of our everyday life, driving innovation, helping the economy, and adding to the quality of life not only in the United States, but around the world."
(Comstock, 2007)

And the thin sliver of faith we kept invested throughout the AI winter eventually paid off: new developments in healthcare, entertainment and, of course, design can be attributed to it.
Should there be roles in the design industry reserved for humans? At this point in time, there probably should be. If automation's lifespan is anything to go by, AI is in its infancy, and currently ill-equipped to take over many human-orientated tasks in design. I think it should be treated as a young designer would be: starting small and taking on more demanding roles as it becomes better equipped.
Although we should start giving consideration to AI law, it still feels far too early to say anything as definitive as "film directors cannot be AI", as such a rule would be based only on the AIs of today, which represent just the tip of the iceberg of AI's true capability.
As with most new technologies, Artificial Intelligence is far from perfect; few are when first developed. Given time to grow and evolve, AI will become an excellent collaborator, both in and out of the design industry. If we are prepared and open-minded, and continue to push the technological limits, this automation-scale advancement could truly benefit the creative industries.