Tent Talks Featuring Colin MacArthur – The Designer’s Dilemma: Navigating AI’s Impact on Design

Tent Talks Featuring: Colin MacArthur
Colin MacArthur
Adjunct Professor of Design and Digital Government
Bocconi University
Colin MacArthur is the former Director of Digital Practice and Head of Design Research at the Canadian Digital Service. Now he’s an Adjunct Professor of Design and Digital Government at Bocconi University in Milan, Italy. He also advises several organizations on design and research strategy. Colin was once described as a “die-hard artificial intelligence hater,” but has been teaching students to use AI to do UX research for several years now.

In this Tent Talks session with Colin MacArthur, we’ll explore the nuanced ways in which Artificial Intelligence, particularly Large Language Models, is reshaping the landscape of design. Colin, an expert in integrating AI with product design, will share his insights into how this technology is not just a tool but a collaborator that brings both challenges and opportunities to the creative process. From the evolution of human creativity in the age of AI to the ethical considerations and educational shifts required to navigate this new era, attendees will gain a comprehensive understanding of what the future holds for design professionals.

Why It Matters: As AI continues to make its mark across various industries, understanding its impact on design is crucial for professionals looking to stay ahead of the curve. This session promises to spark a thought-provoking discussion on the balance between technology and human creativity, the ethical implications of AI in design, and how we can prepare for a future where AI plays a central role in our creative processes. Join us to uncover the strategies, mindset, and skills needed to thrive in the ever-evolving world of design.

Session Notes

Session Overview

In the Tent Talks session with Colin MacArthur, titled “The Designer’s Dilemma: Navigating AI’s Impact on Design,” the conversation focused on the nuanced role of AI, particularly large language models (LLMs) like ChatGPT and Claude, in the design process. Colin shared insights into how AI has transformed design ideation and problem-solving without entirely replacing the human touch. The discussion also covered the unpredictability and transparency challenges associated with designing for LLM-based products, the evolving role of human creativity in the design process, and the ethical considerations and educational adaptations necessitated by AI’s integration into design.

Integrating AI into Design Workflow:

  • LLMs have significantly shifted Colin’s approach to design, especially in the ideation phase, by accelerating the move from idea generation to discernment and filtering.
  • AI’s capacity for generating diverse ideas helps bypass the initial, often time-consuming, brainstorming stage.
  • Despite AI’s assistance, the importance of human judgment, discernment, and taste remains undiminished.

Challenges in Designing for LLM-Based Products:

  • The unpredictability and non-deterministic nature of LLMs pose unique challenges, diverging from traditional design principles that emphasize predictability.
  • Designers are tasked with navigating these challenges by guiding users through the unpredictability of LLM outputs and equipping them with tools to manipulate and utilize these outputs effectively.

Role of Human Creativity:

  • The conversation highlighted a broader, more nuanced understanding of creativity, suggesting that while AI can take over repetitive aspects of creative work, elements like discernment and intuition become even more crucial.
  • Creativity within design is seen as evolving towards leveraging intuition and discernment over mere idea generation.

Ethical Considerations and Design Principles:

  • Ethical considerations remain paramount, with a focus on preventing, reducing, and reversing harm.
  • The unpredictability of LLM outputs necessitates a nuanced approach to design, focusing on outcomes rather than just technical performance.

Changes in Design Education:

  • Despite AI’s integration into design, fundamental design and research processes remain critical.
  • Emphasis is placed on hands-on experience with LLMs, accountability for outcomes, and the development of a nuanced understanding of when and how to trust AI tools in the design process.

Notable Quotes

  • “LLMs… have encouraged me to shift my method of work in some real ways.”
  • “Discernment and taste remain really important.”
  • “Predictability is the hallmark of good design.”
  • “Our work should prevent, reduce, and reverse harm in our societies.”

Reference Materials

  • Nielsen’s Usability Heuristics
  • Interface design principles for LLM-based products
  • Ethical guidelines for technology development and application

Session Transcript

[00:00:37] Chicago Camps: There’s a notion that AI, particularly large language models or LLMs, is neither a complete overhaul nor a trivial update to the design process. Can you share a specific example of how integrating AI into your design workflow has changed the way you approach problem solving or creativity?

[00:00:56] Colin MacArthur: Absolutely.

And I think the question is basically right. LLMs have neither completely revolutionized what design means to me, nor are they a trivial update. They are a substantial change, though, and have encouraged me to shift my method of work in some real ways. The biggest example that comes to mind is whenever I have to generate ideas about how to do something.

In the old way, when you found yourself with a problem to think through, either a big one or a little one, like how to label a button or how to solve a problem a client has, you would sit there and write things in your notebook, or maybe jot things down on sticky notes at your desk. For me now, I go to an LLM, usually ChatGPT, though Claude is pretty good too.

So specifically a few days ago, I was working with a client and there was this button that had to do some really odd things. The functionality of this thing wasn’t quite clear and we needed to figure out how to label it in three words. And so I went to ChatGPT and I said, okay, I’m a designer, we have the system that does this thing.

There’s this button, and can you please come up with 20 suggestions for how we label this button? It came up with lots and lots of ideas. Some of them were completely off the wall, not uses of the English language I would recommend. And none of them were perfect, but what the LLM did was move us past that first, churning-out-ideas phase of ideation and into the discerning and filtering phase much faster.
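The generate-then-filter loop described here is easy to support with a little tooling. As a purely hypothetical sketch (the numbered-list reply format and the three-word budget are assumptions for illustration, not anything shown in the talk), a helper that splits a chatbot’s numbered reply into individual label candidates for a human to judge might look like:

```python
import re

def parse_numbered_suggestions(reply, max_words=None):
    """Split a chatbot reply formatted as a numbered list ("1. ...", "2) ...")
    into individual suggestions, optionally keeping only those that fit a
    word budget, such as a three-word button-label constraint."""
    suggestions = []
    for line in reply.splitlines():
        # Match lines like "1. Sync Now" or "2) Refresh Data"
        match = re.match(r"\s*\d+[.)]\s+(.*\S)", line)
        if match:
            text = match.group(1)
            if max_words is None or len(text.split()) <= max_words:
                suggestions.append(text)
    return suggestions
```

The point is not the parsing itself but the division of labor: the model churns out twenty candidates, the code narrows them to those that meet a hard constraint, and the human does the discernment.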

The same thing is true when I am trying to think of ideas to bigger questions. As you mentioned, I teach design process to students here in Italy, and they often work on design challenges related to aspects of their life. Being students, one of those common challenges is, can we imagine a better way to find a job after university?

Okay, fair. The first thing we now do is fire up GPT and ask it for all its ideas about how to solve this problem. And that’s not because those are particularly original ideas, but because I can get all the obvious ideas off the table with the students immediately, and start to tweak or start to change or recombine what it says.

The way I think of it for ideation, it’s moving us a few steps forward much faster. It replaces that time when we would normally have to sit there and just think of things for an hour or two to get to a design decision, either big or small. But it’s not replacing, for me at least, that latter phase of, okay, what’s actually a good idea?

What’s actually the right label? What’s the right phrase? Discernment and taste remain really important. You’re not typing in there, figure out how to solve this problem and tell me. We’re saying, okay, there are these discrete pieces of problem solving: generating ideas, understanding the particular niche context we’re in, and you can help me do that a lot faster so I can use my human brain to do other things and make progress.

And what’s remarkable is that they usually do come up with different answers every time you ask. Every time I run this exercise with my students, where I say, let’s just get ChatGPT to give us the obvious answers to fixing your job-searching problem, we do it every year.

It says different things, right? And I know that the training data is not changing that fast. It’s not a deterministic system, but that’s good for creativity, right? That’s good for coming up with new ideas. Actually, I sometimes think our field is one of the ones where that’s the least of a problem.

I think there are lots of other places where people get worried that the system doesn’t have sort of consistent outputs for every input. But I think for us, using it as a design enabler, that can be a different little prod in your mind. It can awaken different things in your brain, and that’s great.
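For readers who want the mechanics behind this non-determinism: deployed LLMs typically sample each next token from a probability distribution rather than always taking the single most likely one, and a temperature setting controls how much that sampling varies. A minimal, generic illustration of temperature-scaled sampling (a sketch of the general technique, not any vendor’s actual implementation):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token index from a temperature-scaled softmax distribution.

    Near-zero temperature almost always picks the highest-scoring token
    (nearly deterministic); higher temperatures flatten the distribution,
    making varied outputs more likely on repeated runs."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the cumulative distribution
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

At temperatures near zero the choice is effectively fixed; raising the temperature spreads probability across more options, which is exactly the run-to-run variation that, as Colin notes, can be useful for ideation even though it undermines predictability.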

[00:04:49] Chicago Camps: There are challenges of designing for LLM-based products given their unpredictability and lack of transparency. How do you approach these challenges and what strategies do you find most effective in ensuring these products meet user needs?

[00:05:04] Colin MacArthur: Yeah, fair enough. I think there are a few really important things for designers to understand about LLMs that make them very different than the other programs that we design around and for, right?

We’re all very used to designing computer interfaces, and we know what the rules are for how to do that well: Nielsen’s heuristics and all of the great design patterns. There are some things about LLMs that are completely different, and we have to start there.

One of them is what we were just talking about. LLMs are not what’s called deterministic. You can put the same question into an LLM over and over, and every time it will give you a different answer. So if you think about that a little bit, it’s a major shift in the problem you need to design for.

Normally, our job as designers is to help users predict and prepare for what the system will give them, right? Predictability is the hallmark of good design: I’m not surprising the user with what appears here. But LLMs just don’t work that way. We can’t set them up so that we can always tell users exactly what they’ll get.

And sometimes what they get is not useful, and sometimes what they get is wrong, and sometimes it’s shocking. It’s a weird world where we simply can’t guarantee that level of predictability. And so I think one thing designers need to do when they’re incorporating LLMs in their interfaces is accept that the unpredictability is there and start thinking about how to guide their users through it, right?

So, how to help users, number one, understand, hey, this thing may do some weird stuff, right? And that’s the name of the game. The challenge for you, Mr. User or Mrs. User, is how you figure out what to do with this.

And I think that leads to the second key task for designers with this sort of unpredictability, which is that often what people can do with the output of the LLM is just as important as what they can input into it, right? I think we’re starting to see these LLM interfaces mature such that it’s not just spitting out some suggestion that you can copy and paste.

But they’re automatically feeding those suggestions into other tools that allow people to manipulate them and change them and move them around. And I think that’s incredibly important because, again, it’s not that we’ll ever be able to guarantee that the LLM’s output is 100 percent useful all the time.

The tools we give people for working with the output are essential to making it useful. But you’ve got to step back and think of it differently than just a traditional form using an algorithm or something like that. It’s a different set of tools. So that’s on the predictability side, the non-deterministic side.

I think the other challenge is that, unlike good computer programs and websites everywhere, LLMs do not explain why they say what they say, right? You can give them instructions to provide explanations, and get very good results that make up an explanation for us, but it’s not the same thing as what’s actually going on inside the machine.

In other words, I think it’s very hard to give users an accurate mental model of what’s happening, very hard to give people a really realistic understanding of how this thing works. As a result, I think we all just assume it’s a human, right? Our mental model is that this is just another creature, another human-like creature I’m interacting with.

You know, that’s not quite right either. So I think we have to, again, think about ways to coach people through creating their own explanations, or at least tests of what an LLM says, right? If you look at Perplexity’s UI, there are some great examples. This is the sort of search engine powered by LLMs.

They, like ChatGPT, will answer your question in nice paragraphs, but they’ll also suggest related sources and invite users to say, okay, if you want to know why we suggested this, maybe you can go look at these other sources, right? I think some of the key design challenges are, okay, how do we prepare users for the fact that these things are useful but unpredictable, and how do we equip users to create their own explanations, or at least test what these things are saying? Because I don’t think we’re ever going to get that out of the LLM neural network itself.

The way to think of it is, it doesn’t really understand in any meaningful way why it did this. But if this Russ guy really wants some rationale, what would be a reasonable rationale for this kind of thing?

It’s like the kid in eighth grade science class who doesn’t really understand the reasons, but can come up with some words that kind of make sense to justify the answer.

[00:10:14] Chicago Camps: It is considered that LLMs can generate ideas at a pace and volume comparable to skilled designers. How do you see the role of human intuition and creativity evolving as these models become more integrated into the design process?

[00:10:30] Colin MacArthur: I think the first thing we have to do is take a step back and look at what creativity really is, what creative work really is. I think we always assume creative work is the Eureka! moment where you stare at your blank paper and come up with some brilliant idea that you sketch out and execute. And that’s what we call creativity.

But as you and I both know, Russ, that’s not really what creativity looks like in the real world. Creativity is a lot of different kinds of sub-behaviors. Some of those are very repetitive, right? I think some of the most creative people I know, some of the most thoughtful people I know, draft and redraft what they’re writing or designing or shaping.

They go over it again and again and they try all of these different elements. I think some of the best designers I’ve worked with have their spaces plastered with different ideas and different alternative versions of whatever they’re working on. I think the work of generating those alternative versions may well be something that we do less or hand off more to LLMs and AI over time.

I think that piece of creativity, that sort of churn piece, may become less central. But I think there are lots of other elements of creativity that are going to become even more important. And I mentioned these before: I’ve been thinking a lot about discernment, or taste, the ability to detect what’s a good idea and what’s a bad idea and what’s helpful.

And certainly LLMs can make arguments about what they think is a good or bad variation, but the ability to know what a good argument is for a design, the ability to have a sense, sometimes rationally, sometimes more emotionally, for what will solve your problem or resonate with users or resonate with your audience.

I think all of those elements of creativity will become even more important. I think those are harder to automate in some ways. The other thing I’ll just say is that I think it’s important to remember that even when we ask LLMs to do so-called creative work, it’s never really totally creative in the sense that we’re often using, right?

They are still ultimately working with a corpus of language and knowing all they know about that language and how all those words are connected, they generate the best guess as to what will meet your desires. But they’re derivative, they work from what they know. And I think there’s still lots of space for human innovation and addition there, right?

I don’t think we need to be scared that LLMs are going to replace our ability to come up with new ideas. I think they’ll probably just make creativity within design a little more about using our intuition to discern what’s good and what needs to be developed more, and less about just simply generating variation.

I think part of that’s just how young these systems are, at least in terms of interface. Computers started with text-only interfaces and LLMs have too, and I suspect over time that will become more assisted and scaffolded. But you’re absolutely right: right now, there is a step of even figuring out how to get it to generate ideas that are useful to you.

But once you do that, you’re right. Leapfrogging is a good word. I think what’s important to remember is that although most of us use LLMs through websites or apps, they’re not just another website or app. It’s really a whole other evolution of computing, right? For desktop computers to become usable and helpful for lots of everyday tasks, it took many years of deliberate interface design.

And I think it’s clear the same thing will be true for LLMs. I really think we’re going to have to go through lots of figuring out the right patterns, the right approaches to dealing with these things. And for whatever it’s worth, I think that’s one reason to be cautious about the context that we use them in, right?

I think it’s important to use them in places where we can correct what they suggest and places where we’re confident we can catch their mistakes and learn more about not only how they work, but how we should design interfaces for them. So I’m enthusiastic about how they can move design practice forward and leapfrog some steps in our process.

I’m cautious about how we implement them in society because I do think it took us a long time to figure out the right way to use computers and the same thing will be true of LLMs.

I find it remarkable how quick we are to forget that there’s an enormous process and pipeline of people and data that goes into making these things useful and then taking advantage of what they provide and that we have a skill set for dealing with that.

Service design, as you say, but I think often we’re not even dealing with that level of thinking about LLMs right now. We’re just saying, oh my gosh, how do we even apply them to this problem? And maybe we have some interface-level issues, but these are ultimately service design problems, new and tricky ones.

And I think that’s another call for designers to be involved in how they’re implemented.

[00:16:12] Chicago Camps: With AI’s unpredictability and the difficulty in making its internal workings transparent, how do you reconcile these aspects with the core design principles of predictability and transparency? And are there ethical considerations that designers need to be more aware of now?

[00:16:31] Colin MacArthur: So maybe we’ll take this more in the ethics direction. I think it’s important to note like the bedrock ethical responsibilities of technology people remain the same. Our work should prevent, reduce, and reverse harm in our societies. I think it’s pretty clear. I don’t think we need to reinvent some new theory of ethics.

I think we just have to think about all of the ways that these systems can be involved in causing or repairing harm. On one hand, I think there definitely are some new ways to cause harm, advertently and inadvertently, with LLM-based systems. You don’t have to look very far to find examples of how you can abuse LLMs to produce content in great quantities that spams other people or advances goals in unseemly ways, or how you can try to extract data that was used to train the LLM.

There’s all sorts of tricky business and ways that the tools can compromise people’s privacy or at the very least overwhelm us with garbage that makes our lives harder. I think there are also examples of how LLMs may help us correct some processes and systems, particularly in government, that have been overwhelmed with information and data for a long time, right?

I have talked to legal advocates, for example, who are using LLMs to search documents on behalf of their clients, who are often people disadvantaged in the judicial system. LLMs can extract information from hundreds of pages of documents. If somebody has a fancy lawyer with a big staff, the staff would do this, but now this kind of capability is available to people who are represented by public defenders, right?

And so these kinds of processing abilities can reverse harm too, right? So I think it’s really important to view it as a balanced thing. These LLMs are obviously opening up all sorts of new societal problems and opportunities for harm, and they’re also creating new opportunities for us to reverse bad things.

What I think is hard for people to accept is this predictability problem I talked about earlier. It’s very hard for any company to sit down and guarantee that their LLM will never say or do something offensive, right? And this is what we really want. We want a company to promise us that nobody will ever be able to abuse this system.

And I, on one hand, think that’s an admirable expectation. On the other hand, I think it may not be very possible. And so I think probably what we have to start doing is thinking not just about how we expect these systems to deliver perfect content, but how we shape people’s expectations around the content, how we train people to use them, and how we also start holding people accountable for the outcomes they produce with the systems instead of just whether they can do something bad or not.

One thing that really strikes me is when governments, particularly in Europe, audit LLM based systems, they look a lot at how well the systems are doing whatever task is given to them, like what’s the success percentage rate or accuracy rate. But very rarely do they step back and say, is this system actually accomplishing the goal that we wanted, right?

Making paperwork processes faster without increasing appeals, meeting the bigger outcome-based goal. So what I think designers can bring to this conversation, like we do in all technology-driven conversations, is saying, hold up a minute. If we want to use this thing ethically and responsibly, we have to ask ourselves not just what’s technically good, but also what’s the real outcome and how do we measure ourselves against that? Right?

And so I think that’s the way the ethical use of these things needs to go. Not can we invent some situation where this will do a bad thing, but for this particular task we’re using it for, is it accomplishing the outcome we want?

And is it avoiding outcomes we want to avoid?

[00:20:56] Chicago Camps: Given the significant changes in the design industry brought on by AI, what changes do you believe should be made to design education to prepare the next generation of designers for these new challenges and opportunities?

[00:21:11] Colin MacArthur: Such an important question.

And I have thought a lot about it while preparing my classes here. I’ll say this: the first thing is, I still keep the design process and research process very much like they were 15 or 20 years ago, right? We teach all of the basic skills. And when I initially teach things like ideation or research analysis, I’m still having the students use sticky notes and do stuff the manual way, at least a little.

And that’s because I want them to know what it’s like so they can appreciate what’s getting automated, but also because I do think they learn some of the skills of discernment and filtering through doing the manual work a little bit. I think one way that designers get a sense for which ideas might be useful and which might not is that they have done a lot of manual sorting through things.

I try to give my students a little bit of hands-on experience. After that, we spend a lot more time hands-on with LLMs than I think most other design classes. And the reason we do that is that right now they’re hard to use, right? Right now you have to learn a lot of tricks and intuitions about how to make them work.

And the best way to do that seems to be to spend a lot of time fiddling with them and trying to apply them to their work. And so I think that simply pushing learners to spend time with LLMs on design specific tasks is hugely important. I think everybody develops their own prompting style, their own ways that work for them.

And at least at this sort of immature moment for LLMs, giving that time is hugely important. The third thing I will say, which relates to my answer to the previous question, is I still hold them accountable for outcomes, right? So it’s not a valid excuse in my class that the LLM made this mistake, so I’m not responsible for it, right?

We try to teach students early that you’re using this tool, just like you’d use Photoshop or Figma or whatever, and it helps you, but you’re still responsible for the output. So you better pay attention. And I think that gives them a newfound caution for exactly how and when they integrate this into their work.

So, before I made that explicit, I’d often have people say, well, you said we could use ChatGPT and it gave me all this garbage. And I said, yeah, but you’re still the human. And we live in a human system where we give humans accountability. So there you are. And now they’re often thinking, okay, what’s the real place where I could use the LLM and really trust it, where I need to really double-check it, and where it’s not worth it.

And I think really what you see there is students developing a sense of where it’s a good collaborator for them, given their skill set and their strengths and weaknesses. So I think that’s the general short-term direction design education needs to move in. I think we still need to teach the same basic skills, but with more time to automate them, or at least parts of them, with LLMs.

And still hold students accountable for the outcomes, so they learn when to trust them and when not to. What I’ll just say is, if people don’t understand the basic steps and intentions of the design process, there’s no way they can ask ChatGPT useful things about it. And so I think there’s something to be said for how various parts of design have been democratized and diffused.

I think that there is a real danger in mistaking the fact that ChatGPT can do something like a bit of research analysis for you for meaning that it can do your research, right? Or even that it should be the primary way you teach. So I agree with you. I don’t know if we’re on the winning side of history here, Russ, but I agree with you.

Event Details
The Designer’s Dilemma: Navigating AI's Impact on Design
April 5, 2024
12:00 pm – 1:00 pm