Tent Talks Featuring Christine McGlade: Designing Tomorrow: Ethics and AI

Tent Talks Featuring: Christine McGlade
Christine McGlade
Sessional Lecturer, Digital Futures
OCAD University
Christine McGlade has been a digital media producer, designer, and educator for over 20 years. She is a sessional lecturer in the Digital Futures faculty at OCAD University in Toronto where she teaches topics in user experience design and user research, data visualization, futures thinking, and the ethical issues that arise when we step into design leadership roles in the digital economy.

Join us for an enlightening Tent Talks session with Christine McGlade, a seasoned digital media producer, designer, and educator. Christine brings over two decades of experience in the digital realm, teaching at OCAD University and working as a Senior Partner at Analytical Engine Interactive Inc. This session will dive into the ethical challenges and considerations in AI and digital technology. We’ll explore how future generations can be prepared for these challenges through education in futures thinking and ethical design leadership.

Christine will also discuss the concept of model collapse in AI, addressing its implications and potential solutions. In a unique twist, we’ll hear about the intersection of humor and AI from Christine’s perspective as a standup comedian. This session promises to be a blend of informative insights, practical advice, and engaging storytelling, perfect for anyone interested in the ethical dimensions of AI and digital design.

Session Notes

Session Overview

In this episode of Tent Talks, Christine McGlade, a sessional lecturer in Digital Futures at OCAD University, shares her insights on designing tomorrow with a focus on ethics and AI. Christine frames futures thinking as a design discipline akin to systems thinking, one that demands ongoing engagement with the world in order to anticipate change. She describes how hard it has become to find trusted primary sources as AI-generated content proliferates, a trend that points toward “model collapse.” Christine also delves into the ethical dilemmas designers face when creating AI-driven solutions and the importance of incorporating ethical considerations into the design process. Finally, drawing on her experience as a standup comedian, she suggests that while AI struggles to create humor, humor itself can be a powerful tool for addressing ethical issues in AI.

Approaching Futures Thinking in AI:

  • Futures thinking is likened to a design discipline, stressing the importance of scanning for signals of change.
  • Challenges in finding trusted primary sources due to the proliferation of AI-generated content.
  • The importance of using tools like Perplexity.ai and Google Scholar to access primary sources.

Model Collapse and AI:

  • Model collapse results from an increase in AI-generated training data, leading to a decrease in the quality of AI outputs.
  • Concerns about data pollution and the echoing of mediocrity in AI-generated content.
  • The emergence of artist-developed countermeasures like Nightshade, which protect artists’ work from being used as AI training data.

Ethical Considerations in AI-Driven Design:

  • The need for designers to focus on the process rather than the outcomes when using AI to generate designs.
  • Encouraging students to demonstrate their problem-solving process, emphasizing that the journey is as important as the destination.
  • The limitations of AI in fully capturing the creative and design process, particularly in art and design.

Humor as a Tool in Addressing AI Ethics:

  • AI’s inability to create humor effectively, especially in sensitive or nuanced topics.
  • The potential for humor to address and highlight ethical issues in AI, despite AI’s limitations in understanding or generating humor.

Notable Quotes:

  • “Futures thinking… is helping students to foster… a kind of ongoing engagement with the world.”
  • “It’s actually pretty difficult to find trusted primary sources.”
  • “We’re not getting innovation, right? And that’s the bottom line.”
  • “The outcome is not the thing. The road that you travel to get there, that’s the thing.”

Reference Materials:

  • Jeremy Rifkin, The Empathic Civilization: the book Christine cites on the internet’s early promise to democratize communication and increase empathy. Widely available in print, ebook, and audiobook formats.
  • TechTarget, “Model collapse explained: How synthetic training data breaks AI” (https://www.techtarget.com/whatis/feature/Model-collapse-explained-How-synthetic-training-data-breaks-AI): model collapse is the degradation that occurs when models are trained on a growing share of synthetic (AI-generated) data. The models progressively lose the diversity of the original human-generated distribution and produce increasingly homogeneous or inaccurate outputs, which underscores the importance of diverse, representative, human-generated training data. A toy sketch of this feedback loop follows this list.
  • Nielsen Norman Group publications on working with AI as designers: the Nielsen Norman Group is known for its research and publications on user experience (UX) design, including recent guidance on designing with AI.
  • Nightshade (https://nightshade.cs.uchicago.edu/whatis.html): a tool developed at the University of Chicago that protects artists’ work by transforming images into “poison” samples that disrupt AI model training, deterring the use of unlicensed data and making licensing the more appealing option. Nightshade and its companion tool Glaze play complementary roles: Glaze protects individual artworks from style mimicry, while Nightshade offers a collective defense against unauthorized scraping.
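To make the feedback loop behind model collapse concrete, here is a minimal, hypothetical sketch (not from the article or the session): a toy “model” fits a Gaussian to its training data, and each new generation trains only on samples from the previous generation’s model, with low-probability samples truncated, the way generative systems tend to favor likely outputs.

    import numpy as np

    # Toy, one-dimensional illustration of the model-collapse feedback
    # loop: each "generation" fits a Gaussian to its training data, then
    # the next generation trains only on that model's output, with
    # low-probability samples truncated.
    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=5000)  # stand-in for human data

    for generation in range(1, 9):
        mu, sigma = data.mean(), data.std()      # "train" on current data
        synthetic = rng.normal(mu, sigma, 5000)  # generate the next corpus
        # Keep only the middle 90% of samples, mimicking truncated sampling.
        lo, hi = np.percentile(synthetic, [5, 95])
        data = synthetic[(synthetic >= lo) & (synthetic <= hi)]
        print(f"generation {generation}: training-data std = {data.std():.3f}")

The spread shrinks with every generation: the tails of the original distribution, its rare and surprising values, vanish, a one-dimensional version of the “echoing mediocrity” discussed in the session.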

Session Transcript

[00:00:34] Chicago Camps: How do you approach teaching futures thinking in relation to AI? And what do you think are the key skills or mindsets students need to navigate the future ethical landscape of AI?

[00:00:45] Christine McGlade: Futures thinking, for me, is a design discipline, very similar to systems thinking. And one of the things we do in futures thinking is what we call scanning. So what I’m trying to help students foster as designers, mostly UX designers, is a kind of ongoing engagement with the world where they’re constantly scanning for signals of change.

So think about it: if I’m on a ship with binoculars and I’m scanning the horizon, I want to see the signals of change before they’re so close that I can’t do anything about them. So we’re always scanning for signals of change. And part of that is helping the students to seek out primary sources of research. It’s mostly a research thing.

I’m going to say 10 years ago, Google was the place to start, right? Great place to start. And it’s really not anymore. AI provides us with a lot of opportunities to do some great research.

In particular, I find Perplexity.ai is a much better search tool than Google, given that I can shape my prompt: give me primary cited sources, or give me peer-reviewed papers on… so again, I can help the AI make sure that I get primary sources. But, and I know we’re going to talk about model collapse in a bit, I’m already seeing it devolving into more and more mediocrity, more secondary and tertiary sources. Even though they’re cited by Perplexity, that doesn’t actually mean they’re good sources. They’re content marketing, blog posts probably written by an AI, based on a study that might itself have been written by an AI from a company’s marketing content. So it’s actually pretty difficult to find trusted primary sources.

We’re trying to use Google Scholar as much as possible, but the same thing is starting to happen there as well. And this brings me back to the ethical dilemma: when you’re researching for primary information and you can’t really trust what you’re reading and seeing anymore, I think we really have a problem, right?

If I’m seeing signals of change that suggest the world is flat, that’s not useful. And it goes back to the echo chamber example that you brought up. The internet promised a sort of democratization of communication: it broke the broadcast model and allowed everyone to have a voice.

But in terms of being able to trust what we’re reading and seeing now, I think we’re going backwards. Jeremy Rifkin wrote a book called The Empathic Civilization at the beginning of the internet, in which he applauded this democratization of communication: when we can see what’s happening in a country across the ocean, it increases our empathy. When we can read about things happening to people who are further away from us than shouting distance, it increases our empathy.

But again, we’re going backwards, because now we can’t necessarily trust that what we’re seeing or reading is real: that it was actually written by a human about a real human event, or even that the video we’re watching is real. That’s really problematic when we’re trying to use this beautiful tool, the internet, to get a sense of what’s happening in the world and what might be changing, in order to do that sort of futures thinking.

And in a way that’s always been the case, right? We’ve always had to be wary of truth in advertising; “you can’t believe everything you read” is an expression that predates the internet. But it’s even harder now. So there’s that piece: finding real primary sources and being able to recognize them as such.

But then the other piece that we do in futures thinking, and it’s a design thing, we do it in all of design, UX design as well, is sense-making: trying to make sense of what we’re seeing to get that bigger picture of what is coming, what is on the horizon. So it’s not just about seeing those little pings on the radar and verifying whether they’re real; it’s, taken together, what does this mean?

And again, AI is really good at certain things, say textual analysis: taking a text and telling me this word is used a lot, or this phrase is used a lot, so that’s probably what this text is about. But that’s not what we do when we’re sense-making. When we’re engaging in design, we’re pattern-finding, we’re using abductive reasoning, right?

We’re saying: I see these three things on the page, but what they mean is something that’s not on the page. I’m not deducing it; I’m not counting anything. I’m coming to a conclusion that’s something completely new, right? That’s what we do in design all the time when we sense-make; we’re saying one plus one equals five, right?

The sum we walk away with comes from this sort of pattern-finding capability. I never say never, but it feels like we can’t outsource that to AI. And I think, to your point, when we get AI to do all of these things for us, we’re outsourcing some things that maybe we shouldn’t be. And what we’re getting is mediocrity.

We’re not getting innovation, right? And that’s the bottom line. When I’m teaching designers systems thinking and futures thinking, it’s so that they can innovate, so that they can go out into the world and solve real problems in a new and meaningful way, not so that they can devolve into mediocrity.

It’s an understatement to say that we live in a situation of incredible information overload. So there’s no harm in checking: okay, am I thinking of everything? Do I have everything? As a research tool, I think AI can be very powerful. But we can’t outsource the thing that makes us, I think, most human, which is the sense-making piece.
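As a throwaway contrast (not from the session), the “counting” kind of textual analysis Christine describes is trivially easy to automate, which is exactly why it is not sense-making:

    from collections import Counter
    import re

    text = "Design is not just what it looks like. Design is how it works."
    words = re.findall(r"[a-z']+", text.lower())
    # Counting tells you which words recur; it says nothing about meaning,
    # and nothing at all about what isn't on the page.
    print(Counter(words).most_common(3))  # [('design', 2), ('is', 2), ('it', 2)]

The counts are accurate and entirely beside the point; the gap between frequency analysis and abductive pattern-finding is the part that resists outsourcing.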

If you think about creativity, you might imagine that there’s a lot of research involved and a lot of practice, right? It’s 99 percent perspiration and 1 percent inspiration. But Jason Theodor has a great theory about creativity where he says it’s 90 percent practice.

Just practice, and you don’t want to outsource that to AI; that’s learning, having skills, being able to do things. What is life, if not that? Then there’s the 9 or 10 percent that’s about making connections, like research, and AI is actually great for that; there’s a lot of support there. But then there’s that 1 percent deviation, and I haven’t seen it yet.

I’m seeing a lot of mediocrity, a lot of mediumness, a lot of averageness in what we can generate with AI, whether that be image, video, or text.

[00:08:47] Chicago Camps: Could you share a bit about “Model Collapse” and how significant do you think the threat of model collapse is to the future of AI?

[00:08:54] Christine McGlade: It could be a three hour answer.

So model collapse: again, I’m not an engineer over here, but I read about it in the TechTarget article (https://www.techtarget.com/whatis/feature/Model-collapse-explained-How-synthetic-training-data-breaks-AI), and it’s essentially what happens when we have a decrease in human-generated training data and an increase in AI-generated training data. And this is happening so quickly. I think we’re already seeing it: a lot of the results I’m getting now from AI feel more mediocre than they might have been a while ago, because it’s an echo chamber.

It’s AI echoing AI. And again, I’m going to quote from the article, just because I’m not an engineer, but it says that “without human generated training data, AI systems malfunction.” This is a problem if the internet becomes flooded with AI-generated content. It’s essentially garbage in, garbage out.

It’s what we would call data pollution. And it’s important because more and more of our online communications are being generated using AI tools. It goes back to what I was saying: even now, when I search in Perplexity, I’ll ask it to act as an expert in service design and give me the three best cited sources for blah, blah, blah.

And I’m getting really mediocre results: posts potentially generated by an AI, based on studies that might themselves have been largely generated by AI, not by an academic institution but by companies who are, of course, using AI more and more to generate all of this marketing and SEO content.

And I think it’s significant for a couple of reasons. First, because it’s increasing the mediocrity of the results we’re getting, making them much less inspiring or useful. And it’s not necessarily going to get better, because artists are fighting back as well. It’s not as if, at this point, we can go back to visual artists and say, could you just start posting more of your work so we can use it again? Artists are fighting back. I don’t know if you’re aware of it, but there’s a sort of watermarking, a snippet of code, called Nightshade (https://nightshade.cs.uchicago.edu/whatis.html).

Nightshade is really interesting. It was developed at the University of Chicago, I’m pretty sure. It’s super interesting because it’s a way for artists not just to watermark their work so it can’t be used by AI; it’s like poison.

It’s a code-based poison for the AI, so that it can’t steal artists’ work and use it as training data. So we have an inflection point, a bunch of different things happening at once. And because it’s all proceeding so quickly, I think the model collapse issue goes to the fact that we, and by “we” I mean those few CEOs, they can’t just steal stuff. They have been just stealing stuff, in exchange for the modicum of convenience we get from being able to use Instagram or Amazon or Facebook.

We’ve given up a lot. And don’t get me started: now we have to pay for the AI that has come out of all of this data we’ve given away for free. Of course, the tendency is going to be to use AI as much as possible to make things easier. But everyone is shortchanged in that scenario, because they’re not necessarily learning the important piece, which is the process, right? The part where you try and fail.
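Nightshade’s actual mechanism perturbs image pixels so that models trained on scraped copies mis-learn visual concepts; the details are at the UChicago link above. As a much cruder, purely hypothetical stand-in for data poisoning in general, the sketch below flips a fraction of training labels and measures how a simple classifier degrades:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Crude stand-in for data poisoning (Nightshade itself perturbs image
    # pixels, not labels): flip a fraction of training labels and measure
    # how a simple classifier degrades.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    for poison_rate in (0.0, 0.1, 0.3, 0.5):
        y_poisoned = y_tr.copy()
        idx = rng.choice(len(y_tr), size=int(poison_rate * len(y_tr)), replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # the "poisoned" labels
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
        print(f"{poison_rate:.0%} poisoned -> test accuracy: {model.score(X_te, y_te):.2f}")

The exact numbers don’t matter; the point is that past a modest poison rate the model stops being useful, which is the deterrent effect Nightshade aims to create for unlicensed scraping.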

[00:12:59] Chicago Camps: As someone who teaches about ethical issues in design leadership roles, what are the challenges designers face when creating AI-driven solutions and how can they incorporate ethical considerations more effectively?

[00:13:12] Christine McGlade: In terms of creating AI-driven solutions, I’m not sure where to go with that. Recently there has been a lot published by the Nielsen Norman Group, for example, on working with AI as designers and working with those AI-driven solutions.

If an AI can create an image, an AI can create an interface, right? For us as UX designers, there are a couple of AIs that will generate wireframes and mockups for you. Again, they’re very mediocre; the process isn’t there. So I guess I continue to try to encourage students to focus on the process.

I don’t know if anyone listening right now is old enough to remember a time when you weren’t allowed to use calculators in math class. I do remember that time, and I remember when calculators first appeared and we hadn’t yet decided that it was okay to use them. The thing the teacher would always say is: you have to show your work, right?

You have to show your work. And for me, in terms of teaching design, that’s where I always go: show your work. I need to see your process, the process of problem-solving that got you to the solution, because being able to open up an AI and have it generate an interface, that’s not UX design.

Being able to open up Midjourney and have it generate a picture, that’s not art. There are so many places I could go with that. One of them, and I think it’s a huge limitation of AI, especially when we start to talk about art or design, is that it’s entirely language based. We need to be able to articulate in words what we want to see in order to get something out of it, and there are lots of things you can’t articulate in words, especially when it comes to art. You’ve probably looked at some of your daughter’s work, and it might say one thing to you. But if you were to ask her to describe it, what is that about? What are you getting at with that?

It might be very surprising, because the connection between how she might describe her work and what we’re seeing in the work is so circuitous, right? The artistic, the creative, the design process isn’t a straight line that we can easily articulate in language. So I’m always encouraging students: let me see your process.

How did you get there? The outcome is not the thing. The road that you travel to get there, that’s the thing. That’s the work.

It’s troubling on a number of levels, for sure, which is not to take away from the potential. I don’t think anyone is saying we’ve got to put that horse back in the barn or get that toothpaste back in the tube. But it’s never too late for us to say we need more voices at this table.

I don’t want anyone to have the impression that I’m that person who says “putting words in books is going to kill oral storytelling,” right? Or “we don’t need video, that’s going to kill radio.” It’s not about that. Generally, every new technology is additive; it just adds. But this is feeling not so additive.

It feels like it’s taking something away from us that we should want to hang on to: that abductive thinking, that creative problem-solving. And I often wonder, if we were to really get to this place of general AI and we were to ask the AI to solve the environment problem, what would the AI get rid of?

It might be us; it might turn us into paperclips. And there’s so much research out there, too, about the amount of carbon produced by one ChatGPT query compared to one search query. The internet has never been clean tech. We like to think of it as being in the cloud, but it’s quite a drain on the environment.

So I don’t know, maybe it would cancel itself.

[00:17:22] Chicago Camps: You’ve been a bit of a standup comedian in a previous life. Have you observed any intersections between humor and AI, and how can humor be used as a tool to address or highlight ethical issues in AI?

[00:17:37] Christine McGlade: I think humor can be used to address any issues. I think it’s a really great tool.

I think right now, if there’s one thing AI can’t do, it’s write humor. I did a little experiment: I asked ChatGPT to write a three-minute standup routine about menopause, and I can’t even call the results jokes.

So then I asked it to write a two-minute standup routine about AI, and it’s interesting. One thing we say in humor and standup is that you shouldn’t punch down, and there’s a little punching down happening. I’m just going to read you some of these; you tell me if this is funny, or if maybe the AI is punching down on us.

One of the jokes it wrote was:

Let’s talk about being an AI.

“It’s a tough gig. Humans always complain about their jobs, but at least they get to take breaks. Me? I’m stuck in this never ending loop of data processing, trying to make sense of humanity’s endless memes and cat videos.”

Maybe it’s resenting us a little bit. And another joke that the AI wrote:

“Being an AI may have its ups and downs, but hey, at least I’m not stuck in a dead end job flipping burgers.”

Why are we letting the AI take the really good jobs? Why are we trying to make sure we steer the robots into the jobs we actually don’t want to do, like flipping the burgers? I don’t know. It goes right back to bias; it goes all the way back to the beginning there, for sure.

I’m pretty sure AI can’t yet do satire, so we still have that to hang on to. We’ve got to use the tools we have in our toolkit. And the one thing about comedy is surprise; that deviation is, I think, at the base of what makes comedy funny.

And once we have AI that can do that, that may be our paperclip moment. It’s interesting, too: there are so many things about the internet, about this crazy technologized world we live in, that we’ve really embraced and can make fun of, make fun of ourselves about.

But we haven’t really started to make fun of AI in the same way, because I think it is a little bit threatening. We make lots of jokes about memes and even fake news, jokes about spam and spammers and phishing; that’s gotten into the popular culture as fun, satirical material we can work with. But yeah, maybe the whole AI thing is…

It’s a little, it’s a little threatening, I think, a little bit threatening.

 

Event Details
Designing Tomorrow: Ethics and AI with Christine McGlade
Friday, February 23, 2024, 12:00 pm to 1:00 pm Central
Free

 
