[00:00:32] Chicago Camps: What are the key differences between traditional UX design leadership and AI UX design leadership, especially in terms of the skills and knowledge required?
[00:00:45] Yaddy Arroyo: I want to preface this by saying, let’s establish some baselines, right? What they both have in common is basically you just have to be a cool human.
You have to care about others. You can’t be self-centered. I would almost say, like, servant leadership. You have to really be externally focused. Also, I would say that you can’t have certain traits. So for example, any of the dark triad traits. Like, if you’re Machiavellian or evil or psychopathic, you could kick butt metric-wise, but your people probably won’t like you, because you’re probably trying to push your own agenda without looking at the big picture.
So if we start with that framework, like, solid, just good humans, then we can talk about the subtle differences. So I had to think about this, because I’ve had leaders of both camps: I’ve had good leaders that had the technical hard skills for AI, and good leaders that haven’t, right?
So I’ve had good examples of both. But from talking to people and noticing what I like about certain great AI leaders in the UX space, I’m going to say they all have something in common in terms of hard skills. In order to attract high-caliber talent, and I’m talking high-level ICs, you do have to have AI experience, because high-level ICs, unless they’re the ones that want to be the leads,
They’re not going to work for someone that knows less than them. So that’s what it comes down to. And a lot of that comes down to NLU architecture. If you’re a UX leader that values interaction design and visual design only, you’re not going to be a good AI leader. Because a lot of it is literally abstract thinking.
So if you don’t even have a basic respect for the art of content design, don’t even try. You may do awesome AI-enhanced products, but you’re not going to do an AI product. Because the AI product itself requires two things. One, it’s the data science, right? But two, the other side is the human side of that, which is the whole linguistics and communication part.
I would say that’s the major difference between great UX leaders and great AI UX leaders: a lot of times they’re very similar, except that one has actually done the work and got hands-on. Now, I want to parlay and say that there are a lot of nuances, because there are a lot of people getting budgets now.
They may or may not deserve them, I’m just going to be honest with you. They may be good salespeople, but the same thing we encountered in UX, right? There’s a lot of people in UX that were business people. They had no business doing UX, but they were great salespeople.
You can have the same issues. If you give someone a budget, that doesn’t make them a leader. That just makes them someone that has a budget for AI. But what I’ve noticed is that with AI, because of the way it moves, and specifically something you called out was about like, AI is like the next big thing, right?
The same thing with internet of things, the same thing with the internet. I almost feel like AI is just the latest thing, but the people that I admire that are AI leaders now that I love working with and learning from, they just followed the trends. Like when internet of things was popular, they got into it before everybody else had gotten into it. They’re the first people to try new things.
I would also say something that I love in UX leaders: improvisation, the ability to roll with the flow. In traditional UX, you can get away with being a little bit more rigid, but in AI you can’t, like you have to be able to be open, you have to be self aware, you know what I’m saying, like you have to know what you don’t know, and hire for your gaps.
So it is very possible that you have experience, you’re a great AI leader, you’re a great people manager. You’re like giving and selfless, but you also have to be aware enough to know, okay, what am I missing? And what does the team, and also on top of all that, you have to be able to sell it and get the budgets you need.
So you would need everything that you need, like from a traditional UX perspective. And on top of that, you need to know how to negotiate the right strategy for AI UX. Cause the problem now is that there’s a lot of people that think they know it. They may not have had hands on practice. They may not know who to hire.
The problem would be that if you don’t hire the right person in AI UX, you can actually create worse products faster, right? Because AI is all about speed and scale. So that’s why, to me, it’s super important that people not only have the aptitude, the technical hard skills: literally working in text, being an abstract thinker, giving research and writing their due respect.
But also to have the attitude of openness, almost like the beginner’s mindset. No micromanagers, by the way. I’ve worked with plenty of traditional UX leaders who are micromanagers, and that’s not going to work in AI. You need to handle it so that you tackle the stuff your team can’t do, which is the political stuff: getting the budgets, and creating a safe space to create so your team can actually deliver.
And those are the best people I’ve worked with, the people that are like, here girl, run. Just do your thing. And then they give me the support I need. They’re like, oh, do you need something? We’ll block for you. Those are the best. So I would say that’s the difference. It’s like impromptu to the max, right?
People that can deal with ambiguity. Because, by the way, you can have a whole AI project shut down and then pivot. We saw that with Google Assistant, where they, like, shut it down because of Bard, right? So even big companies have that same issue. So you have to be able to be okay.
All right. And just know that you don’t know. And be okay with that.
Someone told me: not just a shit umbrella, I also have a shovel, right? Like, you have to make way and make tunnels, and not only do that, but cover your people. But yeah, absolutely. And then, oh, the coolest ones are the ones that don’t tell you, the ones that don’t tell you all the stuff that they have to do to make your job easier.
There’s plenty of awesome leaders that I’ve had and specifically AI leaders that I’m learning from that I’m like, Oh, I can never do it. That’s why I’m like, oh man, thank you for helping me and I’m like one day maybe I’ll try to do it. But yeah, it’s intimidating because you see what good leadership feels like.
You realize, you know what? Maybe I don’t have that skillset yet. I need to work on it.
[00:06:22] Chicago Camps: There are ethical implications of AI product design. Can you elaborate on the specific ethical challenges leaders face when overseeing AI projects and how they can navigate those responsibly?
[00:06:34] Yaddy Arroyo: It starts with a diverse team. I’ll just hit it on the head. I think ageism is huge. Ageism and ableism, right? So people forget that like AI has the possibility for improving people’s life. Like just tactically now we have computer vision. We’re able to describe things to blind people. There’s Be My Eyes.
There’s so much technology that people don’t really realize is AI-driven. And it’s actually for accessibility and addresses human factors issues. So my point being, you have to hire people that are diverse, even if it’s not in UX. Hire a blind developer. We had one at a bank I worked at.
So it’s really cool to be able to talk to them, not just about their tactical work, but even just running tests by them: hey, what do you think of this design? What do you think of this prototype? Because that’s what I’m saying about diversity. Not so much just within AI UX departments, but just being around a different group: different ages, different backgrounds, different physical abilities, different cognitive abilities, different types of disabilities, different preferences and lifestyles. The more you expose yourself to different types of people, the less biased the algorithm will become.
Because if you just have one type of person, let’s just say just a bunch of Yaddys, Yaddy and a Russ, right? You’re getting a very biased view of what, basically it’s going to be very centered on what we think and know, right? But we don’t know what we don’t know. So that’s why you hire a bunch of different people and you work with different people, different technical skills, even, right?
We hired a whole bunch of people that didn’t even know AI. And learned it on the job because they had the technical aptitude to do it. It’s a lot of different things that you can do to create that ethical ground. So that’s number one, a diverse environment so that ideas can bloom. Two, I think ethically too, you have to think about user privacy and data.
I work at a bank, so it’s super top of mind. I would say it doesn’t come up as much in my case, because part of our yearly goals are tied to risk. Like, what did you do for X? And actually, because of the whole Silicon Valley Bank thing, all of a sudden banks are even more, we must address the risk. So to us, protecting user data is not that much of a new concept.
We actually try to use the minimal amount of data needed to do something. And I’ve worked at places that it was basically the same thing. And what I’ve noticed is that it’s the places that are regulated that are very good about that because they have to be or they’ll get sued. You have places that maybe are more social media where they’re like, Oh, people don’t care if we use their data.
And honestly, they don’t. I’ve heard it from people. Teenagers are like, I don’t care, they have my data anyway. That’s the response I get from people. But that still doesn’t mean it’s ethical, right? Just because I can get your data, and I conned you into giving me your data, doesn’t mean that it’s a good thing.
So you have to look at it like 3D chess. And then that’s just two of 5,000 dimensions, right? Because then we start getting into the actual AI algorithms, right? And how those are developed. And ethically, if you have a diverse group of people, you can come up with a diverse algorithm that covers different angles and can see different gaps.
But let’s dig even deeper: people stealing the papers that those algorithms were based on. So if you’re building this algorithm on a really awesome concept from a paper, but that paper was stolen from someone? That is also unethical, and it ripples through everything.
I would say ethics is really hard to think about linearly. It’s really just this Rubik’s cube of different things you have to consider and have play together. I would say from a leadership perspective, you have to look at it from a lot of angles, right? Not just from hiring, not just from skillset, not just from environment, but you also have to think through: how did we source the data? How did we get the data?
Stuff that’s outside of UX all of a sudden becomes your concern. So that’s the politically delicate thing: even though you don’t have direct control over certain things that might have ethical implications on your work, you still have to be able to advocate for ethics.
You can look at things big picture and then zoom in when needed. That’s critical, because I think that’s how ethics plays out, because you have to see the big picture in terms of what data comes in, what data comes out, you have to see how the data is captured, all that. And then you have to zoom in tactically about what are we saying, is this accurate? What will this action lead to? If this little thing I’m doing and designing, what will be the ripple effects?
Because what people don’t realize is that if you don’t have metrics in place, and that’s the thing, metrics aren’t only to prove your success, but it’s also to prove any impact. And if you have negative impact, you should be able to measure it and understand what went wrong.
And you know what, no one ever talks about multimodal AI because that’s where that goes into too, right? Like inputs and outputs. And I think that would help with cognitive disabilities as well. Like my son’s special needs. So he’s going to have products that can speak to his cognitive level. So part of me selfishly wants to make sure that we have product designers that think about him.
We keep saying, like, design for the 90%. But if you do inclusive design, you actually design for all people. The example is the sidewalk, right? The dips that you have in the sidewalk were created, I think, for wheelchairs, but now everybody can use them:
people that have mobility issues, people with strollers. It’s an accidental thing that helps everybody. But that’s how I see AI. I see it as a double-edged sword that can help a whole bunch of people faster, or hurt a whole bunch of people faster. And it’s not because of AI.
It’s because of the people creating it. So I think that’s the ethical stance I’m like sitting on. Yeah, we’re the ones creating it. We should be good about creating good products. So it’s a loaded question, but I think that good leaders know how to navigate that. They make it seem easy.
[00:11:48] Chicago Camps: How crucial is it for leaders in AI to have hands on experience with AI product design and development?
[00:11:54] Yaddy Arroyo: Super crucial. But I would say that everything is nuanced. I don’t want to discourage people and say, if you don’t have this, you can’t be a leader. There’s a chance you may not be in charge of developing AI from scratch. You may, like I said, do an AI-enhanced product that leverages AI but isn’t 100 percent AI.
I think it’s super crucial because ICs will want to work with someone that has experience. I worked with my boss because she worked on really cool stuff and I planned on learning from her. That’s why I took the job. I’m like, I wouldn’t have taken it otherwise, because why am I going to move for someone who I can do their job? No, that’s just not fair. I would want someone who could do it better than me so that I could see how they do it and then level me up.
I would say it’s super important. Although, especially if you’re in a mature AI UX space, you may not need that, because you may have had a leader who set up a really awesome AI UX practice, and then they retire, they have to move on.
But the people are there, and if it’s mature enough, you could possibly get someone that doesn’t require that tactical, hands-on AI experience, if you already have that smooth-running machine. That is a nuanced situation, but typically, when I think about all the leaders I’ve had, yeah, they’ve had experience.
Because otherwise I wouldn’t want to work with them. It’s almost like, what do they call it? B players only hire C and D players, because they want to look better. That’s my point, though. If you’re in an organization that has an AI leader that doesn’t have AI experience, and it’s their first time doing it, and they don’t have a second-in-command that has AI experience, you’re in trouble.
Because what I’ve seen is they sometimes have a second- or third-in-command that fills in the gaps, like the technical gaps. So it’s still being addressed. If anything, look at it like a community. If you have a community of AI UX leaders, and between all of them you have some sort of AI experience, you’re fine.
But if you don’t, if everybody’s net new and they just have a budget, then you’re probably in trouble, and you’re probably going to create a product that doesn’t work right. Like you said, it’s very similar to the beginnings of UX, where if you weren’t in UX and you tried to do UX, you weren’t really doing UX, you were just doing something, eh, UX theater. You end up doing the same thing in AI.
So the flip side of that is you have people that just joined, and they’re not experts, they only have a couple of months. Okay. So when I say experience, I mean legit experience. There are people I know that have 20 years of experience in AI, because they were in it before it was called AI, when it was vectors, when it was something else.
And it migrates and it changes names. But it’s almost like, you can tell the BS artists if you’ve been in it. The problem is, if you’re not in it, you don’t know, right? So that’s what we’re encountering, but I leave it up to the leaders to filter each other out.
[00:14:30] Chicago Camps: What advice would you give to companies seeking to build a competent AI UX practice? How should they balance the need for new talent with upskilling their existing workforce?
[00:14:39] Yaddy Arroyo: It’s about improv. It’s about people who don’t have a rigid mindset. It’s about people who are curious and want to learn. It’s a lot easier, and I advocate for IC level, individual contributors to join AI because it’s a lot safer for an IC to come in because they’re not going to have as much of a negative impact as a bad leader.
When it comes to leadership, you have to earn it. You have to earn the title of an AI UX leader. In fact, earn the title of leader in any field you want, but earn it, right? There are people that don’t have to earn it. So, that said, companies should really be more concerned about hiring really good leadership,
and letting them hire the people. Because if you can hire a really good leader in the AI UX space right now, they’ll be able to know which ones are the good people they should hire. And additionally, run them through you; make sure that they’re a good cultural fit and all that. But ideally, good people hire good people, because they want to work with better people and create cool stuff.
But it’s such a complicated matrix, because it’s nuanced, right? If you have a low maturity, like a low AI UX maturity at your company, you’re going to need a leader that not only is really good, but knows how to scale up and has the patience to re-educate people over and over and over.
Shelby mentioned something about like in a rapidly evolving space, you have to teach each other. So someone who’s teachable, a leader has to be teachable. So what I recommend is hire a really good leader who can teach, who is willing to hire, who is willing to get along with people and build relationships, build those bridges.
That earns the trust of all the stakeholders. And additionally, like that leader in addition should hire a diverse group of people. Consider ethics and data privacy issues, safety and all that good stuff. And then on top of that too, look for the people that have institutional knowledge at your company that share the traits of the beginners mind that are curious to learn, because you can definitely level up ICs within an org, but they have to have a pliable mindset.
If you have someone who’s rigid, someone who’s like, this is the way I do UX and I have to do X, Y, and Z? I don’t want to work with them, right? That’s just straight up. People that are really good at going with the flow don’t want to work with the person they have to sell on it. If you find a group of people where you’re all going in the same direction, all going down the same street, it’s going to go a lot faster than having a couple of people blocking it, going in the opposite direction, and blah, blah, blah. Let’s just go ahead in the same direction.
And even if we don’t know where it is, maybe teach each other along the way. Or, if we can’t learn fast enough, trust that maybe we don’t know everything. Because I’m telling you, the biggest lesson I’ve learned is sometimes, like, you know what, I don’t get it. And that’s okay. I’m just going to do what I’m told until I get it.
And guess what? It turns out I worked on some really cool stuff that I didn’t even know was AI, only because they thought I was cool to work with. They’re like, oh yeah, she didn’t give us an obstacle, she just did it, right? I just yes-anded whatever idea they gave me. They’re like, we need to do this.
And I’m like, let me ask a couple questions, make sure I understood it. I need to understand the problem. Once they explained it in a way I got it: yeah, not a problem. I may not understand the technology. And it took me 10 years to realize I was working on computer vision stuff
and visual analytics. I was like, I did that stuff? But because of the way we were structured, I was working as a subcontractor for a government, and they’re not going to give you the whole picture. So you only see the super narrow view, and in hindsight you put the pieces together. But yeah, to your point, I would advise: hire people
that get along with other people and then want to grow other people. And then you’ll find they’ll create their own garden, right? You just have to curate that garden before it starts.
[00:18:01] Chicago Camps: Could you discuss the role of natural language understanding in AI UX design? And why it’s essential for leaders to grasp this aspect when managing AI projects?
[00:18:13] Yaddy Arroyo: Oh, I’m in love with that question. The reason is because a lot of times I just go with the flow. I may not understand stuff right away, and I don’t want to be the jerk that’s like, I don’t get it, explain it to me. So I just pretend to go with the flow, right?
I eventually get it, and then it makes sense to me and everything. NLU was one of those things, right? So I’ve been doing AI for a while, and it never occurred to me that the part that I was actually really good at was the whole prompting aspect, right? So if we’re talking about inputs and outputs of AI, right?
Like you just described GPT earlier, how like you have to type certain things and blah, blah, blah. And it’s good for certain things right now, but not good for others. I feel that a lot of times we don’t know what’s under the hood. So what we just see is the tip of the iceberg. There’s a lot of stuff, especially on the business side, that is super cool that consumers just will never see.
So that’s the type of stuff I worked on, but the prompts weren’t like how we see them today. They were different. They were data. Actually, it was a huge data visualization. That’s the type of AI I worked on, like AI SaaS, where we had to use Gen AI to recreate these artifacts. We didn’t create them. The system did.
That would explain certain things to certain people, especially the C-suites, because they’re all about metrics and all that. And this is the caveat: C-suites are not data scientists. They are humans, so you have to have very simple graphs that explain a very hard concept simply.
NLU played a huge part in it, because it’s the prompting, it’s the coding behind it. It’s the algos behind it, and it was me being able to know that I don’t know, and work with, I’m talking, like, extreme programming type stuff. I would literally sit with a developer.
And we would work on it together. And a lot of it is NLU. It’s Natural Language Understanding. It’s Natural Language Processing. And what that basically is, like, how do users, one, talk? How do they communicate and how do they understand? So if you can understand that, then you know what type of inputs they would need.
So that’s why you have a career in prompt engineering now. Because there are really good people that now understand parameters and how to put them into ChatGPT. And it’s this whole science.
But what if it didn’t have to be that complicated? That’s where I come in, because all the prompts I do are less about, let’s calculate this, blah, blah, blah.
It’s more like, how would a human say it? So when I got my most recent job, the one I have now, what I learned is that NLU is super important. I feel like a dummy that I didn’t realize that before, that I didn’t connect the dots, but I realized that I have a high respect for content design. Because content design is probably the field closest to NLU without it being specifically NLU.
And a lot of the people that we hired that are the best people are just natural writers. Just knowing how to structure a sentence, or how to communicate emotion. High EQ, by the way, that’s part of it, because a good writer has heart.
And by the way, it’s not just writing. Content design is more than writing. But my point is, it’s foundational. Writing is foundational to AI. And people who only hire visual designers or interaction designers don’t get it. And those are the leaders I’m scared of. Because if you’re not hiring someone who understands the mind, you’re not hiring the right person to create your product.
Again, you’re not doing an AI product, you’re doing an AI enhanced product, which is totally different and it’s not the same thing.
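A toy sketch of the NLU idea discussed above: many different human phrasings map to one underlying intent. The intents and phrases here are my own hypothetical examples, not from any product mentioned in the episode, and real NLU systems use trained models rather than keyword matching; this is only meant to show why understanding how people actually talk shapes the design.

```python
# Toy NLU illustration: many phrasings, one intent.
# Hypothetical intents and phrases for a banking assistant.
INTENT_PHRASES = {
    "check_transactions": ["check my transactions", "recent activity", "what did i spend"],
    "report_stolen_card": ["stolen", "lost my card"],
}

def classify(utterance: str) -> str:
    """Return the first intent whose phrase appears in the utterance."""
    text = utterance.lower()
    for intent, phrases in INTENT_PHRASES.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"

print(classify("Hey, can you check my transactions?"))  # check_transactions
print(classify("I think my card was stolen!"))          # report_stolen_card
print(classify("Tell me a joke"))                       # unknown
```

The gap between this sketch and a real system, handling typos, synonyms, and context, is exactly where the linguistics and content design expertise she describes comes in.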
[00:21:11] Chicago Camps: And now a question from our live studio audience. Mellini asks, can you talk about any special UX considerations for digital human design?
[00:21:21] Yaddy Arroyo: Human factors. We keep talking about all this other stuff, but I think human factors are huge, because if you can consider where a human is in the process that they’re doing, then that’s going to make your life easier, right?
For example, if I’m at the bank and there’s a long line, even though it’s a physical thing, the fact that I’m at the bank means they should have technology to know how busy they are. So leverage maybe some sort of kiosk, or some sort of mobile, like, robotic teller. And by the way, they have this technology available.
When I create use cases, and this is huge in AI, I feel like this is the one differentiator between me doing traditional UX and me doing AI UX. When I think of use cases, you also have to insert different frames of mind, right?
For example, at a bank, I want to check my transactions, right? That’s a simple use case. What I’ve learned to do is ask deeper questions. Who is this person? Why are they doing it? Are they doing it because they had their card stolen and they want to make sure there are no fraudulent transactions?
That’s high stakes. That’s gonna be, first of all, a different urgency, a different tone. They’re not gonna want this happy-go-lucky, goofy tone. You’re gonna want seriousness, right? But if you’re just checking your transactions because you want to make sure you have enough money in the bank, that’s another use case.
Or if you just bought something on Amazon and you want to make sure the charge hit. It’s not a high-stakes thing, it’s just following up. That’s different, right? I would say the biggest consideration is: whatever use case you come up with, whatever intent use case you come up with in AI, overlay that with mindset.
What is the user going through and how high stakes is it? Like, how urgent is it? And is it something that can wait or something that needs to get done now? Is it self service or is it not?
So that’s how I interpreted what human design is. But I would say that’s the major difference that I’ve noticed, because I think with regular UX, I’ve never heard anyone think of the mindset of the user, right? Maybe service design. But in AI, it’s huge. We have to overlay all these different mindsets, because the outcome can be different.
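A minimal sketch of the intent-plus-mindset overlay described above. The field names and values are my own hypothetical illustration of the banking examples in the answer, not from any product she describes:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    intent: str   # what the user is trying to do
    mindset: str  # the frame of mind they are in
    urgency: str  # "high" or "low": how high-stakes the moment is

def pick_tone(uc: UseCase) -> str:
    """Same intent, different mindset leads to a different tone."""
    return "serious and reassuring" if uc.urgency == "high" else "light and efficient"

# One intent, three mindsets, drawn from the examples above.
stolen_card = UseCase("check_transactions", "my card was stolen", "high")
balance     = UseCase("check_transactions", "do I have enough money?", "low")
follow_up   = UseCase("check_transactions", "did my Amazon charge hit?", "low")

print(pick_tone(stolen_card))  # serious and reassuring
print(pick_tone(balance))      # light and efficient
```

The point of modeling it as data is that the same intent fans out into different experiences once mindset and urgency are overlaid, which is the differentiator she names between traditional UX use cases and AI UX use cases.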
[00:23:14] Chicago Camps: And another question from our live studio audience. Veena asks, can you share what resources you are using to continue learning about AI for leadership?
[00:23:24] Yaddy Arroyo: I love Veena. Veena is a friend of the show. She’s great, and she’s in our Slack group. I learn from other people, so it’s definitely other people in the industry, people that I work with. A lot of the work I do hands-on. I keep saying I’m a kinesthetic learner, so that’s why I keep doing whatever I do.
I just do it. I experiment, I jump in. Another way to learn is literally just going on LinkedIn and following certain people. There are a lot of people there with a lot of knowledge. There are a lot of really good people. Like, if you know who my boss is, I’m not going to call her out, but she’s someone I would follow. She doesn’t post a lot, but when she does, she posts about the conferences she attends.
A lot of people learn from her. I would say, if you’re in a Slack community, like you are, Veena, in ours, get to know the people in there that are professionals, that have done it and are doing it. Ask questions. There are a lot of free courses out there, even though there are no courses for AI UX yet.
There are a couple of us that are working on it; I think they’ll come soon. But meanwhile, learn about AI in general. Learn about data science. Learn how to splice and dice data. Because what helped me in this field is that I started off in quant research, on the media side, from an agency perspective.
I was a media researcher at an agency. So yeah, that was quant. I kept looking at patterns and numbers. That really helped me with my job, because not only did I understand what that meant, but I also understood deep learning on a different level. If you can understand the technology itself, even if it’s not related to design, that’s going to help you.
And the good thing is, if you’re a program-minded type of person, if you come from a development background, then that’s going to put you in an even better place to learn faster than others. Because Python, if you can learn it, that’s a whole other skill set that not everybody has. I don’t code in Python, but if I did, I’d probably be unstoppable.
I know my niche. I’m not a coder, but if you can learn to code, then do it. It’s only going to help you learn, and it’s going to help you connect with people.
And in the end, AI is there to serve us. As much as I’m afraid of our AI overlords taking over, it really is about: what can AI do for us that we could do for each other? What can we do faster with AI?
This is the worst AI we’ll see, I hope. I think AI is only going to get better, in theory, and that’s if you have good people leading the charge that think about humans first. I would say, to learn, just get to know people and see how they use AI, and also how they work on AI products.