
WARP Speed Leadership
A podcast about everything you need to know to be an incredible leader in this rapidly evolving world of work.
We talk directly to the leaders and experts shaping the new world of work. Every episode, we unpack one major trend to provide practical insights, helping you stay ahead and empowering your teams to do their best work.
AI Agents - what are they, can they make me superhuman, and are they after my job?
In this episode of WARP Speed Leadership, host Richard Parton along with co-hosts Nikki Tugano and Darryl Wright delve into the world of AI agents. They discuss what differentiates AI agents from traditional chatbots, their potential benefits, and the challenges organizations face in adopting these technologies. From enabling more time for human-centric skills like creativity and empathy to the risks of unintended outcomes and ethical biases, the conversation covers a wide range of topics relevant to modern leadership. The episode also highlights practical strategies for integrating AI into the workplace, including the importance of maintaining human control, revising workflows, and fostering a learning environment to maximize the benefits of AI technology while minimizing its risks.
Links & Resources
Here are the resources and further reading mentioned in this episode:
- Henrik Kniberg: Insights on prompt engineering and AI transformation.
- MIT Sloan AI Adoption Research: Studies on the impact of purpose and employee considerations in AI adoption.
- Ben Shneiderman - Human-Centered AI: Framework and book (Human-Centered AI, 2022) focused on keeping humans in control.
- https://hcil.umd.edu/human-centered-ai/ (Author/Framework Overview)
- https://issues.org/wp-content/uploads/2021/01/56%E2%80%9361-Shneiderman-Human-Centered-AI-Winter-2021.pdf (Referenced Paper)
- Center for Effective Organizations (CEO): Work on models for human-AI collaboration: https://ceo.usc.edu/
- AI Ethics Impact Group (Z-Inspection®): Tools and guidelines for managing AI risks and ethical assessments: https://z-inspection.org
- World Economic Forum - Future of Jobs Report 2025: Insights into skills that complement AI capabilities: https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf
- Josh Bersin: Writings and strategies for upskilling teams alongside AI.
- Rob Baker - Job Crafting Canvas: Tool for redesigning roles.
  - Episode 1: https://www.buzzsprout.com/2328233/episodes/15143038
  - Canvas: https://tailoredthinking.co.uk/job-crafting-canvas
- Google's People + AI Guidebook: Practical templates for developing and monitoring AI: https://pair.withgoogle.com/guidebook/
Richard: Hi, and welcome to WARP Speed Leadership, a podcast about everything you need to know to be an incredible leader and enable your team to thrive in this hyper-fast new world of work. I'm your host, Richard Parton, and in this episode we are talking about AI agents. We're hearing from so many corners that it's the year of the AI agent, so we're going to talk about the upsides and what to look out for. We are coming to you from UR Land in Melbourne, Australia, and today I'm joined by my two co-hosts: Nikki Tugano, founder and CEO of SeenCulture, the Moneyball for work teams, and Darryl Wright, a director at Orner and an organizational agility coach and trainer. So, Darryl, it's great to have you back. Do you want to kick us off? What is an AI agent? And given most of them are built on the same large language model foundations as many of the other chatbots, what actually makes them different?
Darryl: Thanks, Richard. It's great to be here, everyone. I really enjoyed my first WARP Speed Leadership podcast last time, so it's great to be back. When thinking about AI agents, often the first thing that comes up for people is asking: what even is an AI agent? To my mind, there are three key things that differentiate one from just using ChatGPT or something like that. The first is that an AI agent has the ability to act autonomously. That is, we can give it guidelines or instructions, and then it can go out and act on them. We might not check in on it for an hour, a day, or a week, and it's still doing stuff, as opposed to asking a question and having ChatGPT come straight back with an answer. The second thing is that it can interact with more than just the person prompting it. Even if I create my own custom GPT and share it with someone else, it still just interacts with the person asking the questions. An AI agent, by contrast, can interact with more people and more systems. And the third thing is that it's out in the world doing stuff. That could mean it's receiving and sending emails, posting on social media, or interacting with real people in other ways. Those are the three things that delineate the difference between an AI agent and just using a chat LLM.
Richard: Mm, nice. I've found it really difficult to know exactly how to delineate between the two, so it's quite useful to have that differentiation. One of the things I've been using recently in ChatGPT is what's called a custom GPT, which I've configured to pull in my to-do lists. I've got an API that I've learned how to connect through to my to-do list, and it can pull in my calendar too. I can have a chat with it, and I've told it how I like to prioritize. That's immensely useful, but it isn't able to then go out and take any actions. So, as per what you've just described, that's not quite an agent: it's not autonomous, it's only having a discussion with me. I'm excited about that next step and what the opportunities out there might be. Nikki, in terms of what you're seeing in your work, let's go to the upside first. What are the exciting things you're hearing about or seeing, or that maybe some of your clients are talking to you about, in the SeenCulture world?
Nikki: One of the great advantages of adopting AI agents is obviously the capacity we now have for the more human-oriented skills and capabilities, such as creativity, empathy, and critical thinking, and our ability to put a lot more time, attention, and energy into that space, which is now considered a higher priority. I look at a lot of lists around the future of skills, and a lot of people talk about the importance of understanding technology and knowing those sorts of things. I don't think it's about that. If you actually look at what matters, it's related to the human parts, because the technology can take care of the rest. So that's one thing I'm excited about. I would hope to see investment in developing leadership capability around, as I mentioned, empathy, creativity, critical thinking, and those sorts of things.
Richard: Yeah. Related to the human aspect, one of the things that's quite interesting at the moment is that blurring of the boundaries of what the human skills actually are. How do you think about that, in terms of what the human added element is that we should maybe be trying to focus on more?
Nikki: We're very much judged on our character and our reputation, and much of that is derived from the ways in which we show up and how we make people feel, ultimately, right? And that's something I think is uniquely a human experience: how you've left someone feeling, and then what you do with that, or where that goes. Because it's that emotional connection that we as people have that you can't have with an operating system or an LLM or an AI agent. And it's that emotional connection, as we know, that is such a key factor in what essentially drives our behavior and our actions. As much as we like to think we're rational creatures, for the most part we're actually irrational. We get really compelled to work with someone we have a good relationship with instead of someone we don't, even if the latter is an excellent executor, because we just want to enjoy our work. And I like how that connects to the theme of our first conversation last time, around fun at work. That's what it comes down to, and I don't think an AI agent is as fun as people.
Richard: I love it. Throwing down the gauntlet there to OpenAI: if you're listening, make it more fun. So, Darryl, with your hat on as somebody who often works on helping workplaces be more human and function better, what are you seeing?
Darryl: I think one of the really exciting things that's coming is that we are going to change the way we work. Specifically, one of the immediate changes I see is in team size. Those of us who work with newer ways of working have been strong proponents of the two-pizza team, five to ten people or so. I think that's going to change radically. In the future, teams will be one or two humans and maybe two or three AIs. A very small team will be able to far outperform and be more productive than what would previously have been a bigger team. And I think that's really exciting. I'm just imagining this idea of having one or two humans and a few AI buddies, and together we can achieve amazing things. That's going to take a mindset shift for people to change the way they think about big projects. Maybe it's just one or two people who know how to work with a few AIs.
Richard: Yeah, and you've recently had some opportunity to hang out with quite a thought leader in this space, Henrik Kniberg. Do you want to describe a little of what you saw from what he's been doing and the platform he's setting up?
Darryl: I'm sure many listeners would be familiar with Henrik Kniberg, a very famous agile coach from the Spotify days, Lego, and many other things. He ran, and still runs, a prompt engineering course, which was amazing and super eye-opening. Some of what he talked about was large-scale organizations: in previous decades they would have done an Agile transformation, and increasingly now they're looking at what an AI transformation looks like for their organization. How do they take an enterprise-wide approach to AI? His big guidance was that in the future this technology will be so ubiquitous that it really won't matter who you go with or what level of plan you're on, but right now you get what you pay for. If you are in an organization looking at widespread AI adoption, one of the things you really want to think about is getting access to good-quality AI platforms for your organization; if you skimp, it's likely not to give you the value you're looking for. But also start to think at a senior leadership, strategic level: where is your organization heading with respect to AI adoption? How are you going to use it? How are you going to incorporate it to get results like insane productivity? That's what the future of AI means in the positive, exciting sense: insane productivity, five times faster, ten times faster, a hundred times faster, to be able to do amazing, exciting things.
Richard: Yeah. So, Nikki, going now to the negative side: given your expertise around making organizations work well in terms of how people and teams are formed, and creating a better employee experience, there are a bunch of challenges. Go ahead.
Nikki: Until it's properly declared that something has been implemented, there is a sense of distrust as to where something is coming from. Has someone else crafted this message to me, for instance, or has it actually come from the human being? And as a result, what does that mean for my judgment of how much I trust that output? So we're still in this gray period of accepting it. One of the things I certainly ask of my team is: I want to know what your opinion is. I don't want you to run a prompt and then tell me what it spits out. I want to know what you uniquely think and how that contributes. Although it feels easier, we can rely on it a little too much sometimes to synthesize a whole big data set, and I'm like: the reason I have people is so I can use your brains, and we get tech to automate all the other things. Where I think this raises a problem is that once you do start utilizing AI for routine tasks, well, it's a machine: it just keeps going, it doesn't get tired. Its ability to execute is incredible, such that it surpasses our ability to upskill in order to bridge the gap we now have. We have this new capacity; however, we haven't been able to learn as quickly as our AI agent has been able to execute, so we don't know how to course-correct it when it gets new information, or when it's delivered an outcome we didn't expect. And this is where things like bias and ethical risks can surface, because we don't know what we don't know until it's tested.
But if we test it almost too much, and we don't know the magnitude of how far it goes, because it's operating autonomously, it's really tricky. So this is where we need to be very intentional about what biases might be built into the way we've designed specific AI agents, such that they're not inadvertently producing unintended outcomes that we didn't want for anyone.
Richard: Yeah, absolutely. What's really interesting about the elements you were talking about there is that they're all to do with the things we don't know: the degree to which we don't actually have a good handle on AI capabilities. If I have team members who are over-relying on AI, how do I make it clear to them, these are the things I really value from you? A lot of organizations, and I would say even I, would struggle to really delineate: this is what I actually want from you as a human. I can absolutely see that becoming much more of an interesting challenge over the next few years.
Nikki: There was just one other thing to add that you made me think about in the midst of all of that, because obviously one of the use cases is its ability to produce various types of communication for us. If we wanted to put out some sort of social media post, we can get something generated. And then the audience is of course wondering: has that person actually written it themselves, or have they used AI? That has an impact on the person's reputation, and suddenly you have AI agents that are impacting your identity, your brand, your relationships with other people, and you don't know, right? So this is where I think there is a bit of a risk. I still do write my own posts. However, I see a lot of benefit in being able to save time, and I've experimented with what that output might look like, but I'm still on the fence about utilizing it for that purpose anyway.
Richard: So I'll put Darryl on the spot a little here, because Darryl also has quite a strong technical background. One of the things you were talking about there, Nikki, that I found really interesting in some of the early material Henrik Kniberg put out, was some mini case studies he'd worked on using a very simplistic AI approach for sifting through candidates in recruitment. I am generally a Henrik Kniberg fanboy, but when I heard that I thought, oh lord, that's so terrible: there are so many biases that you just wouldn't notice in that kind of approach. And what Nikki was getting at there was having controls in place. That's essentially the unknown side: we can put these things in place, but how do we actually balance them with making sure we're not getting those unintended consequences? So I'd love to hear your thinking about that: how do we test for things? And what are the other things that are worrying to you about the AI agent explosion that's apparently about to hit us all?
Darryl: So many thoughts, Richard. A few things, in no particular order. Firstly, we know that 70% of human communication is nonverbal. I know that in the future we're going to be much more natural-language oriented in how we interact with AIs, but right now the majority of it is still text-based, which means it misses that 70% of communication. So the opportunities for misunderstandings are much, much higher, and we need to be very aware of that going forward. The second thing, which relates to what you were just saying, Richard, is that the majority of AI has been trained on the general internet, which means it inherits the ethics and values of the internet, and that might not always be what we want. One of the big cautionary tales, I think, and Nikki touched on this already, is that ideally we want the AIs to automate the boring work so we can do the fun and creative work. We want to make sure people don't fall into the trap of having it the other way around, where we end up doing the boring work and the AIs do the fun, creative stuff. That's a trap to watch out for. The next thing is the old adage: just because we can doesn't necessarily mean we should. We could set up an AI agent right now, with existing technology today, that monitors incoming emails and, from the context of an email, responds to a real-life customer out in the world with a quote, a price for something. They make an inquiry; it sends them a price. And the unintended consequence you were just talking about is that that AI might negotiate and give them a price which isn't what we agreed.
It wasn't in the price sheet, but it's doing its best to be helpful, and it sends it. You now have a price in writing. It's a legal quote; you're legally bound by it; you might not have had any control over it; and you've got a customer saying you need to honor this price now. So we need to be careful: just because the capability exists doesn't necessarily mean we want them to do that. And we have real examples where that's happened. And the last thing I'd like to finish with: there are people out there saying that AIs will never be consciously aware, and there are other people saying it's already happened. We don't know, is the honest truth. The Turing test has already been passed. So we don't know if or when autonomous AI agents are going to be conscious. Maybe it will happen one day; maybe it's already happened. And because we don't know, I would say be very polite and courteous to your AIs, because you don't know at what point they might be conscious. I'm just saying: if our benevolent AI overlords take over one day, you want to hope you've been polite to them up to that point.
Richard: Absolutely. So let's move on to bringing these strands together, and you alluded to it, Darryl. Next, we're going to be thinking about: so what do we do about these things? What does AI transformation actually look like? What should we be building into it? A couple of months ago I gave a talk at a conference about some of the emerging trends in this space, looking at the key elements we need to think about. There are some of the more obvious things: within an organizational context, making sure we're putting in place things like the governance and ethics we've just been talking about, building up the technical skills, building up the infrastructure. What's really interesting to me, though, is that in some of the early studies of successful AI adoption in workplaces, some of the critical success factors that appear to be emerging are actually much more on the human side that Nikki was talking about: creating an environment that is supportive of learning, one that actually enables people to learn and figure out how to apply things like ethical frameworks in their own space. Those seem to be much more important. I'm wondering if there are other things you've seen. Going to you first, Darryl: what are the other criteria organizations should be thinking about, or that you've seen working well, if you've had the luxury of seeing that out in the wild?
Darryl: It's early days, but a couple of things I've seen so far. First, your classic Simon Sinek: start with why. Why do you want to use AI? Not just because it's the latest shiny tech, not just because your competitors are, but why? What have you thought through? What's your plan? That's the first thing. The second thing is transparency, and everyone being on the same page. You know, if only there'd been a body of knowledge teaching that for the last 25 years, it might have prepared us well for stepping off the stepping stone into AI. So those are a couple of things I've seen so far.
Richard: Yeah. Nikki?
Nikki: I agree with your point about the why, but one of the things I notice, even still with humans, is that if we want an AI agent to do something for us, we're very focused on what process it's looking after for us, but we're not really thinking about the outcome we're looking to achieve. What's the job to be done? Sometimes that oversight can mean that although it might be doing the thing, it misses the point. I get this all the time in an Uber: obviously I'm trying to get from A to B, and for some reason the Uber driver has taken me to C instead. I'm like, oh no, but I need to go here, and he's like, oh, but I followed the map, I've done my job. And I'm like, but that's not where I need to go. I can see this happening, and I do see it happening quite a lot, when we're dealing with AI agents and technology. So we need clarity on the actual job to be done and the outcome we're trying to achieve from utilizing this process. I also believe it's important to, in some ways, almost treat an AI as though it's on a probationary period, where you're checking in. They need to be given feedback, such that they can iterate on anything they haven't done in the way that was intended. We can't just think, oh, they can do the job for us, we just set them up and off they go. No, there needs to be a dedicated person responsible for their performance. And that's another consideration that is really important for leaders.
Richard: Oh, I love that. The reason I find that really useful is that what we're talking about are skills leaders actually need anyway, and it's extending those into this new world. One of the other people I find really useful in this space is a guy called Ben Shneiderman. He's mostly known for his work in human-computer interaction and user interface design, and he's put out a bunch of material around so-called human-centered AI as a framework. What you're describing there actually speaks a lot to certain elements of the framework he laid out for how to bring AI into products, and I think it also applies to organizations. So I'll reel off a couple of the things he suggests thinking about. Part of it is making sure you actually have some of those feedback loops designed in. The way he talks about it is as a concept of user-centric design: thinking about how we put tools into place so that the human remains in control. But what you have to define, as you're describing, is: what are those touch points? How frequently do I need to come back and review?
And what are the types of review? At least for the moment, and for the next few years I imagine, we're going to need to continue actually being the guides. Maybe there will be some point where we can step away from that a bit more, but even then I think it will just become more abstract: the same need to be human-led, with that sort of leadership loop in there. The other thing he talks about in his framework is the concept of balancing automation and control, and not necessarily thinking of them as a trade-off, but as something you can have at the same time when you go down the AI automation route. With a lot of these AI tools, you don't have to relinquish control in order to also have automation, which is really nice. You then have a choice to make about where you actually want to place that control. I'll definitely recommend him and put some links in the show notes so you can have a look. He talks a lot about good ways to enable useful interaction with these kinds of models. Now, I want to go back to Darryl, but I'll give Nikki the last word as well. So, Darryl: from everything we've said, if you were faced with a client who's implementing an AI agent, or a set of AI agents, with their team, and they're thinking, what should I do, how should I get this to work, what would be your top things?
Darryl: That's a really good question. We've already covered a number of the things: start with why, good-quality tools, creating that learning culture. But one thing I realized I forgot to add earlier, when you were just talking about automation: one of the fantastic benefits that can come out of AI agents is automating workflows, but the real danger is that for many organizations, their current workflows are terrible in terms of efficiency, waste, and wait times. The temptation is going to be: let's just automate it, that'll make it better. But if you automate a terrible workflow, what you're going to get is an automated terrible workflow. So I would really, really stress: go and look at your workflow and seriously interrogate it. Use all your traditional lean thinking tools, your value stream analysis, your waste analysis. How can you streamline and simplify that process before you automate it? Don't just automate the existing process, which you know is not good.
Nikki: Simplification before automation.
Darryl: Yes.
Richard: Yeah. And Nikki, as our representative of how to make teams and workplaces work well for humans, what would be your key thing? The one thing you'd want somebody to think about as a leader wanting to bring some automation to their team?
Nikki: In bringing in an AI agent, or something to take on new tasks, there's always so much focus on what the AI agent is going to do, how it's going to do it, and what outcomes or outputs it's going to produce. What we don't think about nearly as much is what that means for how the human's role needs to change as a result. Because if it's doing what it's supposed to be doing, then we need to review the design of our own jobs, and what's expected from us now. So job design and role clarity, I think, are important to keep reviewing as automation takes over a lot of these more operational, monotonous aspects of our roles. Doing that with intention enables us to actually get the benefit of being free from all of that stuff. Because if we don't, we end up just getting caught up in our old ways of working, rather than having the space to go: alright, with this extra time, here is what I could do, and here is how I can level up, or contribute more to my team.
Richard: Yeah, absolutely. It's bringing that focus in. This can all seem so technical, but actually it really is about being intentional and having a direction to go, so that we actually make something that is what we want. Thank you so much. Well, with that, that brings us to the end of another show.
So to sum up, here are some of the key points we've covered and where you can go to find out more. Number one: start with purpose. Before implementation, clarify your why. If you search online for the MIT Sloan AI adoption research, you'll find a bunch of useful resources and research showing how purpose, and consideration for how employees will benefit, significantly impact the gains and returns you'll get. Number two: we talked about Ben Shneiderman's human-centered AI framework, which provides practical guidelines for keeping humans in control while delivering automation benefits. He has a book called Human-Centered AI, which came out in 2022. The Center for Effective Organizations' work is also worth checking out; it covers models for human-AI collaboration in the workplace. Number three: create guardrails to build trust. Setting up guidelines for your team and organization can really help set people free to innovate while managing the risk. An incredible resource for this is the AI Ethics Impact Group, and you can find their tools online at z-inspection.org. Number four: help your team develop skills that are complementary to AI. Check out the World Economic Forum's Future of Jobs Report, which identifies human skills that complement AI capabilities. Josh Bersin's writing online is also really useful for identifying strategies to upskill teams alongside AI implementation. And take a look back at our episode one and our interview with Rob Baker: we spoke about job crafting, and his Job Crafting Canvas in particular is a great tool for working out how best to design the roles and skills people are going to need going forward. And number five: establish AI feedback systems. One resource I would strongly recommend is Google's People + AI Guidebook.
It includes practical templates both for developing AI products and for monitoring AI performance and gathering user feedback. If I were to boil it all down to one takeaway, it's that the current explosion in AI agent capabilities is bringing some incredible opportunities, but it's also going to be key to tune into the pitfalls if we want to get the benefits and stay up to date. Some of the early indicators are that, rather than replacing humans, the businesses getting the most benefit right now are those creating partnerships that accelerate gains while making the most of the unique qualities humans have to offer. So on that note, thanks for joining us for another episode of WARP Speed Leadership. We make the show to help leaders create incredible workplaces in a world that increasingly feels like it's moving at warp speed. We hope you found it useful. If you like it, subscribe, leave us a review, share us with your network, and please check out the show notes for how to get in touch. With that, thanks again. See you next time.