
AI Adoption Problems Are No Longer A Tech Issue – They’re A Culture Issue


by Jared Navarre, CEO – Keyni Consulting & Onnix

AI doesn’t live in a data center. But most companies treat it like it does, which means they see AI adoption problems as technology problems.

If that’s your perspective, you’re probably looking to the wrong people and the wrong processes to ensure smooth AI adoption, integration, and engagement. You’re also probably not getting maximum value from your AI investments.

Unlike most tech tools, AI isn’t simply an API you tap into occasionally to process a sale or a platform running passively in the background. Once AI is deployed, it quickly becomes part of many everyday business decisions. Companies lean on it in an interactive and personalized way for hiring, pricing, messaging, approvals, communications, and more.

AI’s success rests on a company’s culture more than on its technology stack. Consequently, you won’t get the full benefits of AI if you don’t approach its adoption as a culture issue. The following are some key steps you’ll need to take as you shift to this approach.

Craft a culture willing and able to hold AI accountable

With most tech tools, the key to maximizing their impact is keeping them operational. If the CRM goes down, someone quickly submits an IT ticket, knowing that their effectiveness relies on its availability and functionality.

But it’s different with AI. It needs to be not only operational but also accountable. And to ensure healthy adoption, the culture needs to hold it accountable.

To appreciate the importance of accountability, think about what happens when AI “goes down.” Perhaps that means it isn’t accessible. But it could also mean it is fully accessible yet spitting out deeply flawed results. That’s why a culture of accountability is essential: when AI goes off the rails, someone needs to sound the alarm.

Define good judgment and evaluate whether AI is exercising it

You can weave accountability into the culture by creating a team responsible for determining what good judgment looks like as it relates to AI. Basic AI tools make judgments all day long in the workplace, from determining correct grammar to assessing consumer intent to identifying applicants who would be a good fit. And expecting those judgments to be spot-on every time is dangerous.

Tech experts have come to refer to AI as an “infinite intern,” warning that it needs guidance from experienced mentors before it can grow into a trustworthy workplace contributor. In your workplace, someone needs to commit to making sure your intern is making good decisions — the type of decisions that make sense generally and also in the context of your unique operations.

Empower employees to watch for problems and provide feedback

Unless encouraged otherwise, employees will typically distance themselves from AI and any subpar results it produces. This is the natural response: employees do it not only to protect themselves but also out of fear of the unknown.

To push back against the natural response, companies need to build AI accountability into their culture. A human needs to take ownership of the judgments AI is making if adoption is to be effective. Empower that behavior by encouraging oversight and feedback.

Normalize experimentation and demand transparency

With some tech tools, the hurdles to adoption are on the hardware side. That’s not the case with AI. If companies experience an adoption bottleneck, it’s going to be a culture bottleneck caused by employees who don’t want to engage with it.

To remove cultural bottlenecks, companies need to normalize experimentation. Encourage people to take risks with AI, leveraging it for a wide range of tasks. They should still be willing to evaluate its decisions and hold it accountable when it falls short, but they shouldn’t be afraid of getting punished in some way for putting it through its paces.

By creating space to experiment with AI, companies establish a sense of psychological safety. Give employees guidelines on what is appropriate, and keep those boundaries expansive. Allowing more experiences — especially experiences that don’t result in criticism — makes it easier for employees to trust and adopt AI.

One caveat with AI experimentation is that it should go hand in hand with transparency. For everyone to play a role in oversight, everyone needs to know when AI was involved. Assume your intern’s work is error-free, and you risk the company’s reputation.

Create an environment that fosters trust

With traditional tech tools, adoption is constrained by the technology itself. If the tech isn’t intuitive, reliable, or effective, it won’t fly.

With AI, however, adoption is constrained by culture. Consequently, leaders who want to gain advantages from AI need to create an environment that encourages employees to trust AI and see it as a new team member who, with the right oversight, can multiply capacity.


Jared Navarre is founder and CEO of Keyni Consulting, CEO of Onnix, and chairman of the humanitarian NGOs IN-Fire and Project AK-47. He is a systems strategist and operational architect known for solving complex, high-stakes problems across technology, healthcare, infrastructure, and public-sector operations. He has designed resilient frameworks for humanitarian networks and guided over 250 organizations through moments of rapid change.