Who Should Own the AI in Your Org

I'm just getting out of an excellent networking chat and was presented with probably the most fun question to date about artificial intelligence (AI) in orgs: who should own the AI in the org?

It is such a marvelous question because there is such nuance to the answer. Many people see an entire service like Perplexity or ChatGPT and think that's what is meant. Others treat the data that trains models and the systems (agents, workflows, etc.) built on top of them as one and the same, and answer from there. And then there are a few who see the outputs (the decisions made as a result of applying a model or models to a thing) as the basis for answering. Here's a snippet of how we responded to that question.

If an org has the capacity to adopt, build, or deploy a service or product, they should have a constantly updated data model and system model diagram/map of what they are doing. They should have a clear understanding of the inputs and outputs, and in the case of services the off-ramps, for what they offer. It might take a service designer to map this initially, but it needs to be done before and while a product/service evolves. Before applying any other model or framework, one needs to know their own.
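To make that concrete, here's a minimal sketch in Python of what a living system/data-model map might look like in code. The names and fields are illustrative assumptions, not a prescribed schema: each service node declares its inputs, outputs, off-ramps, and an accountable owner, and the map can flag inputs that nothing in the org actually produces.

```python
# A minimal sketch of a living system/data-model map. All names here are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    inputs: list[str]    # data and signals the service consumes
    outputs: list[str]   # decisions, artifacts, or data it produces
    off_ramps: list[str] # ways a user or operator can exit or escalate
    owner: str           # who is accountable for this node

@dataclass
class SystemMap:
    services: list[Service] = field(default_factory=list)

    def unmapped_inputs(self) -> set[str]:
        """Inputs no service in the map produces -- external dependencies."""
        produced = {o for s in self.services for o in s.outputs}
        consumed = {i for s in self.services for i in s.inputs}
        return consumed - produced

# Example: before bolting a model onto "support triage," know its edges.
triage = Service(
    name="support-triage",
    inputs=["ticket text", "customer tier"],
    outputs=["priority label", "routing decision"],
    off_ramps=["escalate to human agent"],
    owner="support-ops",
)
org_map = SystemMap(services=[triage])
print(org_map.unmapped_inputs())  # {'ticket text', 'customer tier'} (order varies)
```

Even a toy map like this forces the question the paragraph above asks: do you actually know your own model before you apply someone else's?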

After this, an org needs to be clear about accountability for the decisions they make and enable their employees to make. Having a solid risk model, or risk-modeling software, is where this lands. If the company's governance doesn't account for the agency of people to make decisions in response to real-time or delayed issues against that system model/data model, then no amount of "add AI to it" will make things better. In fact, it will make things much, much worse.
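One hedged way to picture that governance check, assuming nothing fancier than a decision register: every AI-assisted decision should name an accountable human, preserve a human override path, and tie back to a node on the system/data map. The field names below are assumptions for illustration.

```python
# A sketch of the accountability check described above: before an
# AI-assisted decision ships, governance should answer "who is accountable,
# and can a person intervene?" Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    accountable_person: str | None  # a named human, not a team alias
    human_can_override: bool        # agency to respond in real time
    maps_to_system_node: str | None # ties back to the system/data map

def governance_gaps(decisions: list[Decision]) -> list[str]:
    """Flag decisions that 'add AI to it' would make worse, not better."""
    gaps = []
    for d in decisions:
        if d.accountable_person is None:
            gaps.append(f"{d.description}: no accountable person")
        if not d.human_can_override:
            gaps.append(f"{d.description}: no human override path")
        if d.maps_to_system_node is None:
            gaps.append(f"{d.description}: not on the system/data map")
    return gaps

print(governance_gaps([
    Decision("auto-approve refunds", None, False, "support-triage"),
]))  # two gaps: no accountable person, no human override path
```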

The last part of our response was to have a clear process for learning, assimilating, and deploying LLMs/AI. This can (and should) start at the level of a practice: bringing interested persons together to share experiments and observations. If things need a budget, look at taking that practice and making a time-boxed initiative from it. Time and finance boundaries are key here. Figure out the specific things you want to learn, and be "ok" with much of this being throw-away work. If there's a jump from this initiative, it becomes a feature-mission of the org, also time-bound, but with specific execution requirements and executive accountability. And if that goes well, it becomes part of the character/DNA of the org. Each of these steps allows for some level of learning, adopting, and even "being wrong." But if done in this manner, you'd have clarity at worst and a resilient org at best.
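If it helps to see those stages as a gate-keeping model, here is a small sketch. The stage names come from the paragraph above; the time-box, budget, and executive-owner fields are my assumptions about what each gate might require.

```python
# One way to make the stages concrete: a tiny state model where each
# promotion requires explicit boundaries. Stage names come from the post;
# the gating fields are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    PRACTICE = 1         # interested people sharing experiments
    INITIATIVE = 2       # time-boxed, budgeted, specific learning goals
    FEATURE_MISSION = 3  # time-bound, execution requirements, exec owner
    DNA = 4              # part of the org's character

@dataclass
class AIEffort:
    name: str
    stage: Stage = Stage.PRACTICE
    time_box_weeks: int | None = None
    budget: float | None = None
    executive_owner: str | None = None

    def promote(self) -> None:
        nxt = Stage(self.stage.value + 1)  # raises past DNA: no stage beyond
        # Boundaries are the gate: no promotion without a time box and budget...
        if nxt is Stage.INITIATIVE and (self.time_box_weeks is None or self.budget is None):
            raise ValueError("an initiative needs a time box and a budget")
        # ...and no feature-mission without executive accountability.
        if nxt is Stage.FEATURE_MISSION and self.executive_owner is None:
            raise ValueError("a feature-mission needs an executive owner")
        self.stage = nxt

effort = AIEffort("LLM summarization experiments")
effort.time_box_weeks, effort.budget = 8, 25_000.0
effort.promote()  # PRACTICE -> INITIATIVE; throw-away work is still allowed
```

The point of the gates isn't bureaucracy; it's that each promotion makes the learning, the money, and the accountability explicit before the effort grows.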

So, who owns the AI in your org? Or does AI own your org, because your org is owned by the trends of the age rather than its product/service mission? Might be worth chatting with folks who can help you through such things ;-)