
By now, you have probably asked an AI LLM for suggestions on how to build, release, support, and enhance your offerings. Isn't it amazing how helpful, supportive, knowledgeable, and confident AI is? But is there a risk that it is confidently wrong? What are the signs of this risk materializing, and how can you protect your organization from it?

Scroll down to watch a video where I share a simple analogy to illustrate how we monitor against this risk in the world of financial investing, and how we might adapt the approach to the world of AI adoption.

Here are some reflection questions that might help you apply the ideas in this video to your context:

  • What is the likelihood of this risk impacting your organization's AI adoption? How do you know?

  • How do you monitor this risk? How do you respond if you start noticing the signs that it is materializing?

  • What might be the impact if this risk becomes a reality? How would you recover from it?

Let me know how this lands and how you are navigating these risks. And send me suggestions on topics for my next blog.
