Is Your Organization at Risk? Assessing the Need for an AI Policy

Author: Megan Lockhart, Client Communications Coordinator, Rancho Mesa Insurance Services, Inc.

Since the release of ChatGPT in November 2022, it seems every technology platform is incorporating Artificial Intelligence (AI) into its products. At the same time, concerns are quickly rising about the effects AI will have on every industry. As more companies begin to incorporate AI into their products and operations, now is the time to evaluate whether your organization needs an AI policy.

Although AI offers many benefits to businesses, such as enhanced productivity and assistance with brainstorming and creativity, it also poses new risks. Human services organizations like healthcare facilities and nonprofits may be particularly vulnerable to these risks.

“It’s important not to blindly jump into AI technology without a proper plan in place,” Nick Leighton, business owner, best-selling author and motivational speaker, writes in Forbes Magazine. “You could be setting yourself up for costly mistakes and risks. Creating an AI policy doesn’t have to be overly complex. It’s best to start with simple guidelines that you can expand and adapt as your usage of the technology expands.”

One cause for concern when using AI platforms such as ChatGPT is that responses will not always contain accurate information. It is possible the sources used are incomplete, biased or flat-out wrong. To avoid employees distributing false information to clients, AI-generated content should always be proofread by a human being.

"Non-profit and human service organizations depend on the public's trust,” said Sam Brown, Vice President of the Human Services Group with Rancho Mesa Insurance. “Whether a development director uses AI to learn from donor data and personalize their experience or a program director uses AI to improve client outcomes, a thoughtful AI policy will ensure human oversight and minimize risk when implementing new technology."

Organizations also risk plagiarism when utilizing AI. While AI can help inspire ideas and creativity, a company that uses AI-generated ideas or guidance must ensure the content is original, so it does not unknowingly distribute the work or insights of others to clients or the public. It is also vital to understand what responsibility, if any, the AI platform accepts for copyright claims arising from its AI-generated works.

Arguably the most dangerous risk organizations face when utilizing AI as a tool for efficiency is the threat to security. Many AI platforms are designed to retain the information they are given and use it to adapt. For example, if an employee asks ChatGPT to organize an Excel spreadsheet of confidential client information, that data could be absorbed into the system's database for learning and pose a security risk. Organizations must fully understand how their data is used once it is input into an AI platform. Is it stored and used for future learning, or is it deleted immediately? These are questions that must be asked before implementing AI into any organization.