Several months ago, I wrote a blog post on Shadow IT, emphasizing the risks of unapproved software and systems that employees use without the IT department’s knowledge. Shadow IT can create significant security vulnerabilities: you cannot protect information you don’t know exists.
Shadow IT refers to the use of unapproved software and systems within an organization. For example, a Compliance Officer might adopt a cloud solution to manage regulatory challenges, or a Loan Officer might use a customer acceptance tool—all without the IT department’s knowledge. These unauthorized tools can create security gaps and complicate compliance efforts. Typical examples include marketing automation tools, expense management software, project management platforms, and more.
The presence of these “shadow” applications makes it difficult to maintain a comprehensive picture of your institution’s technology landscape. This is why a Detailed System Inventory is crucial. At Finosec, we call ours a System Inventory Map. It helps identify every application in use, determine who has access to each, and assess the risk associated with each system.

Now, let’s extend this concept to the realm of artificial intelligence: Shadow AI. Just as employees may use unauthorized software, they often turn to AI tools such as ChatGPT even when those tools are banned. Blanket bans are frequently ineffective because employees find ways around them; according to one survey, 8% of employees at companies banning ChatGPT admitted to using it anyway! Compounding the risk, third-party vendors may integrate AI into their products without the financial institution ever being aware.
The Risks of Shadow AI
Shadow AI introduces several risks, including:
- Data Security: Unapproved AI tools may not adhere to the same security standards as sanctioned software, leading to potential data breaches or leaks.
- Compliance: Using AI tools without proper oversight can lead to non-compliance with industry regulations and standards.
- Unintended Outputs: AI can produce outputs that might not align with the institution’s guidelines, leading to reputational risks or operational issues.
- Data Privacy: AI tools often require significant data input, which may include sensitive or personal information, risking violations of privacy laws and regulations.
Governance Protections for Financial Institutions
To mitigate the risks associated with Shadow AI, financial institutions should focus on the following key areas:
- Conduct Regular Risk Assessments: Periodically evaluate all AI tools and applications in use, assessing their compliance with security standards and regulatory requirements. This helps identify potential risks and implement the necessary controls.
- Maintain a Detailed AI Inventory: Create and regularly update a comprehensive inventory of all approved AI tools and third-party applications using AI. This inventory should include information on who uses each tool, what data it accesses, and its security and compliance status.
- Perform User Access Reviews: Regularly review user access to ensure that only authorized personnel have access to critical AI tools and data. This helps identify and remove any unnecessary access permissions, minimizing the risk of data breaches.
- Foster a Culture of Transparency: Encourage employees to report any new AI tools they are using. Create a non-punitive environment where the focus is on improving security rather than penalizing individuals; transparency keeps your inventory accurate and mitigates the risks associated with Shadow AI.
- Regular Training and Awareness: Educate employees about the risks of Shadow AI and the importance of adhering to established AI policies. Regular training sessions reinforce this message and keep awareness of the risks, and how to mitigate them, current.
- Focus on Third-Party Risk Management: Track which of your current third-party applications use AI, and keep that list up to date. Include third-party AI usage in third-party risk assessments and annual monitoring checklists, and ensure that any AI integrations by vendors are known, assessed, and managed appropriately.
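To make the inventory and access-review steps above concrete, here is a minimal sketch of what an AI inventory check might look like in code. This is an illustration only, not a real governance tool: the field names, systems, dates, and users are all hypothetical, and an actual inventory would typically live in a GRC platform or database rather than a script.

```python
from datetime import date

# Hypothetical AI system inventory: each entry records the tool, its owner,
# the data it touches, its approval status, who may use it, and when it was
# last reviewed.
ai_inventory = [
    {
        "system": "ChatGPT",
        "owner": "Marketing",
        "data_classes": ["public"],
        "approved": True,
        "authorized_users": {"alice", "bob"},
        "last_review": date(2024, 1, 15),
    },
    {
        "system": "VendorCRM AI Assistant",
        "owner": "Lending",
        "data_classes": ["customer_pii"],
        "approved": False,  # discovered through employee reporting
        "authorized_users": {"carol"},
        "last_review": date(2023, 6, 1),
    },
]

def stale_reviews(inventory, as_of, max_age_days=365):
    """Flag systems whose last risk review is older than max_age_days."""
    return [
        entry["system"]
        for entry in inventory
        if (as_of - entry["last_review"]).days > max_age_days
    ]

def unauthorized_access(entry, actual_users):
    """Compare actual users (e.g., from vendor access logs) to the
    authorized list, returning anyone who shouldn't have access."""
    return actual_users - entry["authorized_users"]
```

Running `stale_reviews(ai_inventory, date(2024, 12, 1))` would flag the vendor assistant as overdue for review, and `unauthorized_access` surfaces access that should be removed — the same questions a periodic risk assessment and user access review are meant to answer.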
By implementing these strategies, financial institutions can significantly reduce the risks associated with Shadow AI and ensure a secure, well-governed technology environment. Just as with Shadow IT, understanding and managing the use of AI tools is crucial in protecting your institution’s information assets, ensuring both security and compliance. Remember, you can’t protect your data if you don’t know where it is.