Recent studies, including one by Forrester Research, have highlighted the growing concern among experts about the use of unregulated artificial intelligence (AI) tools in workplaces, a trend known as ‘Bring Your Own AI’ (BYOAI). This term refers to employees using external AI services for company-related tasks without official approval.
The report underscores the risks associated with this trend, especially in the context of rapidly evolving generative AI technologies such as ChatGPT, DALL-E 2, and Midjourney. These technologies are increasingly embedded in AI-infused software and exposed through cloud-based APIs, expanding the potential attack surface. Andrew Hewitt, a principal analyst at Forrester, points out that the main challenge lies in employees using ‘shadow IT’ — applications not sanctioned by their business — which could harbor vulnerabilities.
The risks identified include data loss and potential copyright violations. Hewitt notes that the risks of BYOAI could exceed those associated with ‘bring your own device’ (BYOD) policies, since external AI services are harder for organizations to control. Sam Shaddox of SeekOut advises cautious use of generative AI, recommending legal consultation when in doubt, and suggests employees ask themselves how sensitive the information is before entering it into an AI tool.
The report also examines the broader implications of generative AI in the corporate world, raising questions about its potential to transform work and reshape employee roles. At the same time, there is growing concern that the technology could become a pathway for phishing, malware, and data breaches. Following the launch of ChatGPT and subsequent reports of user account breaches, many organizations have recognized that these AI platforms can compromise data privacy and security. These risks have prompted government action as well: an executive order issued by President Biden aims to protect networks from AI-driven cyberattacks.
A report from the International Monetary Fund likewise acknowledges the benefits of generative AI for financial institutions but warns of privacy concerns, unique cyber threats, and systemic risks. In response, companies including Apple, Samsung, JPMorgan Chase, Bank of America, Wells Fargo, and Citigroup have restricted or banned employee use of generative AI platforms at work to curb the leakage of sensitive information and guard against hacking.
On the HR front, Jason Walker of Thrive HR Consulting notes that HR departments lag in adopting and understanding generative AI, including its security implications. He warns that HR’s reluctance may leave IT departments to drive the agenda on AI tool selection, potentially producing choices poorly suited to HR functions. Hewitt predicts that generative AI will push HR professionals to take a larger role in communicating company values and implementing policies that protect sensitive information and counteract bias in AI-generated content. The rise of BYOAI presents a complex challenge, intertwining technological innovation with cybersecurity, legal compliance, and corporate governance.