Managing the Risks Associated with the Use of Generative Artificial Intelligence at the Company Level – the Need for Internal Policy Implementation
Generative Artificial Intelligence (GAI) is a branch of artificial intelligence that aims to create models and algorithms capable of generating new, creative content. By automatically learning complex patterns in existing data and then applying what they have learned, such models can produce texts, images, sounds, or even entire virtual environments. Content originally intended to simulate human-created work can thus extend to source code, automated scripts, or interactive content in games and virtual environments.
However, alongside the numerous advantages of this form of AI, a company may face, at least initially, a number of risks and associated challenges. The areas that raise the greatest concerns from this perspective are intellectual property, cybersecurity, and data confidentiality and security.
Regarding intellectual property, the central difficulty is the automatic creation of content that resembles copyright-protected works to the point of confusion without explicitly infringing them, which can lead to legal disputes. Furthermore, GAI output can be used to create or distribute plagiarized content, or even to support piracy through the illegal copying of copyrighted works. To manage these emerging risks, several measures are worth considering: developing plagiarism-detection technologies, encouraging developers and users to respect intellectual property rights, and continually updating the relevant legislation to keep pace with the new challenges associated with the use of generative artificial intelligence.
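As a purely illustrative sketch of the kind of plagiarism-detection technology mentioned above, the snippet below compares two texts by the Jaccard similarity of their word n-grams ("shingles"), a common first-pass heuristic in plagiarism screening. The sample texts, shingle size, and review threshold are hypothetical and would need tuning in any real system.

```python
def shingles(text: str, n: int = 3) -> set:
    """Split text into lowercase word n-grams (shingles)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of the two texts' shingle sets (0.0 .. 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "the quick brown fox jumps over the lazy dog near the river"
suspect = "the quick brown fox jumps over the sleepy dog near the river"

score = jaccard_similarity(original, suspect)
# Pairs above a (hypothetical) threshold are flagged for human review,
# not automatically judged infringing.
flagged = score > 0.5
```

In practice such a heuristic only narrows the field; flagged pairs still require human (and legal) assessment before any conclusion about infringement is drawn.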
Concerning cybersecurity, generative AI can produce malicious content, such as malware, phishing attacks, or spam messages, that is harder to detect than its manually crafted counterparts. Another significant risk for a company is Distributed Denial of Service (DDoS) attacks, which render online services unavailable and can consequently cause massive financial losses. Generative AI can also be used to create misleading images that deceive facial recognition systems, potentially affecting both the physical and the digital security of the company. Combating these issues involves educating users to recognize such attacks and coordinating efforts across the private and public sectors to develop effective counterstrategies.
Last but not least, the major risk to data confidentiality and security within a company is that, when employees supply information to a GAI system, possibly including personal data, the system's developers may end up holding trade secrets or other confidential or sensitive information, which could then resurface in the results served to other users. One measure to address this risk is encrypting data both in transit and at rest, providing an additional layer of protection against unauthorized access.
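A complementary safeguard, alongside encryption, is to redact obviously sensitive tokens from employee prompts before they leave the company and reach an external GAI service. The minimal sketch below uses regular expressions to mask e-mail addresses and phone-like numbers; the patterns, placeholder labels, and sample prompt are hypothetical, and a production filter would need far broader coverage (names, identifiers, contract terms, and so on).

```python
import re

# Hypothetical patterns; a real deployment would cover many more categories.
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[REDACTED-PHONE]"),
]

def redact(prompt: str) -> str:
    """Mask sensitive-looking substrings before the prompt leaves the company."""
    for pattern, placeholder in REDACTION_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

safe = redact("Reach John at john.doe@corp.example or +40 31 426 0745 today.")
# 'Reach John at [REDACTED-EMAIL] or [REDACTED-PHONE] today.'
```

Such client-side filtering reduces, but does not eliminate, the risk that confidential material is retained by the GAI provider; it works best combined with contractual and encryption safeguards.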
Consequently, developing an internal policy within a company to combat GAI-associated risks is a crucial step in protecting the company against potential threats.
In this regard, a first step is to designate the individuals and departments responsible for implementing and enforcing the policy; these people should have relevant knowledge and expertise in the field.
Furthermore, the company should conduct a comprehensive assessment of the risks related to the use of GAI and of the threat scenarios relevant to the firm. The specific policies adopted can include requirements for data encryption, data-access management, and activity monitoring, as well as communication and notification procedures in the event of a security incident. All of these procedures should be consolidated in an incident response plan, allowing emergencies to be managed efficiently and potential damage to be minimized. Lastly, in addition to periodic simulations for employee training, the internally developed policy should be clear and well communicated to employees, so that it becomes an integral part of the company's culture of security and confidentiality.
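As a toy illustration of how requirements like access management and activity monitoring might be enforced in software, the sketch below combines a role-based check on GAI-tool usage with an audit-log entry for every request. The role names, data classifications, and log format are hypothetical, not part of any particular policy.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("gai-audit")

# Hypothetical mapping: data classifications each role may submit to a GAI tool.
ROLE_PERMISSIONS = {
    "engineer": {"public", "internal"},
    "legal": {"public", "internal", "confidential"},
    "intern": {"public"},
}

def may_use_gai(user: str, role: str, classification: str) -> bool:
    """Check a request against policy and record an audit-trail entry."""
    allowed = classification in ROLE_PERMISSIONS.get(role, set())
    audit_log.info(
        "%s user=%s role=%s data=%s decision=%s",
        datetime.now(timezone.utc).isoformat(), user, role, classification,
        "ALLOW" if allowed else "DENY",
    )
    return allowed

may_use_gai("a.pop", "intern", "confidential")    # denied and logged
may_use_gai("m.ionescu", "legal", "confidential") # allowed and logged
```

The audit trail produced this way feeds directly into the monitoring and incident-notification procedures described above, since denied or anomalous requests can trigger the response plan.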
Contact: (+4) 031 426 0745 – email@example.com
Ana Maria Nistor – Attorney at Law