Employees' use of AI-based software such as ChatGPT or Bing AI, as well as translation tools including DeepL and Google Translate, may lead to a data protection breach involving personal data, as these types of services in principle store and further use the information entered into them.
This means that an employee's use of the free version of software such as DeepL to, for example, translate business documents or email correspondence containing personal data should be treated as a data protection incident which, depending on the assessment, may need to be reported to the Data Protection Authority.
Consequently, the use of AI should also be treated as a business risk that needs to be properly identified and managed. Existing company rules, regulations, and data-handling policies should be updated to cover the use of artificial intelligence and software based on it, in order to raise employee awareness and prevent data security incidents.
If you have any questions, please contact:
ALEKSANDER KARANDYSZOWSKI
Attorney at law
biuro@cmwlegal.pl
+48 91 886 24 01