Can the increasingly widespread use of ChatGPT, or of other generative artificial intelligence tools such as Bing or Bard, lead to a disciplinary sanction or even dismissal, with or without just cause, by an employer?
The temptation can be great for employees to use a generative AI such as ChatGPT to assist them in their work: for example, to explore the source code of a database to detect errors, to check lines of program code, to prepare internal or external presentations or explanatory documents, or even to summarize a meeting.
However, by doing so, the employee may commit an offense that could lead to a disciplinary sanction or even dismissal for just cause.
The first and foremost problem is that using ChatGPT can severely breach the confidentiality of the employer's data. For the AI to perform its tasks, it must be fed information and given access to documents or data that may be strictly confidential or constitute a trade secret. This information will not only be transferred to the servers of OpenAI, the company that developed ChatGPT, but may also be reused for other purposes, such as training the model.
That's why many companies, like JP Morgan, Barclays, and Samsung, have already prohibited their employees from using ChatGPT on their IT tools or with their data.
Another issue arises when an employee is hired to perform a specific task within a certain time frame. If the employee delegates this task to ChatGPT, they might not be fulfilling their own contractual obligations.
The third issue is that sharing personal data with ChatGPT might violate privacy laws.
These breaches, in my opinion, could lead to at least a disciplinary sanction or even dismissal for just cause, depending on the circumstances.
Of course, a well-informed employer will take care to set out in its internal policies the rules governing the use of generative AI. In particular, it will specify in the employment regulations that a breach of these rules may warrant disciplinary action, or even termination for serious misconduct.