“Please tell me your problem.”
This phrase was a standard output of Eliza, widely regarded as the first AI program to use natural language processing to generate responses.
It was programmed to rephrase key phrases from the user's input as questions or statements, interacting in the style and manner of a psychotherapist.
This was as early as 1966. Since then, AI has come a long way.
AI bots such as Bard and ChatGPT are now programmed to generate human-like text responses based on the data on which they were trained. This is combined with machine learning, which allows them to learn from that data and make predictions.
In a software context, they can assist with tasks such as writing or debugging code. They can also translate texts, summarize documents, provide automated responses to standard emails and even suggest how best to ask for a raise.
Whilst this technology can streamline the workplace as we know it, there are several risks that need to be mitigated.
Like the users of Eliza, I will “tell you about the problem”: I will look at the personal and professional challenges that AI bots can create and provide recommendations on how to mitigate them.
“Something Phishy”: Phishing Emails
In short: Phishing emails are commonly used to entice an unsuspecting victim to click on a link or download an attachment.
In the past, typos and unnatural language in the body of the email or text were ways to identify such an email.
Whilst ethically focused AI companies such as OpenAI have programmed their AI to reject inappropriate user requests, there is still a possibility that hackers can use it to overcome the language deficiencies of traditional phishing content and generate more convincing phishing messages.
A prime example is a hacker circumventing ChatGPT’s ethics policy simply by rephrasing the request in a “creative” manner. With the ability of AI platforms to impersonate others, flawless text can be created and code can be generated for misuse.
This presents an increased risk of business emails being compromised.
Malware Generation
AI bots can help programmers write code in various programming languages, but they also pose the risk of being used to create malicious software such as viruses, ransomware and Trojans.
It is a scary thought, but an end user with only a rudimentary knowledge of malicious software can use the technology to write functional malware and even develop software designed to evade detection.
Deepfakes
These are videos or images that are altered to depict a real person but are entirely or partially computer-generated.
In a world already filled with conspiracy theories around every corner, users will increasingly need to exercise vigilance and caution when doing research or even light reading.
Personal Data Risks
The collection and use of personal data by OpenAI to train ChatGPT may inadvertently violate personal data protection laws: whilst certain personal data may be publicly available on platforms or websites, this does not necessarily mean that the individuals or companies concerned have consented to its use as training data.
Companies will need to educate their employees on what data qualifies as “personal” and on the importance of “opting out” so that their data does not become part of the AI’s training data.
Inaccurate or Misleading Information
As more and more parties use their functionality, AI bots become increasingly exposed to accuracy and bias risks: a bot can only generate text based on the data on which it was trained. It follows that if those data sets contain errors, inaccuracies or biases, these will be reflected in its responses.
If the datasets cover a topic insufficiently, the bot may also produce incorrect or incomplete answers. This creates the risk of damaging reputations or spreading misleading information. Time will tell what measures will be implemented to address this.
Intellectual Property Infringement
Since AI bots are trained on large quantities of data, that data may include works protected by intellectual property laws. If the training datasets contain copyrighted works, the output may reproduce, or at least resemble, those works. This gives rise to the risk that use of the output could constitute copyright infringement.
Currently, the legal position of AI chatbot output under intellectual property laws is unclear and highly debated: “Is the output protected? If so, who owns it?”
Needless to say, this could create unfortunate legal consequences.
Confidentiality
These platforms also present the risk of disclosing confidential information: an unwitting end user could provide the AI with sensitive information, which would then form part of its database.
Plagiarism
AI platforms are seen as ideal tools for getting quick, convenient answers to queries. They are an especially tempting option for students completing assignments and journalists writing articles.
Aside from the risk of plagiarism that such use compounds, it also discourages personal improvement and development, as a “quick fix” is readily available.
Liability
AI is generally provided on an “as is” basis without any warranties, which means that the provider does not warrant the accuracy of the output and its aggregate liability is capped. The onus and risk are therefore on the user.
Impersonation
AI can generate output based on a real person’s voice and style: this could result in increased fraud and more whaling attacks targeting senior executives.
Spam
AI bots could be used to generate spam text instantly and in far greater volume, making detection more challenging than ever before.
Now that the reader is aware of the possible risks, Part 2 of this blog will look at practical measures that can be implemented to mitigate them.