This chapter covers
- Understanding how generative models can be exploited for adversarial attacks
- Discussing the unwitting participation of chatbots in political debates
- Exploring the causes of LLM hallucinations and techniques to reduce them
- Examining the occupational misuse of chatbots in specialized knowledge fields
Since ChatGPT was made available to the public in November 2022, people have shared malicious use cases they've observed or tested themselves and speculated about how else the tool might be misused in the future. "AI Is About to Make Social Media (Much) More Toxic," argued a story in The Atlantic [1]. "People are already trying to get ChatGPT to write malware," reported ZDNET about a month after the tool's release [2]. Because anyone could chat with the model, many of these discoveries came not from AI experts but from members of the general public, who shared their findings on Twitter and Reddit. As we've seen in the worlds of cybersecurity and disinformation, people are endlessly creative when it comes to using new tools to achieve their ends.