4 Misuse and Adversarial Attacks


This chapter covers

  • Understanding how generative models can be exploited for adversarial attacks
  • Discussing the unwitting participation of chatbots in political debates
  • Exploring the causes of hallucinations and techniques to reduce them
  • Examining the occupational misuse of chatbots in specialized knowledge fields

Since ChatGPT was made available to the public in November 2022, people have shared malicious use cases they have observed or tested themselves and speculated about how else the tool might be misused. "AI Is About to Make Social Media (Much) More Toxic," argued a story in The Atlantic. [1] "People are already trying to get ChatGPT to write malware," reported ZDNET about a month after the tool's release. [2] Because anyone could chat with the model, many of these discoveries came not from AI experts but from members of the general public, who shared their findings on Twitter and Reddit. As the worlds of cybersecurity and disinformation have shown, people are endlessly creative when it comes to using new tools to achieve their ends.

4.1 Cybersecurity and Social Engineering

4.2 Adversarial Narratives

4.3 Political Bias and Electioneering

4.4 Hallucinations

4.5 Occupational Misuse

4.6 Summary