2 Data Privacy and Safety

This chapter covers

  • How large language models (LLMs) are trained: open web data collection, autoregression and bidirectional token prediction, and fine-tuning.
  • The potential emergent abilities, harms, and vulnerabilities that arise from training LLMs.
  • Strategies for steering LLMs toward “desirable” outputs: post-processing detection algorithms, content filtering or conditional pre-training, reinforcement learning from human feedback (RLHF), and constitutional AI or reinforcement learning from AI feedback (RLAIF).
  • Mitigating the privacy risks of user inputs to chatbots.
  • Understanding data protection laws in the U.S. and the European Union (EU).

For decades, the digital economy has run on the currency of data. The business of collecting and trading information about who we are and what we do online is worth trillions of dollars, and as more of our daily activities have moved online, the mill has ever more grist to grind. Large language models are inventions of the internet age, emulating human language by vacuuming up terabytes[10] of text data found online.
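To preview the autoregressive token prediction that section 2.1.2 introduces, here is a minimal sketch in Python. It is an illustration rather than how any real LLM works: it stands in a table of bigram counts over a toy corpus for the neural network, and every name in it (corpus, next_token) is hypothetical. What it shares with a real model is the generation loop: predict the next token from the context, append it, and repeat.

```python
# A toy autoregressive generator: predict each token from the token
# before it. Real LLMs condition on long contexts with a neural network;
# this sketch uses word-bigram counts purely to make the loop concrete.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each token follows each preceding token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Greedily pick the most frequent continuation of `prev`."""
    # Ties are broken by insertion order in CPython.
    return following[prev].most_common(1)[0][0]

# Generate autoregressively: each prediction is fed back in as context.
token = "the"
generated = [token]
for _ in range(5):
    token = next_token(token)
    generated.append(token)

print(" ".join(generated))  # prints: the cat sat on the cat
```

A real model replaces the count table with a transformer trained on the web-scale text described in section 2.1.1, but the predict-and-append loop is the same.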

2.1 How Are LLMs Trained?

2.1.1 Open Web Data Collection

2.1.2 Autoregression and Bidirectional Token Prediction

2.1.3 Fine-Tuning

2.2 Emergent Properties of LLMs

2.2.1 Learning with Few Examples

2.2.2 Emergence as an Illusion

2.3 Considerations in Training Data

2.3.1 Encoding Bias

2.3.2 Sensitive Information

2.4 Strategies for Improving Generations from a Safety Perspective

2.4.1 Post-Processing Detection Algorithms

2.4.2 Content Filtering or Conditional Pre-Training

2.4.3 Reinforcement Learning From Human Feedback

2.4.4 Constitutional AI

2.5 User Privacy and Commercial Risks

2.5.1 Inadvertent Data Leakage

2.5.2 User Best Practices

2.6 Data Policies and Regulations

2.6.1 International Standards and Data Protection Laws

2.6.2 Are Chatbots Compliant with EU-GDPR?

2.6.3 Privacy Regulations in Academia

2.6.4 Corporate Policies