10 Large Language Models in the real world
- Understanding how conversational LLMs like ChatGPT work
- Jailbreaking an LLM to get it to say things its programmers don’t want it to say
- Recognizing errors, misinformation, and biases in LLM output
- Fine-tuning LLMs on your own data
- Finding meaningful search results for your queries (semantic search)
- Speeding up your vector search with approximate nearest neighbor (ANN) algorithms
- Generating fact-based, well-formed text with LLMs
10.1 Large Language Models (LLMs)
10.1.1 Scaling up
10.1.2 Guardrails (filters)
10.1.3 Red teaming
10.1.4 Smarter, smaller LLMs
10.1.5 Generating warm words using the LLM temperature parameter
10.1.6 Creating your own generative LLM
10.1.7 Fine-tuning your generative model
10.1.8 Nonsense (hallucination)
10.2 Giving LLMs an IQ boost with search
10.2.1 Searching for words: full-text search
10.2.2 Searching for meaning: semantic search
10.2.3 Approximate nearest neighbor (ANN) search
10.2.4 Choose your index
10.2.5 Quantizing the math
10.2.6 Pulling it all together with Haystack
10.2.7 Getting real
10.2.8 A haystack of knowledge
10.2.9 Answering questions
10.2.10 Combining semantic search with text generation
10.2.11 Deploying your app in the cloud
10.2.12 Wikipedia for the ambitious reader
10.2.13 Serve your "users" better
10.3 Test yourself