6 Reasoning with word embeddings
This chapter covers
- Understanding word embeddings, or word vectors
- Representing meaning with a vector
- Customizing word embeddings to create domain-specific nessvectors
- Reasoning with word embeddings
- Visualizing the meaning of words
Word embeddings are perhaps the most approachable and generally useful tools in your NLP toolbox. They can give your NLP pipeline a general understanding of words. In this chapter, you will learn how to use word embeddings in real-world applications. And, just as importantly, you'll learn where not to use them. Hopefully, these examples will help you dream up new and interesting applications in business as well as in your personal life.
You can think of word vectors a bit like lists of attributes for role-playing game characters or Dota 2 heroes. Now imagine there were no text labels on those character sheets or profiles. You would want to keep all the numbers signifying the character attributes in a consistent order so that you know what each number means. That's how word vectors work. The numbers aren't labeled with their meaning; they are just placed in a consistent slot, or location, in the vector. That way, when you add, subtract, or multiply two word vectors, the strength attribute in one vector lines up with the strength attribute in the other; likewise for agility, intelligence, alignment, or philosophy attributes in Dungeons and Dragons.
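To make the analogy concrete, here is a minimal sketch in Python using NumPy. The attribute names and the numbers on each "character sheet" are invented for illustration; real word vectors have hundreds of dimensions, and none of them carry a label.

```python
import numpy as np

# Hypothetical "character sheet" attributes, kept in a fixed, agreed-upon order:
# [strength, agility, intelligence, alignment]
attribute_names = ['strength', 'agility', 'intelligence', 'alignment']

wizard = np.array([3.0, 5.0, 9.0, 7.0])    # invented numbers for illustration
warrior = np.array([9.0, 6.0, 3.0, 4.0])

# Because every vector keeps the same slot order, elementwise arithmetic
# lines up strength with strength, agility with agility, and so on.
hybrid = (wizard + warrior) / 2
for name, value in zip(attribute_names, hybrid):
    print(f'{name}: {value}')
```

Averaging the two vectors only makes sense because both keep their attributes in the same slots; that consistent ordering is exactly the guarantee word embeddings provide, even though their dimensions have no human-readable names.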