Chapter 5. Ranking search results with word embeddings
This chapter covers
- Statistical and probabilistic retrieval models
- Working with the ranking algorithm in Lucene
- Neural information retrieval models
- Using averaged word embeddings to rank search results
Since chapter 2, we’ve been building neural network–based components that can improve a search engine. These components aim to help the search engine better capture user intent by expanding queries with synonyms, generating alternative representations of a query, and offering smarter suggestions as the user types. As these approaches show, a query can be expanded, adapted, and transformed before it’s matched against the terms stored in the inverted indexes. Then, as mentioned in chapter 1, the query terms are used to find matching documents.
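To make that recap concrete, here is a minimal, hypothetical sketch in plain Java (not Lucene's actual implementation) of the matching step: each query term is looked up in an inverted index that maps terms to the documents containing them, and the union of those posting lists yields the matching documents. The class and method names (ToyInvertedIndex, add, match) are illustrative only.

```java
import java.util.*;

// Toy illustration of term-to-document matching via an inverted index.
// This is a sketch for intuition only, not how Lucene is implemented.
public class ToyInvertedIndex {

    // term -> set of IDs of the documents containing that term
    private final Map<String, Set<Integer>> postings = new HashMap<>();

    public void add(int docId, String text) {
        for (String term : text.toLowerCase().split("\\s+")) {
            postings.computeIfAbsent(term, t -> new HashSet<>()).add(docId);
        }
    }

    // Return the IDs of documents containing at least one query term
    public Set<Integer> match(String query) {
        Set<Integer> results = new TreeSet<>();
        for (String term : query.toLowerCase().split("\\s+")) {
            results.addAll(postings.getOrDefault(term, Collections.emptySet()));
        }
        return results;
    }

    public static void main(String[] args) {
        ToyInvertedIndex index = new ToyInvertedIndex();
        index.add(0, "latest research in artificial intelligence");
        index.add(1, "books about deep learning for search");
        index.add(2, "cooking recipes for beginners");
        System.out.println(index.match("deep learning research")); // prints [0, 1]
    }
}
```

Matching alone only tells us which documents contain the query terms; the subject of this chapter is the next step, ranking those matches so the most relevant ones come first.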