Chapter 8. Content-based image search
This chapter covers
- Searching for images based on their content
- Working with convolutional neural networks
- Using query by example to search for similar images
Traditionally, most users interact with search engines by writing text queries and consuming (reading) text results. For that reason, most of this book focuses on ways neural networks can help users search through text documents. So far, you’ve seen how to
- Use word2vec to generate synonyms from the data ingested into the search engine, which makes it easier for users to find documents they may otherwise miss (a brief sketch of this technique follows this list)
- Expand search queries under the hood via recurrent neural networks (RNNs), giving the search engine the ability to express a query in more ways without asking the user to write all of them
- Rank text search results using word and document embeddings, thus providing more-relevant search results to end users
- Translate text queries with the seq2seq model, so the search engine can work with text written in multiple languages and better serve the users who speak them
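As a quick refresher on the first of these techniques, synonym generation with word2vec amounts to looking up a term's nearest neighbors in the learned vector space. The following is a minimal sketch in Python using the gensim library; the toy corpus, hyperparameters, and library choice are illustrative assumptions, not the exact setup used in the earlier chapters.

```python
from gensim.models import Word2Vec

# Toy corpus standing in for text already ingested into the search engine.
# In practice you would train on the full indexed collection.
corpus = [
    ["cheap", "flights", "to", "rome"],
    ["inexpensive", "flights", "to", "rome"],
    ["cheap", "hotels", "in", "rome"],
    ["inexpensive", "hotels", "in", "paris"],
    ["budget", "flights", "to", "paris"],
]

# Train a small word2vec model (tiny vectors and many epochs only because
# the corpus is tiny; real deployments use far more data).
model = Word2Vec(
    sentences=corpus, vector_size=32, window=3,
    min_count=1, epochs=200, seed=42,
)

# Terms whose vectors lie close to "cheap" become candidate synonyms that
# can be added to the index or appended to the user's query.
print(model.wv.most_similar("cheap", topn=3))
```

On such a small corpus the neighbors are noisy; the point is only to show the shape of the lookup that powers synonym expansion. In this chapter, you'll see that the same idea of "find the nearest vectors" carries over from words to images.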