4 Crowdsourced relevance
This chapter covers
- Harnessing your users’ collective insights to improve the relevance of your search platform
- Collecting and working with user behavioral signals
- Using reflected intelligence to create self-tuning models
- Building an end-to-end signals boosting model
In chapter 1, we introduced the dimensions of user intent as content understanding, user understanding, and domain understanding. To create an optimal AI-powered search platform, we need to combine all of these contexts to understand our users’ query intent. The question, though, is how we derive each of these understandings.
We can learn from many sources of information: documents, databases, internal knowledge graphs, user behavior, domain experts, and so on. Some organizations have teams that manually tag documents with topics or categories, and some even outsource these tasks using tools like Amazon Mechanical Turk, which allows them to crowdsource answers from people all around the world. For identifying malicious behavior or errors on websites, companies often allow their users to report problems and even suggest corrections. All of these are examples of crowdsourcing—relying upon input from many people to learn new information.
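To make "user behavioral signals" concrete before we dig in, the sketch below shows one possible shape for a single signal record captured when a user clicks a search result. This is a minimal illustration, not the exact schema used later in the chapter; the function name `click_signal`, the field names, and the example values are assumptions made for this sketch.

```python
from datetime import datetime, timezone

def click_signal(user_id: str, query: str, doc_id: str) -> dict:
    """Build a minimal behavioral signal record for a user clicking a
    search result. Field names are illustrative, not a fixed schema."""
    return {
        "signal_type": "click",   # other types might include "query" or "purchase"
        "user": user_id,          # who performed the action
        "query": query,           # the search that led to the action
        "target": doc_id,         # the document the user acted on
        "signal_time": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical example: a user searching for "ipad" clicks on document "doc42"
signal = click_signal("u123", "ipad", "doc42")
print(signal)
```

Aggregated across many users, records like this become the raw material for the reflected intelligence and signals boosting models covered later in this chapter.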