PyData Global 2023

Building Learning to Rank models for search using Large Language Models
12-08, 19:00–19:30 (UTC), LLM Track

The presentation describes a case study where Large Language Models were used to generate query-document relevance judgements. These judgements were then used to train Learning to Rank models that reranked search results from an untuned engine, resulting in an almost 20% increase in precision.


Search engineers have many tools to address relevance. Older tools are typically unsupervised (statistical, rule-based) and require large investments of manual tuning effort. Newer ones involve training or fine-tuning machine learning models and vector search, which require large investments in labeling documents with their relevance to queries.

Learning to Rank (LTR) models are in the latter category. However, their popularity has traditionally been limited to domains where user data can be harnessed to generate labels that are cheap and plentiful, such as e-commerce sites. In domains where this is not true, labeling often involves human experts and results in labels that are neither cheap nor plentiful. This effectively becomes a roadblock to the adoption of LTR models in these domains, in spite of their general effectiveness.

Generative Large Language Models (LLMs) with parameters in the 70B+ range have been found to perform well at tasks that require mimicking human preferences. Labeling query-document pairs with relevance judgements for training LTR models is one such task. Using LLMs for this task opens up the possibility of obtaining a potentially unlimited supply of relevance judgements, making LTR models a viable approach to improving search relevance in these domains.
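To make the labeling step concrete, here is a minimal sketch of how an LLM could be asked to grade a query-document pair. It assumes the OpenAI Python client; the model name, prompt wording, and 0-3 grading scale are illustrative assumptions, not necessarily the setup used in the work described in the talk.

```python
# Sketch: asking an LLM to grade query-document relevance on a 0-3 scale.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are a search relevance judge.
Given a query and a document, rate how relevant the document is to the query
on a scale of 0 (not relevant) to 3 (perfectly relevant).
Respond with a single integer only.

Query: {query}
Document: {document}
"""

def judge_relevance(query: str, document: str) -> int:
    """Return an LLM-generated relevance grade for a (query, document) pair."""
    response = client.chat.completions.create(
        model="gpt-4",   # stand-in for any sufficiently large instruction-tuned LLM
        temperature=0.0,  # deterministic judgements
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(query=query, document=document),
        }],
    )
    return int(response.choices[0].message.content.strip())

# Example usage:
# grade = judge_relevance("chest pain treatment",
#                         "Management of acute coronary syndrome ...")
```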

In this presentation, we describe work that was done to train and evaluate four LTR-based re-rankers against lexical, vector, and heuristic search baselines. The models were a mix of pointwise, pairwise, and listwise, and required different strategies to generate labels for them. All four models outperformed the lexical baseline, and one of the four also outperformed the vector search baseline. None of the models beat the heuristics baseline, although two came close. However, it is important to note that the heuristics were built up over months of trial and error and required familiarity with the search domain, whereas the LTR models were built in days and required much less familiarity.
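As an illustration of the training step, the sketch below fits a LambdaRank re-ranker (a pairwise/listwise-style objective) on LLM-generated relevance grades using LightGBM. The library choice, features, and placeholder data are assumptions for illustration, not necessarily the models or label strategies evaluated in the talk.

```python
# Sketch: training an LTR re-ranker on LLM-generated relevance grades.
import numpy as np
import lightgbm as lgb

# X: one feature vector per (query, document) candidate, e.g. BM25 score,
#    vector similarity, document recency, etc. (placeholder values here)
# y: LLM-generated relevance grades (0-3) for the same pairs
# group: number of candidate documents per query, in the same row order as X
X = np.random.rand(1000, 5)
y = np.random.randint(0, 4, 1000)
group = [10] * 100  # 100 queries x 10 candidates each

ranker = lgb.LGBMRanker(
    objective="lambdarank",
    n_estimators=200,
    learning_rate=0.05,
)
ranker.fit(X, y, group=group)

# At query time, score the candidates returned by the untuned engine and re-rank:
# scores = ranker.predict(candidate_features)
# reranked = [doc for _, doc in
#             sorted(zip(scores, candidates), key=lambda t: t[0], reverse=True)]
```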


Prior Knowledge Expected

No previous knowledge expected

Sujit Pal is an applied data scientist at Elsevier Health, where he spends his time applying ML and NLP techniques to improve the quality of search results in various clinical applications. His areas of interest include Semantic Search, Natural Language Processing, Machine Learning and Deep Learning.