Add XTR retrieval and training support #207
Open
robro612 wants to merge 16 commits into lightonai:main from
Conversation
…(non-PLAID). This supports multiple imputation functions: zero, mean, percentile, power_law, or min, which is the default and strongly recommended.
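For intuition, a minimal sketch of the min-imputation idea (illustrative, not the PR's actual `_compute_imputation_scores`): token scores that the ANN search did not return are marked `-inf` and replaced with the lowest retrieved score before summing.

```python
import torch

def min_impute_and_score(token_scores: torch.Tensor) -> torch.Tensor:
    """token_scores: (num_query_tokens,) ANN scores of one document;
    entries are -inf where no token of this document was retrieved for
    that query token. Missing entries are imputed with the minimum
    observed score, then summed into a single document score."""
    missing = torch.isinf(token_scores)
    if missing.all():
        return torch.tensor(0.0)  # illustrative fallback: nothing retrieved
    fill = token_scores[~missing].min()
    return torch.where(missing, fill, token_scores).sum()

# e.g. tensor([0.9, -inf, 0.4]) -> 0.4 imputed -> score 1.7
```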
- Fix broken doctest in xtr.py (missing add_documents call)
- Fix progress bar total using ceil division
- Use torch.isinf for robust missing-score detection in score_xtr
- Rename misleading variable neg_b -> slope in power-law imputation
- Export Base from indexes.__init__ and use the public API in colbert.py
- Add unit tests for _compute_imputation_scores (rectangular, ragged, edge cases)
- Add beir_dataset_xtr.py eval script supporting both ColBERT and XTR retrieval
- Fix test_xtr_retriever.py: name -> index_name kwarg for ScaNN
…efix* Some late-interaction models (e.g. XTR) don't use Q/D prefix tokens. This changes model initialization so it no longer defaults to adding [Q] and [D] prefix tokens. This is an opinionated and *breaking* change for use cases that assume the prefixes are added automatically (e.g. during training initialization). An alternative would be to repurpose the currently inert `add_special_tokens` argument, which today does not affect the logic at all.
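A hedged usage sketch of the new behavior; `query_prefix`/`document_prefix` follow PyLate's existing ColBERT constructor arguments, and the model id is illustrative:

```python
from pylate import models

# Load a prefix-free late-interaction model without [Q]/[D] markers being
# silently prepended at initialization (illustrative model id).
model = models.ColBERT(
    model_name_or_path="google/xtr-base-en",
    query_prefix=None,     # no [Q] token added to queries
    document_prefix=None,  # no [D] token added to documents
)
```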
Add requires_full_batch handling in Contrastive for XTR scoring, which needs all documents in the batch to be passed at once to perform the "in-batch retrieval" correctly. Contrastive handles this automatically by checking the score function for an attribute set in scores.py, as sketched below.
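A simplified sketch of the dispatch, assuming the flag is exposed as a plain attribute on the score function (the PR sets it via a decorator in scores.py, per a later commit); names and shapes are illustrative:

```python
import torch

def requires_full_batch(score_fn):
    """Mark a score function as needing the whole document batch at once."""
    score_fn.requires_full_batch = True
    return score_fn

def compute_scores(score_fn, queries, doc_groups):
    """Loss-side dispatch: full-batch for XTR-style scores, per-group
    (ColBERT-style) otherwise."""
    if getattr(score_fn, "requires_full_batch", False):
        # Global in-batch top-k needs every document in a single call.
        return score_fn(queries, torch.cat(list(doc_groups), dim=0))
    return torch.cat([score_fn(queries, group) for group in doc_groups], dim=1)
```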
- CachedContrastive now checks score_metric.requires_full_batch and passes all documents at once (stacked as (batch, N, Dt, H)) instead of chunking over document groups. Query mini-batching still controls memory. Labels are adjusted to i*N for the interleaved document layout (see the sketch after this list).
- Fix xtr_scores to derive the mask expansion from the query batch size (Qb) rather than the document batch size (Dq), which caused an IndexError when Qb != Dq (i.e. under CachedContrastive mini-batching).
- Add regression tests for mismatched query/doc batch sizes.
- Add a sweep/validation test script and SLURM array job for benchmarking memory and correctness across scoring/loss/batch-size configurations.
- Add a design doc noting streaming top-k as a future optimization.
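The label adjustment in a nutshell (a toy sketch, assuming the positive document comes first in each group of N):

```python
import torch

batch_size, N = 4, 3  # N documents per query, positive at position 0
# With documents stacked as (batch, N, Dt, H) and flattened to batch*N
# columns, document j of query i lands at column i*N + j, so the
# positive for query i sits at column i*N.
labels = torch.arange(batch_size) * N  # tensor([0, 3, 6, 9])
```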
…ce citation
- Added a temperature parameter to Distillation, found to be necessary for XTR training.
- xtr_kd_scores is a thin wrapper around the existing xtr_scores function that returns only the in-example scores, fitting the Distillation interface (sketched below). The token scores are already required, so there is no memory overhead and minimal computation to be saved by rewriting it.
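A hedged sketch of the wrapper idea; the real xtr_kd_scores lives in scores.py and its exact signature may differ:

```python
import torch
from pylate.scores import xtr_scores  # added by this PR; signature assumed

def xtr_kd_scores_sketch(queries, documents, k: int = 32) -> torch.Tensor:
    """Reuse the full in-batch XTR score matrix and keep only each query's
    own documents, matching the (num_queries, docs_per_query) shape that
    Distillation expects."""
    scores = xtr_scores(queries, documents, k=k)       # (Q, Q * docs_per_query)
    n_docs = scores.shape[1] // scores.shape[0]
    rows = torch.arange(scores.shape[0]).unsqueeze(1)  # (Q, 1)
    cols = rows * n_docs + torch.arange(n_docs)        # (Q, n_docs)
    return scores[rows, cols]                          # (Q, n_docs)
```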
- Remove print statements
- Fix contrastive labels for multi-GPU training
- Add a requires_full_batch decorator for xtr_scores
- Restore hpool logic in colbert.py (dropped by an overeager cherry-pick)
- Consolidate XTR test files
- Remove non-min imputation options
…no looping to exhaust examples).
…for setting/using default k_train values easily.
Summary
This PR adds XTR (ConteXtualized Token Retriever) support to PyLate, covering both retrieval and training.
1. XTR Retrieval
XTR retrieval performs per-token approximate nearest neighbor search followed by document-level scoring with min-imputation, as an alternative to ColBERT's full late-interaction reranking.
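A hedged end-to-end usage sketch, mirroring PyLate's existing ColBERT retriever API; the XTR retriever class is introduced by this PR, so argument names may differ slightly, and the model id is illustrative:

```python
from pylate import indexes, models, retrieve

model = models.ColBERT(model_name_or_path="lightonai/GTE-ModernColBERT-v1")
index = indexes.Voyager(index_name="xtr-demo", override=True)

# Index document token embeddings as usual.
documents_embeddings = model.encode(
    ["The first document.", "The second document."], is_query=False
)
index.add_documents(document_ids=["d1", "d2"], documents_embeddings=documents_embeddings)

# XTR retrieval: per-token ANN search + min-imputation scoring, no reranking.
retriever = retrieve.XTR(index=index)
queries_embeddings = model.encode(["a test query"], is_query=True)
results = retriever.retrieve(queries_embeddings=queries_embeddings, k=10)
```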
- pylate/retrieve/xtr.py — new XTR retriever class for XTR-style retrieval on Voyager/ScaNN indexes
- pylate/rank/rank.py — adds score_xtr() for document-level XTR scoring with min-imputation
- pylate/rank/__init__.py — exports score_xtr
- pylate/retrieve/__init__.py — exports the XTR retriever
- pylate/retrieve/colbert.py — generalizes the index type hint from Voyager | PLAID to Base
- tests/test_xtr_retriever.py, tests/test_xtr_scoring.py

2. XTR Training
XTR training uses a token-level top-k scoring function with Z-normalization instead of ColBERT's MaxSim (sketched after the file list below). This requires seeing all documents at once (for the global top-k), so both loss classes are updated to support a requires_full_batch score function.

- pylate/scores/scores.py — new xtr_scores and xtr_kd_scores functions, plus XTRScores/XTRKDScores callable classes for convenient default-k configuration. These classes set requires_full_batch = True as a class attribute, which the loss functions check to decide whether to pass the full document batch at once or to chunk by group.
- pylate/scores/__init__.py — exports the new scoring functions/classes
- pylate/losses/contrastive.py — detects requires_full_batch score metrics and passes all documents at once instead of chunking by group
- pylate/losses/cached_contrastive.py — same requires_full_batch support for cached contrastive training
- pylate/losses/distillation.py — adds a temperature parameter for XTR KD training
- tests/test_xtr_scores.py
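For intuition, a hedged sketch of token-level top-k scoring with Z-normalization in the spirit of the XTR paper (Lee et al., 2023); the PR's xtr_scores may differ in details such as masking and imputation:

```python
import torch

def xtr_train_scores(q: torch.Tensor, d: torch.Tensor, k: int = 32) -> torch.Tensor:
    """q: (Q, Qt, H) query token embeddings; d: (D, Dt, H) document tokens.
    Each query token keeps only similarities in its global in-batch top-k
    (others zeroed); a document's score is the sum of surviving per-token
    maxima, normalized by Z = #query tokens that retrieved any of its tokens."""
    sim = torch.einsum("qih,djh->qidj", q, d)            # (Q, Qt, D, Dt)
    flat = sim.reshape(sim.shape[0], sim.shape[1], -1)   # (Q, Qt, D*Dt)
    k = min(k, flat.shape[-1])                           # guard small batches
    threshold = flat.topk(k, dim=-1).values[..., -1:]    # k-th largest per token
    kept = torch.where(flat >= threshold, flat, torch.zeros_like(flat))
    kept = kept.reshape_as(sim)
    per_token = kept.max(dim=-1).values                  # (Q, Qt, D) pruned MaxSim
    z = (per_token > 0).sum(dim=1).clamp(min=1)          # (Q, D)
    return per_token.sum(dim=1) / z                      # (Q, D)
```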