by naver
SPLADE v3 is an advanced sparse encoding model designed for semantic search and information retrieval. It transforms sentences or paragraphs into high-dimensional sparse vectors (30,522 dimensions), enabling efficient indexing and search over large text corpora. Unlike dense approaches, it produces clear, interpretable representations in which each dimension corresponds to a vocabulary term, making results easier to explain. The model stands out for its optimized training, which combines distillation and margin-learning techniques, significantly improving accuracy over previous versions. It is particularly well suited to retrieval systems that require both performance and transparency, such as search engines and question-answering tools.
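The interpretability claim can be illustrated concretely: each dimension of a SPLADE vector indexes one vocabulary term, so the top-weighted nonzero dimensions can be read directly as the terms the model considers relevant. A minimal sketch with a hypothetical toy vocabulary and hand-written weights (the real model uses BERT's 30,522-entry WordPiece vocabulary):

```python
import numpy as np

# Hypothetical toy vocabulary standing in for BERT's 30522 WordPiece terms
vocab = ["[PAD]", "age", "aging", "cause", "skin", "sun", "uv", "wrinkle"]

# Hypothetical SPLADE vector: index i holds the weight of vocab[i]
vec = np.array([0.0, 0.4, 1.8, 1.1, 0.9, 0.0, 1.5, 0.7])

# Read off the top-weighted terms to interpret the representation
top = np.argsort(vec)[::-1][:3]
print([(vocab[i], float(vec[i])) for i in top])
# [('aging', 1.8), ('uv', 1.5), ('cause', 1.1)]
```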
This checkpoint starts from SPLADE++SelfDistil (naver/splade-cocondenser-selfdistil) and is trained with a mix of KL-Div and MarginMSE losses, with 8 negatives per query sampled from SPLADE++SelfDistil. We used the original MS MARCO collection without the titles.
For more details, see our arXiv companion paper: https://arxiv.org/abs/2403.06789
To use SPLADE, please visit our GitHub repository: https://github.com/naver/splade
| Model | MRR@10 (MS MARCO dev) | avg nDCG@10 (BEIR-13) |
|---|---|---|
| naver/splade-v3 | 40.2 | 51.7 |
This is a SPLADE Sparse Encoder model. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
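The SpladePooling step above can be sketched in isolation: SPLADE max-pools a log-saturated ReLU of the MLM logits over the sequence axis, which yields a non-negative, mostly-sparse vector over the vocabulary. A minimal NumPy illustration, with a random toy logits array standing in for the BertForMaskedLM output (real shapes would be seq_len × 30522):

```python
import numpy as np

def splade_pool(mlm_logits: np.ndarray) -> np.ndarray:
    """Max-pool log(1 + ReLU(logits)) over the sequence axis.

    mlm_logits: (seq_len, vocab_size) MLM logits for one input text.
    Returns a non-negative vector of term weights of size vocab_size.
    """
    activated = np.log1p(np.maximum(mlm_logits, 0.0))  # log-saturated ReLU
    return activated.max(axis=0)

# Toy example: 4 tokens, vocabulary of 8 terms (real model: 30522)
rng = np.random.default_rng(42)
logits = rng.normal(size=(4, 8))
vec = splade_pool(logits)
print(vec.shape)         # (8,)
print(bool((vec >= 0).all()))  # True: entries are zero or positive term weights
```

Terms whose logits are negative at every position come out exactly zero, which is where the sparsity comes from.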
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("naver/splade-v3")
# Run inference
queries = ["what causes aging fast"]
documents = [
"UV-A light, specifically, is what mainly causes tanning, skin aging, and cataracts, UV-B causes sunburn, skin aging and skin cancer, and UV-C is the strongest, and therefore most effective at killing microorganisms. Again â\x80\x93 single words and multiple bullets.",
"Answers from Ronald Petersen, M.D. Yes, Alzheimer's disease usually worsens slowly. But its speed of progression varies, depending on a person's genetic makeup, environmental factors, age at diagnosis and other medical conditions. Still, anyone diagnosed with Alzheimer's whose symptoms seem to be progressing quickly â\x80\x94 or who experiences a sudden decline â\x80\x94 should see his or her doctor.",
"Bell's palsy and Extreme tiredness and Extreme fatigue (2 causes) Bell's palsy and Extreme tiredness and Hepatitis (2 causes) Bell's palsy and Extreme tiredness and Liver pain (2 causes) Bell's palsy and Extreme tiredness and Lymph node swelling in children (2 causes)",
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[11.3768, 10.8296, 4.3457]])
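The scores above come from a dot product between the sparse query and document vectors, so only dimensions (terms) active in both vectors contribute. A minimal sketch with hypothetical 4-dimensional stand-ins for the (1, 30522) query and (3, 30522) document embeddings:

```python
import numpy as np

# Hypothetical toy stand-ins for the sparse query and document embeddings
query = np.array([[0.0, 1.2, 0.0, 0.7]])
docs = np.array([
    [0.0, 0.9, 0.0, 1.1],   # shares terms 1 and 3 with the query
    [0.5, 0.0, 0.0, 0.3],   # shares only term 3
    [0.0, 0.0, 2.0, 0.0],   # shares no terms
])

# Dot-product similarity: zero overlap in active dimensions gives a zero score
scores = query @ docs.T
print(scores)  # [[1.85 0.21 0.  ]]
```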
If you use our checkpoint, please cite our work:
@misc{lassance2024spladev3,
title={SPLADE-v3: New baselines for SPLADE},
author={Carlos Lassance and Hervé Déjean and Thibault Formal and Stéphane Clinchant},
year={2024},
eprint={2403.06789},
archivePrefix={arXiv},
primaryClass={cs.IR},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}