AI Explorer

Find and compare the best artificial intelligence tools for your projects.


© 2026 AI Explorer · All rights reserved.


SPLADE v3

by naver

Open source · 116k downloads · 93 likes

Rating: 2.5 (93 reviews) · Embedding · API & Local
About

SPLADE v3 is an advanced sparse encoding model designed for semantic search and information retrieval. It converts sentences or paragraphs into high-dimensional sparse vectors (30,522 dimensions), enabling efficient indexing and search in large text corpora. Unlike dense approaches, it produces clear and interpretable representations, where each dimension corresponds to a term in the vocabulary, thereby enhancing the explainability of results. This model stands out for its optimized training, combining knowledge distillation and margin-based learning (MarginMSE) to significantly improve its accuracy compared to previous versions. It is particularly well-suited for search systems requiring both performance and transparency, such as search engines or question-answering tools.
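Because each dimension corresponds to a vocabulary term, sparse representations can be read off directly and scored with a plain dot product. A minimal sketch with a hypothetical five-word vocabulary (toy weights, not real SPLADE outputs):

```python
import torch

# Hypothetical toy vocabulary; real SPLADE vectors span 30,522 BERT terms.
vocab = ["aging", "causes", "skin", "sun", "uv"]

query = torch.tensor([1.2, 0.8, 0.0, 0.0, 0.0])  # activates "aging", "causes"
doc   = torch.tensor([0.9, 0.5, 1.1, 0.0, 1.4])  # also activates "skin", "uv"

# Only dimensions that are nonzero in BOTH vectors contribute to the score.
score = torch.dot(query, doc)
print(score.item())  # 1.2*0.9 + 0.8*0.5 ≈ 1.48

# Interpretability: list the vocabulary terms the document activates.
active = [(vocab[i], round(doc[i].item(), 2)) for i in range(len(vocab)) if doc[i] > 0]
print(active)  # [('aging', 0.9), ('causes', 0.5), ('skin', 1.1), ('uv', 1.4)]
```

This is also why sparse models pair naturally with inverted indexes: each nonzero dimension is a posting-list entry for one term.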

Documentation

SPLADE-v3

SPLADE-v3 is the latest series of SPLADE models.

This checkpoint corresponds to a model that starts from SPLADE++SelfDistil (naver/splade-cocondenser-selfdistil), and is trained with a mix of KL-Div and MarginMSE, with 8 negatives per query sampled from SPLADE++SelfDistil. We used the original MS MARCO collection without the titles.

For more details, see our companion paper on arXiv: https://arxiv.org/abs/2403.06789
To use SPLADE, please visit our GitHub repository: https://github.com/naver/splade

Performance

Model | MRR@10 (MS MARCO dev) | avg nDCG@10 (BEIR-13)
naver/splade-v3 | 40.2 | 51.7

Model Details

This is a SPLADE Sparse Encoder model. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

Model Description

  • Model Type: SPLADE Sparse Encoder
  • Base model: naver/splade-cocondenser-selfdistil
  • Maximum Sequence Length: 512 tokens (256 for evaluation reproduction)
  • Output Dimensionality: 30522 dimensions
  • Similarity Function: Dot Product

Full Model Architecture

Text
SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM 
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
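The SpladePooling stage above follows the standard SPLADE-max formulation: apply log(1 + ReLU(·)) to the MLM logits, then max-pool over token positions to get one vocabulary-sized vector per text. A minimal sketch of that pooling step (toy logits, tiny vocabulary):

```python
import torch

def splade_pooling(mlm_logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Max-pool log(1 + ReLU(logits)) over tokens -> one sparse vector per text.

    mlm_logits: (batch, seq_len, vocab_size); attention_mask: (batch, seq_len).
    """
    weights = torch.log1p(torch.relu(mlm_logits))
    weights = weights * attention_mask.unsqueeze(-1)  # zero out padding positions
    return weights.max(dim=1).values  # (batch, vocab_size)

# Toy example: batch of 1, seq_len=3, vocab_size=5 (real vocab is 30,522).
logits = torch.tensor([[[2.0, -1.0, 0.5, 0.0, 3.0],
                        [0.0,  4.0, -2.0, 1.0, 0.0],
                        [1.0,  0.0,  0.0, 0.0, 0.0]]])
mask = torch.tensor([[1, 1, 0]])  # last token is padding
vec = splade_pooling(logits, mask)
print(vec)  # ≈ [[1.0986, 1.6094, 0.4055, 0.6931, 1.3863]]
```

The log(1 + ReLU) saturation keeps term weights non-negative and sparse, which is what makes the output usable with an inverted index.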

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

Bash
pip install -U sentence-transformers

Then you can load this model and run inference.

Python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("naver/splade-v3")
# Run inference
queries = ["what causes aging fast"]
documents = [
    "UV-A light, specifically, is what mainly causes tanning, skin aging, and cataracts, UV-B causes sunburn, skin aging and skin cancer, and UV-C is the strongest, and therefore most effective at killing microorganisms. Again â\x80\x93 single words and multiple bullets.",
    "Answers from Ronald Petersen, M.D. Yes, Alzheimer's disease usually worsens slowly. But its speed of progression varies, depending on a person's genetic makeup, environmental factors, age at diagnosis and other medical conditions. Still, anyone diagnosed with Alzheimer's whose symptoms seem to be progressing quickly â\x80\x94 or who experiences a sudden decline â\x80\x94 should see his or her doctor.",
    "Bell's palsy and Extreme tiredness and Extreme fatigue (2 causes) Bell's palsy and Extreme tiredness and Hepatitis (2 causes) Bell's palsy and Extreme tiredness and Liver pain (2 causes) Bell's palsy and Extreme tiredness and Lymph node swelling in children (2 causes)",
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[11.3768, 10.8296,  4.3457]])


Citation

If you use our checkpoint, please cite our work:

BibTeX
@misc{lassance2024spladev3,
      title={SPLADE-v3: New baselines for SPLADE}, 
      author={Carlos Lassance and Hervé Déjean and Thibault Formal and Stéphane Clinchant},
      year={2024},
      eprint={2403.06789},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
Capabilities & Tags

sentence-transformers, pytorch, bert, splade, sparse-encoder, sparse, feature-extraction, en, text-embeddings-inference, endpoints_compatible
Specifications

  • Category: Embedding
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Rating: 2.5
