
SapBERT from PubMedBERT fulltext mean token

by cambridgeltl

Open source · 395k downloads · 2 likes

Rating: 0.6 (2 reviews) · Embedding · API & Local
About

SapBERT from PubMedBERT fulltext mean token is a language model specialized in extracting and normalizing biomedical entities from scientific text. It automatically identifies medical and biological terms and links them to standardized concepts, such as those in the UMLS and MeSH ontologies, which makes the semantic analysis of clinical and research documents easier. Its main capabilities include named-entity recognition, disambiguation, and mapping to medical knowledge bases, even for rare or highly technical terms. The model is particularly useful for structuring biomedical data, automating corpus annotation, and improving information retrieval over scientific publications. What sets it apart is its training on full-text articles from PubMed, which lets it capture the context and nuances of medical terminology better than approaches based only on titles or abstracts.
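Concretely, the normalization step described above reduces to a nearest-neighbor search: embed both the raw mentions and a dictionary of standardized concept names, then link each mention to its closest concept. The following is a minimal sketch of that lookup; the `link_mentions` helper is hypothetical, the concept IDs are placeholders rather than real UMLS CUIs, and the random vectors stand in for embeddings that would really come from the extraction script further below.

Python
import numpy as np

def link_mentions(mention_embs, concept_embs, concept_ids):
    """Link each mention embedding to its nearest concept by cosine similarity."""
    # Normalize both sides so the dot product equals cosine similarity.
    m = mention_embs / np.linalg.norm(mention_embs, axis=1, keepdims=True)
    c = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    sims = m @ c.T                        # (n_mentions, n_concepts)
    best = sims.argmax(axis=1)            # index of the closest concept per mention
    return [(concept_ids[j], float(sims[i, j])) for i, j in enumerate(best)]

# Stand-in data; real embeddings come from the extraction script below,
# and these concept IDs are placeholders, not actual UMLS CUIs.
rng = np.random.default_rng(0)
concept_ids = ["C0000001", "C0000002", "C0000003"]
concept_embs = rng.normal(size=(3, 768))  # 768 = PubMedBERT-base hidden size
mention_embs = rng.normal(size=(2, 768))
print(link_mentions(mention_embs, concept_embs, concept_ids))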

Documentation

language: en

tags:

  • biomedical
  • lexical-semantics

datasets:

  • UMLS

[news] A cross-lingual extension of SapBERT will appear in the main conference of ACL 2021!

[news] SapBERT will appear in the conference proceedings of NAACL 2021!

SapBERT-PubMedBERT

SapBERT by Liu et al. (2020), trained with UMLS 2020AA (English only), using microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext as the base model. Please use the mean-pooling of the output as the representation.
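The extraction script in the next section implements this pooling as a plain .mean(1) over all positions, padding tokens included. A masked variant that averages only over real tokens is a common alternative; the sketch below shows that variant as an assumption, not the card's recommendation, and the name masked_mean_pool is hypothetical.

Python
import torch

def masked_mean_pool(last_hidden: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Mean-pool hidden states over real tokens only, ignoring padding."""
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden)  # (batch, seq, 1)
    return (last_hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)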

Extracting embeddings from SapBERT

The following script converts a list of strings (entity names) into embeddings.

Python
import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token").cuda()

# Replace with your own list of entity names.
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]

bs = 128  # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
    # Tokenize one batch, padding/truncating every name to 25 tokens.
    toks = tokenizer.batch_encode_plus(all_names[i:i+bs],
                                       padding="max_length",
                                       max_length=25,
                                       truncation=True,
                                       return_tensors="pt")
    toks_cuda = {k: v.cuda() for k, v in toks.items()}
    # Mean-pool the last hidden state over the token axis: one vector per name.
    cls_rep = model(**toks_cuda)[0].mean(1)
    all_embs.append(cls_rep.cpu().detach().numpy())

all_embs = np.concatenate(all_embs, axis=0)
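As a quick sanity check (a continuation of the script above, not part of the original card), the resulting vectors can be compared directly; if the self-alignment objective worked, synonymous names such as "covid-19" and "Coronavirus infection" should be each other's nearest neighbors.

Python
# Continuing from the script above: pairwise cosine similarities between names.
normed = all_embs / np.linalg.norm(all_embs, axis=1, keepdims=True)
sims = normed @ normed.T
for i, name in enumerate(all_names):
    j = int(np.argsort(sims[i])[-2])  # most similar other name; index -1 is the name itself
    print(f"{name!r} -> {all_names[j]!r} (cosine {sims[i, j]:.3f})")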

For more details about training and evaluation, see the SapBERT GitHub repo.

Citation

Bibtex
@inproceedings{liu-etal-2021-self,
    title = "Self-Alignment Pretraining for Biomedical Entity Representations",
    author = "Liu, Fangyu  and
      Shareghi, Ehsan  and
      Meng, Zaiqiao  and
      Basaldella, Marco  and
      Collier, Nigel",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.naacl-main.334",
    pages = "4228--4238",
    abstract = "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERTand and PubMedBERT, our pretraining scheme proves to be both effective and robust.",
}
Specifications

  • Category: Embedding
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Rating: 0.6