
JobBERT v3

by TechWolf

Open source · 49k downloads · 19 likes

About

JobBERT v3 is a language model specialized in understanding and matching professional job titles and descriptions. Trained on millions of multilingual pairs (English, Spanish, German, and Chinese), it converts job postings into numerical vectors so their semantic similarity can be measured. Well suited to recruitment tasks such as job matching, job-similarity search, and skills analysis, it excels at detecting relevant matches even between different phrasings. What sets it apart is its ability to work cross-lingually while capturing job-specific nuances, making it a robust solution for international HR platforms.

Documentation

SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2

This is a sentence-transformers model specifically trained for job title matching and similarity. It is fine-tuned from sentence-transformers/paraphrase-multilingual-mpnet-base-v2 on a large dataset of job titles and their associated skills/requirements across multiple languages. The model maps English, Spanish, German, and Chinese job titles and descriptions to a 1024-dimensional dense vector space and can be used for semantic job title matching, job-similarity search, and related HR/recruitment tasks.

The model was presented in the paper Multilingual JobBERT for Cross-Lingual Job Title Matching.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
  • Maximum Sequence Length: 64 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: 4 × 5.2M high-quality job title–skills pairs in English, Spanish, German, and Chinese
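The listed similarity function is plain cosine similarity between output vectors. As a minimal reference implementation, here is the formula with toy 4-d vectors standing in for the model's 1024-d embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: dot product of the two L2-normalized vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-d stand-ins for the model's 1024-d embeddings
u = np.array([1.0, 0.0, 1.0, 0.0])
v = np.array([1.0, 0.0, 0.0, 1.0])
print(cosine_similarity(u, v))  # 0.5 — the vectors share one of their two active axes
print(cosine_similarity(u, u))  # 1.0 — identical direction
```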

Model Sources

  • Documentation: Sentence Transformers Documentation
  • Repository: Sentence Transformers on GitHub
  • Hugging Face: Sentence Transformers on Hugging Face

Full Model Architecture

Python
SentenceTransformer(
  (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Asym(
    (anchor-0): Dense({'in_features': 768, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (positive-0): Dense({'in_features': 768, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
  )
)
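The Asym module at the end means anchors (query titles) and positives (candidate skill profiles) pass through separate Dense(768 → 1024, Tanh) projections on top of the shared pooled XLM-R output. A rough sketch of that key-based routing, using random stand-in weights (the real ones come from the checkpoint):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two trained Dense(768 -> 1024, Tanh) heads
heads = {
    "anchor":   (0.02 * rng.standard_normal((768, 1024)), np.zeros(1024)),
    "positive": (0.02 * rng.standard_normal((768, 1024)), np.zeros(1024)),
}

def asym_project(pooled: np.ndarray, text_key: str) -> np.ndarray:
    """Route a 768-d mean-pooled embedding through the head chosen by text_key."""
    W, b = heads[text_key]
    return np.tanh(pooled @ W + b)

pooled = rng.standard_normal(768)  # stand-in for the pooled transformer output
print(asym_project(pooled, "anchor").shape)  # (1024,)
```

This is why the usage code below sets `features["text_keys"] = ["anchor"]`: at inference time every text is embedded through the anchor head.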

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

Bash
pip install -U sentence-transformers

Then you can load and use the model with the following code:

Python
import torch
import numpy as np
from tqdm.auto import tqdm
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import batch_to_device, cos_sim

# Load the model
model = SentenceTransformer("TechWolf/JobBERT-v3")

def encode_batch(jobbert_model, texts):
    features = jobbert_model.tokenize(texts)
    features = batch_to_device(features, jobbert_model.device)
    features["text_keys"] = ["anchor"]
    with torch.no_grad():
        out_features = jobbert_model.forward(features)
    return out_features["sentence_embedding"].cpu().numpy()

def encode(jobbert_model, texts, batch_size: int = 8):
    # Sort texts by length and keep track of original indices
    sorted_indices = np.argsort([len(text) for text in texts])
    sorted_texts = [texts[i] for i in sorted_indices]
    
    embeddings = []
    
    # Encode in batches
    for i in tqdm(range(0, len(sorted_texts), batch_size)):
        batch = sorted_texts[i:i+batch_size]
        embeddings.append(encode_batch(jobbert_model, batch))
    
    # Concatenate embeddings and reorder to original indices
    sorted_embeddings = np.concatenate(embeddings)
    original_order = np.argsort(sorted_indices)
    return sorted_embeddings[original_order]

# Example usage
job_titles = [
    'Software Engineer',
    '高级软件开发人员',  # senior software developer
    'Produktmanager',  # product manager
    'Científica de datos'  # data scientist
]

# Get embeddings
embeddings = encode(model, job_titles)

# Calculate cosine similarity matrix
similarities = cos_sim(embeddings, embeddings)
print(similarities)

The output will be a similarity matrix where each value represents the cosine similarity between two job titles:

Text
tensor([[1.0000, 0.8087, 0.4673, 0.5669],
        [0.8087, 1.0000, 0.4428, 0.4968],
        [0.4673, 0.4428, 1.0000, 0.4292],
        [0.5669, 0.4968, 0.4292, 1.0000]])
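To turn such a matrix into actual matches, mask the diagonal (every title is maximally similar to itself) and take the row-wise argmax. A small sketch reusing the titles and the printed values above:

```python
import numpy as np

job_titles = [
    'Software Engineer',
    '高级软件开发人员',      # senior software developer
    'Produktmanager',        # product manager
    'Científica de datos',   # data scientist
]

# Cosine-similarity matrix as printed above
similarities = np.array([
    [1.0000, 0.8087, 0.4673, 0.5669],
    [0.8087, 1.0000, 0.4428, 0.4968],
    [0.4673, 0.4428, 1.0000, 0.4292],
    [0.5669, 0.4968, 0.4292, 1.0000],
])

def closest_match(i: int) -> str:
    """Most similar title to job_titles[i], excluding the title itself."""
    row = similarities[i].copy()
    row[i] = -np.inf  # mask the self-similarity on the diagonal
    return job_titles[int(np.argmax(row))]

print(closest_match(0))  # '高级软件开发人员' — the cross-lingual match at 0.8087
```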

Training Details

Training Dataset

  • Size: 21,123,868 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • type: string (anchor), string (positive)
    • anchor: min 4 tokens, mean 10.56 tokens, max 38 tokens
    • positive: min 19 tokens, mean 61.08 tokens, max 64 tokens
  • Samples:
    • anchor: 通信与培训专员 (communications and training specialist)
      positive: deliver online training, liaise with educational support staff, interact with an audience, construct individual learning plans, lead a team, develop corporate training programmes, learning technologies, communication, identify with the company's goals, address an audience, learning management systems, use presentation software, motivate others, provide learning support, engage with stakeholders, identify skills gaps, meet expectations of target audience, develop training programmes
    • anchor: Associate Infrastructure Engineer
      positive: create solutions to problems, design user interface, cloud technologies, use databases, automate cloud tasks, keep up-to-date to computer trends, work in teams, use object-oriented programming, keep updated on innovations in various business fields, design principles, Angular, adapt to changing situations, JavaScript, Agile development, manage stable, Swift (computer programming), keep up-to-date to design industry trends, monitor technology trends, web programming, provide mentorship, advise on efficiency improvements, adapt to change, JavaScript Framework, database management systems, stimulate creative processes
    • anchor: 客户顾问/出纳 (customer advisor/cashier)
      positive: customer service, handle financial transactions, adapt to changing situations, have computer literacy, manage cash desk, attend to detail, provide customer guidance on product selection, perform multiple tasks at the same time, carry out financial transactions, provide membership service, manage accounts, adapt to change, identify customer's needs, solve problems
  • Loss: CachedMultipleNegativesRankingLoss with these parameters:
    JSON
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "mini_batch_size": 512
    }
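CachedMultipleNegativesRankingLoss is the in-batch-negatives contrastive objective: for each anchor, its own positive should out-rank every other positive in the batch under softmax cross-entropy over scaled similarities; the "cached" variant (hence mini_batch_size: 512) only changes how the computation is chunked to fit in memory, not the loss value. A minimal NumPy sketch of the underlying objective:

```python
import numpy as np

def mnr_loss(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """In-batch-negatives ranking loss: softmax cross-entropy over the scaled
    cosine-similarity matrix, where anchor i should rank positive i first."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                   # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.diag(log_probs).mean())     # correct pairs sit on the diagonal

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16))
b = rng.standard_normal((8, 16))
matched = mnr_loss(a, a)   # each anchor paired with itself: loss near zero
shuffled = mnr_loss(a, b)  # random pairing: noticeably higher loss
print(matched, shuffled)
```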
    

Training Hyperparameters

Non-Default Hyperparameters

  • overwrite_output_dir: True
  • per_device_train_batch_size: 2048
  • per_device_eval_batch_size: 2048
  • num_train_epochs: 1
  • fp16: True
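As a quick consistency check (assuming the single A100 listed under Training Hardware and no gradient accumulation): 21,123,868 samples at a batch size of 2048 gives 10,315 optimizer steps per epoch, which lines up with the epoch values in the training logs:

```python
import math

samples = 21_123_868   # training set size
batch_size = 2048      # per_device_train_batch_size, single GPU assumed

steps_per_epoch = math.ceil(samples / batch_size)
print(steps_per_epoch)                     # 10315
print(round(500 / steps_per_epoch, 4))     # 0.0485 — first logged row
print(round(10_000 / steps_per_epoch, 4))  # 0.9695 — last logged row
```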

All Hyperparameters

  • overwrite_output_dir: True
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 2048
  • per_device_eval_batch_size: 2048
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step     Training Loss
0.0485   500      3.89
0.0969   1000     3.373
0.1454   1500     3.1715
0.1939   2000     3.0414
0.2424   2500     2.9462
0.2908   3000     2.8691
0.3393   3500     2.8048
0.3878   4000     2.7501
0.4363   4500     2.7026
0.4847   5000     2.6601
0.5332   5500     2.6247
0.5817   6000     2.5951
0.6302   6500     2.5692
0.6786   7000     2.5447
0.7271   7500     2.5221
0.7756   8000     2.5026
0.8240   8500     2.4912
0.8725   9000     2.4732
0.9210   9500     2.4608
0.9695   10000    2.4548

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 1.944 kWh
  • Carbon Emitted: 0.717 kg of CO2
  • Hours Used: 5.34 hours
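Derived from the three figures above (and only as accurate as they are): the run averaged roughly 364 W of draw, at an implied grid carbon intensity of about 0.37 kg CO2 per kWh.

```python
energy_kwh, co2_kg, hours = 1.944, 0.717, 5.34

avg_power_w = energy_kwh / hours * 1000  # average draw over the run, in watts
intensity = co2_kg / energy_kwh          # kg CO2 emitted per kWh consumed

print(round(avg_power_w))   # 364
print(round(intensity, 3))  # 0.369
```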

Training Hardware

  • On Cloud: Yes
  • GPU Model: 1 x NVIDIA A100-SXM4-40GB
  • CPU Model: Intel(R) Xeon(R) CPU @ 2.20GHz
  • RAM Size: 83.48 GB

Framework Versions

  • Python: 3.10.16
  • Sentence Transformers: 4.1.0
  • Transformers: 4.48.3
  • PyTorch: 2.6.0+cu126
  • Accelerate: 1.3.0
  • Datasets: 3.5.1
  • Tokenizers: 0.21.0

Citation

BibTeX

JobBERT-v3 Paper

Bibtex
@misc{decorte2025multilingualjobbertcrosslingualjob,
      title={Multilingual JobBERT for Cross-Lingual Job Title Matching}, 
      author={Jens-Joris Decorte and Matthias De Lange and Jeroen Van Hautte},
      year={2025},
      eprint={2507.21609},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.21609}, 
}

Sentence Transformers

Bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CachedMultipleNegativesRankingLoss

Bibtex
@misc{gao2021scaling,
    title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year={2021},
    eprint={2101.06983},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
Specifications

  • Category: Embedding
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source