
Llama 3.2 1B Instruct

by unsloth

Open source · 144k downloads · 92 likes

Rated 2.5 / 5 (92 reviews) · Chat · API & Local
About

Llama 3.2 1B Instruct is a lightweight, optimized member of the Llama 3.2 family, designed for multilingual conversational use. It handles tasks such as dialogue, text summarization, and information retrieval, and holds up well on standard benchmarks. Thanks to its optimized architecture and advanced fine-tuning techniques, it balances efficiency and response quality, even on modest hardware. Typical use cases include virtual assistants, automatic summarization tools, and multilingual customer-support systems. The model stands out for being easy to customize for specific needs while supporting responsible, safe deployment.

Documentation

See our collection for all versions of Llama 3.2 including GGUF, 4-bit and original 16-bit formats.

Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!

We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1T5-zKWM_5OD21QHwXHiV9ixTRR7k3iB9?usp=sharing

unsloth/Llama-3.2-1B-Instruct

For more details on the model, please go to Meta's original model card
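The instruct variant expects prompts in Llama 3's chat format. As a minimal sketch of that format, assuming the standard Llama 3 special tokens (in practice, prefer the tokenizer's built-in `apply_chat_template`, which is the authoritative source):

```python
def llama3_chat_prompt(system: str, user: str) -> str:
    # Assumed Llama 3.x chat layout: each turn is wrapped in header
    # tokens and terminated by <|eot_id|>; the trailing assistant
    # header cues the model to generate its reply.
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama3_chat_prompt("You are a helpful assistant.", "Hello!")
```

This is only a readable approximation; always check the model's `tokenizer_config.json` for the exact template before relying on it.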

✨ Finetune for Free

All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.

| Unsloth supports        | Free Notebooks     | Performance | Memory use |
|-------------------------|--------------------|-------------|------------|
| Llama-3.2 (3B)          | ▶️ Start on Colab  | 2.4x faster | 58% less   |
| Llama-3.2 (11B vision)  | ▶️ Start on Colab  | 2.4x faster | 58% less   |
| Llama-3.1 (8B)          | ▶️ Start on Colab  | 2.4x faster | 58% less   |
| Phi-3.5 (mini)          | ▶️ Start on Colab  | 2x faster   | 50% less   |
| Gemma 2 (9B)            | ▶️ Start on Colab  | 2.4x faster | 58% less   |
| Mistral (7B)            | ▶️ Start on Colab  | 2.2x faster | 62% less   |
| DPO - Zephyr            | ▶️ Start on Colab  | 1.9x faster | 19% less   |
  • This conversational notebook is useful for ShareGPT ChatML / Vicuna templates.
  • This text completion notebook is for raw text. This DPO notebook replicates Zephyr.
  • * Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.

Special Thanks

A huge thank you to the Meta and Llama team for creating and releasing these models.

Model Information

The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.

Model developer: Meta

Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
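"Auto-regressive" means the model generates one token at a time, feeding each output back in as input for the next step. A toy sketch of greedy decoding, where `toy_logits` is a stand-in scoring function (not the real model):

```python
import numpy as np

def greedy_decode(logits_fn, prompt_ids, n_new, eos_id):
    # Auto-regressive loop: score the sequence so far, append the
    # argmax token, repeat until n_new tokens or end-of-sequence.
    ids = list(prompt_ids)
    for _ in range(n_new):
        logits = logits_fn(ids)      # (vocab,) scores for the next token
        nxt = int(np.argmax(logits))
        ids.append(nxt)
        if nxt == eos_id:
            break
    return ids

def toy_logits(ids, vocab=10):
    # Hypothetical "model": always prefers (last token + 1) mod vocab.
    out = np.zeros(vocab)
    out[(ids[-1] + 1) % vocab] = 1.0
    return out
```

A real model replaces `toy_logits` with a transformer forward pass, and production decoding adds sampling (temperature, top-p) instead of pure argmax.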

Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.

Llama 3.2 family of models: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
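Grouped-Query Attention shares one key/value head among a group of query heads, shrinking the KV cache that dominates inference memory. A minimal NumPy sketch of the idea (head counts and dimensions here are illustrative, not Llama 3.2's actual configuration):

```python
import numpy as np

def grouped_query_attention(q, k, v, n_groups):
    # q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d)
    # with n_q_heads == n_kv_heads * n_groups.
    n_q_heads, seq, d = q.shape
    assert n_q_heads == k.shape[0] * n_groups
    # Each KV head serves a group of query heads: repeat it per group.
    k_rep = np.repeat(k, n_groups, axis=0)
    v_rep = np.repeat(v, n_groups, axis=0)
    scores = q @ k_rep.transpose(0, 2, 1) / np.sqrt(d)
    # Causal mask: position i may only attend to positions <= i.
    mask = np.triu(np.ones((seq, seq), dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    # Numerically stable softmax over the key axis.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v_rep
```

With `n_groups = 1` this reduces to standard multi-head attention; with one KV head total it becomes multi-query attention. GQA sits between the two, trading a small quality cost for a much smaller KV cache.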

Model Release Date: Sept 25, 2024

Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.

License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).

Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go here.

Links & Resources

Specifications

  • Category: Chat
  • Access: API & Local
  • License: Open Source
  • Pricing: Open Source
  • Parameters: 1B
  • Rating: 2.5

Try Llama 3.2 1B Instruct

Access the model directly