
Solar Open 100B

by upstage

Open source · 351k downloads · 468 likes · 3.3/5 (468 reviews) · Chat · API & Local

About

Solar Open 100B is an advanced language model developed by Upstage, featuring 102 billion parameters and built on a Mixture-of-Experts (MoE) architecture. Fully trained from scratch on 19.7 trillion tokens, it excels in reasoning, instruction comprehension, and agentic capabilities while maintaining high transparency and customization tailored for the open-source community. Thanks to its MoE design, it balances the deep knowledge of a large model with the efficiency and fast inference speed of a more compact model, making it ideal for demanding professional applications. Available under the Upstage Solar license, it stands out for its flexibility, performance, and adaptability across diverse use cases, from data analysis to automating complex tasks. A quantized INT4 version is also offered to optimize deployment.

Documentation

Solar Open

Solar Open is Upstage's flagship 102B-parameter large language model, trained entirely from scratch and released under the Upstage Solar License (see LICENSE for details). As a Mixture-of-Experts (MoE) architecture, it delivers enterprise-grade performance in reasoning, instruction-following, and agentic capabilities—all while prioritizing transparency and customization for the open-source community.

Technical Report | Project Page

Highlights

  • MoE Architecture (102B / 12B): Built on a Mixture-of-Experts architecture with 102B total / 12B active parameters. This design delivers the knowledge depth of a massive model with the inference speed and cost-efficiency of a much smaller model.
  • Massive Training Scale: Pre-trained on 19.7 trillion tokens, ensuring broad knowledge coverage and robust reasoning capabilities across various domains.
  • Quantized Version Available: An official INT4 quantized model is provided by NotaAI and available at nota-ai/Solar-Open-100B-NotaMoEQuant-Int4.
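The routing behind the first bullet can be sketched as follows. This is an illustrative top-k router only, not Upstage's implementation: the function name and the random stand-in logits are invented for the example, but the shape matches the spec above (128 routed experts, top 8 selected per token, plus 1 always-active shared expert).

```python
import math
import random

def route_token(num_routed=128, top_k=8, seed=0):
    """Illustrative MoE routing for a single token (hypothetical sketch,
    not Upstage's code). Only the top_k routed experts plus one shared
    expert run, which is why only ~12B of the 102B parameters are
    active per token."""
    rng = random.Random(seed)
    # Stand-in for a learned router: one logit per routed expert.
    logits = [rng.gauss(0.0, 1.0) for _ in range(num_routed)]
    top = sorted(range(num_routed), key=logits.__getitem__, reverse=True)[:top_k]
    # Softmax over the selected logits gives the expert mixing weights.
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    weights = {i: e / total for i, e in zip(top, exps)}
    return ["shared"] + top, weights  # 1 shared + 8 routed experts fire

active, weights = route_token()
print(len(active))                      # 9 experts touched per token
print(round(sum(weights.values()), 6))  # mixing weights sum to 1.0
```

Because the remaining 120 routed experts never run for that token, compute cost scales with the 12B active parameters rather than the 102B total.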

Model Overview

  • Model Name: Solar Open 100B
  • Hugging Face ID: Upstage/Solar-Open-100B
  • Architecture: Mixture-of-Experts (MoE)
    • Total Parameters: 102.6B
    • Active Parameters: 12B (per token)
    • Experts: 129 (128 routed + 1 shared; the top 8 routed experts are selected per token)
  • Pre-training Tokens: 19.7 Trillion
  • Context Length: 128k
  • Training Hardware: NVIDIA B200 GPUs
  • License: Upstage Solar License (See LICENSE)
  • Hardware Requirements:
    • Minimum: 4x NVIDIA A100 (80GB)
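A back-of-envelope calculation shows why 4x A100 80GB is the stated floor and why the INT4 checkpoint matters. These are weight-only estimates; KV cache, activations, and framework overhead add to the real footprint:

```python
PARAMS = 102.6e9  # total parameter count from the overview above

def weight_gib(bytes_per_param):
    """Approximate weight footprint in GiB. Ignores KV cache,
    activations, and framework overhead, so treat it as a floor."""
    return PARAMS * bytes_per_param / 1024**3

bf16 = weight_gib(2.0)  # ~191 GiB: exceeds any single GPU, fits in 4x80GB (320 GiB)
int4 = weight_gib(0.5)  # ~48 GiB: the NotaAI INT4 checkpoint is far cheaper to host
print(round(bf16), round(int4))
```

So the bf16 weights alone need a multi-GPU node, while the INT4 quantization brings the weights within reach of a single 80GB accelerator (before accounting for KV cache).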

For more details, please refer to the Solar Open Technical Report.

License

This repository contains both model weights and code, which are licensed under different terms:

  1. Model weights (*.safetensors): licensed under the Upstage Solar License. See: https://huggingface.co/upstage/Solar-Open-100B/blob/main/LICENSE

  2. Code (*.py, *.json, *.jinja files): licensed under the Apache License 2.0. See: https://www.apache.org/licenses/LICENSE-2.0

Performance

Korean Benchmarks

| Category   | Benchmark                    | Solar Open (102B) | gpt-oss-120b (117B, high) | gpt-oss-120b (117B, medium) | GLM-4.5-Air (110B) |
|------------|------------------------------|-------------------|---------------------------|-----------------------------|--------------------|
| General    | KMMLU                        | 73.0              | 72.7                      | 70.3                        | 70.2               |
|            | KMMLU-Pro                    | 64.0              | 62.6                      | 60.5                        | 60.7               |
|            | CLIcK                        | 78.9              | 77.2                      | 72.9                        | 48.3               |
|            | HAE-RAE v1.1                 | 73.3              | 70.8                      | 69.6                        | 42.6               |
|            | KoBALT                       | 44.3              | 52.6                      | 45.0                        | 40.3               |
| Finance    | KBankMMLU (in-house)         | 65.5              | 62.5                      | 61.5                        | 64.7               |
| Law        | KBL                          | 65.5              | 62.8                      | 60.1                        | 60.6               |
| Medical    | KorMedMCQA                   | 84.4              | 75.8                      | 76.3                        | 80.5               |
| Math       | Ko-AIME 2024 (in-house)      | 80.3              | 90.0                      | 76.7                        | 80.0               |
|            | Ko-AIME 2025 (in-house)      | 80.0              | 90.0                      | 70.0                        | 83.3               |
|            | HRM8K                        | 87.6              | 89.5                      | 84.8                        | 86.0               |
| IF         | Ko-IFEval                    | 87.5              | 93.2                      | 86.7                        | 79.5               |
| Preference | Ko Arena Hard v2 (in-house)  | 79.9              | 79.5                      | 73.8                        | 60.4               |

English Benchmarks

| Category   | Benchmark                        | Solar Open (102B) | gpt-oss-120b (117B, high) | gpt-oss-120b (117B, medium) | GLM-4.5-Air (110B) |
|------------|----------------------------------|-------------------|---------------------------|-----------------------------|--------------------|
| General    | MMLU                             | 88.2              | 88.6                      | 87.9                        | 83.3               |
|            | MMLU-Pro                         | 80.4              | 80.4                      | 78.6                        | 81.4               |
|            | GPQA-Diamond                     | 68.1              | 78.0                      | 69.4                        | 75.8               |
|            | HLE (text only)                  | 10.5              | 18.4                      | 7.23                        | 10.8               |
| Math       | AIME 2024                        | 91.7              | 94.3                      | 77.7                        | 88.7               |
|            | AIME 2025                        | 84.3              | 91.7                      | 75.0                        | 82.7               |
|            | HMMT 2025 (Feb)                  | 73.3              | 80.0                      | 63.3                        | 66.7               |
|            | HMMT 2025 (Nov)                  | 80.0              | 73.3                      | 66.7                        | 70.0               |
| Code       | LiveCodeBench (v1–v6 cumulative) | 74.2              | 89.9                      | 82.8                        | 71.9               |
| IF         | IFBench                          | 53.7              | 70.8                      | 61.2                        | 37.8               |
|            | IFEval                           | 88.0              | 91.4                      | 86.5                        | 86.5               |
| Preference | Arena Hard v2                    | 74.8              | 79.6                      | 72.7                        | 62.5               |
|            | Writing Bench                    | 7.51              | 6.61                      | 6.55                        | 7.40               |
| Agent      | Tau² Airline                     | 52.4              | 56.0                      | 52.8                        | 60.8               |
|            | Tau² Telecom                     | 55.6              | 57.7                      | 47.4                        | 28.1               |
|            | Tau² Retail                      | 59.3              | 76.5                      | 68.4                        | 71.9               |
| Long       | AA-LCR                           | 35.0              | 48.3                      | 45.0                        | 37.3               |

Inference Quickstart

We recommend using the following generation parameters:

INI
temperature=0.8
top_p=0.95
top_k=50

Transformers

Install the required dependencies:

Bash
pip install -U "transformers>=5.0" kernels torch accelerate

Run inference with the following code:

Python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "upstage/Solar-Open-100B"

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=MODEL_ID,
    dtype=torch.bfloat16,  # `torch_dtype` was renamed to `dtype` in recent transformers releases
    device_map="auto",
)

# Prepare input
messages = [{"role": "user", "content": "who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Generate response
generated_ids = model.generate(
    **inputs,
    max_new_tokens=4096,
    temperature=0.8,
    top_p=0.95,
    top_k=50,
    do_sample=True,
)
generated_text = tokenizer.decode(generated_ids[0][inputs.input_ids.shape[1] :])
print(generated_text)

vLLM

Option 1: Using Docker (Highly Recommended)

Docker is the recommended deployment method for running Solar-Open-100B.

Bash
# For 8 GPUs
docker run --gpus all \
    --ipc=host \
    -p 8000:8000 \
    upstage/vllm-solar-open:latest \
    upstage/Solar-Open-100B \
    --trust-remote-code \
    --enable-auto-tool-choice \
    --tool-call-parser solar_open \
    --reasoning-parser solar_open \
    --logits-processors vllm.model_executor.models.parallel_tool_call_logits_processor:ParallelToolCallLogitsProcessor \
    --logits-processors vllm.model_executor.models.solar_open_logits_processor:SolarOpenTemplateLogitsProcessor \
    --tensor-parallel-size 8

Option 2: Installing from Source

For development, debugging, custom modifications, or offline inference, Solar Open can also be run from a source installation of vLLM. We recommend uv for environment management and dependency resolution.

Create and activate a Python virtual environment

Bash
uv venv --python 3.12 --seed
source .venv/bin/activate

Install Solar Open's optimized vLLM

Bash
VLLM_PRECOMPILED_WHEEL_LOCATION="https://github.com/vllm-project/vllm/releases/download/v0.12.0/vllm-0.12.0-cp38-abi3-manylinux_2_31_x86_64.whl" \
VLLM_USE_PRECOMPILED=1 \
uv pip install git+https://github.com/UpstageAI/[email protected]

Start the vLLM server (For 8 GPUs)

Bash
vllm serve upstage/Solar-Open-100B \
    --trust-remote-code \
    --enable-auto-tool-choice \
    --tool-call-parser solar_open \
    --reasoning-parser solar_open \
    --logits-processors vllm.model_executor.models.parallel_tool_call_logits_processor:ParallelToolCallLogitsProcessor \
    --logits-processors vllm.model_executor.models.solar_open_logits_processor:SolarOpenTemplateLogitsProcessor \
    --tensor-parallel-size 8
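Once the server is up, it exposes an OpenAI-compatible API on port 8000. The sketch below builds a request body for the /v1/chat/completions endpoint using the recommended generation parameters; the model name and URL assume the `vllm serve` command above, and `top_k` is passed as a vLLM sampling extension on top of the standard OpenAI fields.

```python
import json

# Request body for vLLM's OpenAI-compatible /v1/chat/completions
# endpoint, reusing the recommended sampling parameters from the
# Inference Quickstart. `top_k` is a vLLM extra sampling parameter,
# not part of the base OpenAI schema.
payload = {
    "model": "upstage/Solar-Open-100B",
    "messages": [{"role": "user", "content": "Who are you?"}],
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 50,
    "max_tokens": 512,
}
print(json.dumps(payload, indent=2))
```

POST this body to http://localhost:8000/v1/chat/completions with a `Content-Type: application/json` header (e.g. via curl or any OpenAI-compatible client pointed at the server's base URL).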

Citation

If you use Solar Open in your research, please cite:

Bibtex
@article{park2025solar,
  title={Solar Open Technical Report},
  author={Sungrae Park and Sanghoon Kim and Jungho Cho and Gyoungjin Gim and Dawoon Jung and Mikyoung Cha and Eunhae Choo and Taekgyu Hong and Minbyul Jeong and SeHwan Joo and Minsoo Khang and Eunwon Kim and Minjeong Kim and Sujeong Kim and Yunsu Kim and Hyeonju Lee and Seunghyun Lee and Sukyung Lee and Siyoung Park and Gyungin Shin and Inseo Song and Wonho Song and Seonghoon Yang and Seungyoun Yi and Sanghoon Yoon and Jeonghyun Ko and Seyoung Song and Keunwoo Choi and Hwalsuk Lee and Sunghun Kim and Du-Seong Chang and Kyunghyun Cho and Junsuk Choe and Hwaran Lee and Jae-Gil Lee and KyungTae Lim and Alice Oh},
  journal={arXiv preprint arXiv:2601.07022},
  year={2025},
  url={https://huggingface.co/papers/2601.07022}
}