by K-Compression
HyperCLOVAX SEED Think 14B GPTQ is a 4-bit quantized build of the HyperCLOVAX SEED Think 14B reasoning model. The quantization is designed to preserve response quality on complex reasoning and analysis tasks while substantially reducing memory and compute requirements. Typical use cases include writing assistance, synthesis of technical or scientific information, and decision support. It is aimed at developers, researchers, and professionals who need a capable model in resource-constrained environments.
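For intuition, the core idea behind 4-bit weight quantization can be sketched as a symmetric round-trip: weights are split into small groups, each group gets one floating-point scale, and values are rounded to the signed int4 range. This is a minimal illustration only, not the actual GPTQ algorithm (which additionally uses calibration data and error compensation); the group size of 8 here is chosen purely for readability.

```python
import numpy as np

def quantize_4bit(w, group_size=8):
    """Symmetric per-group 4-bit quantization (illustrative sketch).

    Each group of `group_size` weights shares one scale so that the
    largest magnitude in the group maps to the int4 range [-8, 7].
    """
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from int4 codes and scales."""
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=64).astype(np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale)
err = np.abs(w - w_hat).max()  # worst-case error is bounded by scale / 2 per group
```

GPTQ improves on this naive rounding by quantizing columns sequentially and redistributing the rounding error onto not-yet-quantized weights, which is why accuracy loss stays small even at 4 bits.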
This repo contains GPTQ model files for HyperCLOVAX-SEED-Think-14B.
HyperCLOVAX-SEED-Think-14B-GPTQ was quantized using gptqmodel v4.0.0, following the guide.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "K-Compression/HyperCLOVAX-SEED-Think-14B-GPTQ"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="bfloat16",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
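The memory saving from 4-bit quantization can be estimated with simple arithmetic. The figures below assume roughly 14B parameters, 2 bytes per bf16 weight, and one fp16 scale per group of 128 weights (a common GPTQ group size); the exact parameter count and group size of this checkpoint may differ.

```python
params = 14_000_000_000          # approximate parameter count, assumed for illustration
bf16_bytes = params * 2          # 2 bytes per bf16 weight

# 4-bit weights (0.5 bytes each) plus one fp16 scale per group of 128 weights
gptq_bytes = params * 0.5 + (params / 128) * 2

ratio = bf16_bytes / gptq_bytes
print(f"bf16: {bf16_bytes / 1e9:.1f} GB, GPTQ 4-bit: {gptq_bytes / 1e9:.1f} GB, "
      f"~{ratio:.1f}x smaller")
```

Under these assumptions the weights shrink from about 28 GB to roughly 7 GB, bringing the model within reach of a single consumer GPU.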
| Model | MMLU (0-shot) | HAERAE (0-shot) |
|---|---|---|
| HyperCLOVA X SEED 14B Think | 0.7144 | 0.8130 |
| HyperCLOVA X SEED 14B Think-GPTQ | 0.7018 | 0.8139 |
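The accuracy cost of quantization implied by the table above is small and can be read off directly:

```python
# Scores copied from the benchmark table above
mmlu_fp, mmlu_q = 0.7144, 0.7018
haerae_fp, haerae_q = 0.8130, 0.8139

mmlu_drop = mmlu_fp - mmlu_q         # ~1.3 points lost on MMLU
haerae_delta = haerae_q - haerae_fp  # HAERAE is essentially unchanged
```

In other words, 4-bit quantization costs about 1.3 MMLU points while HAERAE is flat to slightly better, which is within typical run-to-run noise.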
The model is licensed under the HyperCLOVA X SEED Model License Agreement.