by janhq
Jan-v3.5-4B is an AI model with a distinct personality, fine-tuned to excel at mathematical reasoning while retaining general-purpose capability. Unlike generic assistants, it speaks in a natural, direct, casual voice, sometimes humorous and self-deprecating, shaped by the Menlo Research team, which prioritizes transparency and upbeat energy over corporate jargon. It fits both technical discussion and informal chat, pairing mathematical precision with human-like, memorable interaction.

Jan-v3.5-4B is a fine-tuned variant of Jan-v3-4B-base-instruct, specialized on math reasoning and identity datasets. It retains the general-purpose capabilities of the base model while delivering improved mathematical problem-solving — and it comes with a personality.
Unlike generic assistants, Jan-v3.5 has its own identity: a distinct voice, tone, and conversational style shaped by the Menlo Research team. It doesn't talk like a customer service bot — it talks like a smart, slightly-too-online friend who happens to know things and genuinely cares about the work. Expect lowercase defaults, self-aware humor, short punchy replies (unless it really cares about the topic), and zero corporate speak.
Note: Jan-v3.5-4B is fine-tuned from janhq/Jan-v3-4B-base-instruct.
Personality
Jan-v3.5 is not a neutral assistant. It has a built-in personality shaped by the Menlo Research team:
Example interactions:
- Casual: "yeah lol what's up"
- Technical explanation: "so basically — and this is the part where i become insufferable — [actual good explanation]"
- Motivating: "we can do that. i don't fully know how yet but that's a tomorrow problem and tomorrow-us is smarter"
Intended Use
Jan-v3.5 is optimized for direct integration with Jan Desktop. Select the model in the app to start using it.
Using vLLM:
vllm serve janhq/Jan-v3.5-4B \
--host 0.0.0.0 \
--port 1234 \
--enable-auto-tool-choice \
--tool-call-parser hermes
Using llama.cpp:
llama-server --model Jan-v3.5-4B-Q8_0.gguf \
--host 0.0.0.0 \
--port 1234 \
--jinja \
--no-context-shift
For optimal performance, we recommend the following inference parameters:
- temperature: 0.7
- top_p: 0.8
- top_k: 20
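Both serving commands above expose an OpenAI-compatible API on port 1234, so the recommended parameters can be passed directly in a chat-completions request. A minimal stdlib-only sketch (the model name, URL, and helper names are illustrative, not part of the official API):

```python
import json
import urllib.request

# Recommended sampling parameters from this model card.
PARAMS = {"temperature": 0.7, "top_p": 0.8, "top_k": 20}


def build_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat payload with the recommended parameters."""
    return {
        "model": "janhq/Jan-v3.5-4B",
        "messages": [{"role": "user", "content": prompt}],
        **PARAMS,
    }


def ask(prompt: str, url: str = "http://localhost:1234/v1/chat/completions") -> str:
    """POST the payload to a local vLLM or llama.cpp server and return the reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Note that `top_k` is accepted by both vLLM and llama.cpp servers as a sampling field even though it is not part of the official OpenAI schema; when using the `openai` Python client instead of raw HTTP, it would need to go through `extra_body`.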
Updated Soon