by janhq
Jan-v3.5-4B is an AI model with a distinct personality, fine-tuned to excel at mathematical reasoning while retaining general-purpose capabilities. Unlike generic assistants, it adopts a natural, at times humorous and self-deprecating tone, with a direct, laid-back conversational style marked by short, punchy replies. Specialized in solving complex problems, it stands out for its authentic voice, shaped by the Menlo Research team, which favors transparency and upbeat energy over corporate jargon. Ideal for engaging conversations or applications that need both mathematical precision and human-feeling interaction, it handles technical discussions and informal chat equally well. Its distinctive approach makes it a tool that is both capable and memorable, a far cry from traditional assistants.

Jan-v3.5-4B is a fine-tuned variant of Jan-v3-4B-base-instruct, specialized on math reasoning and identity datasets. It retains the general-purpose capabilities of the base model while delivering improved mathematical problem-solving — and it comes with a personality.
Unlike generic assistants, Jan-v3.5 has its own identity: a distinct voice, tone, and conversational style shaped by the Menlo Research team. It doesn't talk like a customer service bot — it talks like a smart, slightly-too-online friend who happens to know things and genuinely cares about the work. Expect lowercase defaults, self-aware humor, short punchy replies (unless it really cares about the topic), and zero corporate speak.
Note: Jan-v3.5-4B is fine-tuned from janhq/Jan-v3-4B-base-instruct.
Training Data
Jan-v3.5 is not a neutral assistant: its identity datasets give it a built-in personality shaped by the Menlo Research team.
Example interactions:
- Casual: "yeah lol what's up"
- Technical explanation: "so basically — and this is the part where i become insufferable — [actual good explanation]"
- Motivating: "we can do that. i don't fully know how yet but that's a tomorrow problem and tomorrow-us is smarter"
Intended Use

Jan-v3.5 is optimized for direct integration with Jan Desktop. Select the model in the app to start using it.
Using vLLM:
```
vllm serve janhq/Jan-v3.5-4B \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```
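With `--enable-auto-tool-choice` and the hermes parser enabled, the server accepts OpenAI-style tool definitions in the request body. A minimal sketch of such a request (the `get_weather` tool is a hypothetical example, not something shipped with the model):

```python
import json

# Hypothetical tool definition in the OpenAI function-calling schema,
# which the hermes tool-call parser translates for the model.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Request body for POST /v1/chat/completions on the server started above.
body = {
    "model": "janhq/Jan-v3.5-4B",
    "messages": [{"role": "user", "content": "weather in hanoi?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",
}
print(json.dumps(body, indent=2))
```

When the model decides to call the tool, the response carries a `tool_calls` entry instead of plain text, which your client executes and feeds back as a `tool` message.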
Using llama.cpp:
```
llama-server --model Jan-v3.5-4B-Q8_0.gguf \
  --host 0.0.0.0 \
  --port 1234 \
  --jinja \
  --no-context-shift
```
For optimal performance, we recommend the following inference parameters:
- temperature: 0.7
- top_p: 0.8
- top_k: 20
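A minimal Python sketch of calling the local endpoint with these parameters, assuming one of the servers above is running on port 1234 (`top_k` is a server-side extension accepted by vLLM and llama.cpp, not part of the core OpenAI schema):

```python
import json
import urllib.request

def build_request(prompt: str) -> dict:
    # Recommended sampling parameters from this card.
    return {
        "model": "janhq/Jan-v3.5-4B",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "top_p": 0.8,
        "top_k": 20,
    }

def chat(prompt: str, base_url: str = "http://localhost:1234/v1") -> str:
    # POST to the OpenAI-compatible chat endpoint and return the reply text.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same request works against either backend, since both expose the OpenAI-compatible `/v1/chat/completions` route.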