Qwen2.5-HA-0.5B-Instruct is a smart home model fine-tuned from Qwen2.5-0.5B-Instruct, with approximately 500 million parameters. The main features of this model include:
Model Type: Causal Language Model
Training Stages: Pre-training and post-training
Architecture: Transformer, using RoPE, SwiGLU, RMSNorm, Attention QKV bias, and tied word embeddings
Number of Parameters: 490 million (360 million non-embedding parameters)
Number of Layers: 24 layers
Number of Attention Heads (GQA): 14 query heads, 2 key-value heads
Context Length: Supports the full 32,768-token context window, with generation of up to 8,192 tokens
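These figures match the configuration shipped with the public base model, so they can be cross-checked directly. The sketch below reads the relevant fields with Hugging Face transformers; it references only the base-model repo id, since the Hub location of the fine-tuned weights is not given here, and the field names follow the standard Qwen2 configuration.

```python
# Cross-check the architecture figures above against the base model's
# published configuration. Requires the `transformers` package; this uses
# the public base model, not the fine-tuned smart home variant.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

print(config.num_hidden_layers)        # 24 transformer layers
print(config.num_attention_heads)      # 14 query heads
print(config.num_key_value_heads)      # 2 key-value heads (GQA)
print(config.max_position_embeddings)  # 32,768-token context
print(config.tie_word_embeddings)      # True: tied word embeddings
```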
This model shows significant improvements in instruction understanding, long-text generation, and comprehension of structured data. It supports 29 languages, including English, Chinese, and French. It has been fine-tuned on smart home datasets and can produce structured output when given an appropriate system prompt.
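As a rough illustration of system-prompt-driven structured output, the sketch below loads the model with Hugging Face transformers and asks for a smart home command. The model path and the wording of the system prompt are placeholders, not the exact checkpoint location or the prompt format used during fine-tuning.

```python
# Minimal sketch: structured output via a system prompt.
# The model path and the system prompt text are illustrative assumptions;
# substitute the actual fine-tuned checkpoint and its trained prompt format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "qwen2.5-ha-0.5b-instruct"  # placeholder path to the fine-tuned weights
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

messages = [
    # Hypothetical system prompt asking for machine-readable output.
    {"role": "system", "content": "You are a smart home assistant. Reply only with JSON."},
    {"role": "user", "content": "Turn on the living room light."},
]

# Qwen2.5 chat models use the tokenizer's built-in chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```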
Available NPU Models
Home Assistant Fine-tuned Models
qwen2.5-ha-0.5b-ctx-ax630c
Compared to the base model, it provides a longer context window and stable structured output in Home Assistant-specific JSON format.
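The exact JSON schema the model emits depends on the fine-tuning data, but output shaped like a Home Assistant service call can be forwarded more or less directly to Home Assistant's REST API. The sketch below assumes a reply of the form {"service": "light.turn_on", "entity_id": "light.living_room"}; treat the schema and field names as assumptions to verify against the model's actual output.

```python
# Hypothetical consumer for the model's structured output.
# Assumes the reply is a JSON object with "service" (e.g. "light.turn_on")
# and "entity_id" fields; the real schema may differ and should be checked
# against what the fine-tuned model actually produces.
import json
import requests

HASS_URL = "http://homeassistant.local:8123"   # your Home Assistant instance
HASS_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"    # created in the HA user profile

def call_service_from_reply(reply: str) -> None:
    command = json.loads(reply)
    domain, service = command["service"].split(".", 1)
    # Home Assistant REST API: POST /api/services/<domain>/<service>
    response = requests.post(
        f"{HASS_URL}/api/services/{domain}/{service}",
        headers={"Authorization": f"Bearer {HASS_TOKEN}"},
        json={"entity_id": command["entity_id"]},
        timeout=10,
    )
    response.raise_for_status()

call_service_from_reply('{"service": "light.turn_on", "entity_id": "light.living_room"}')
```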