Tencent releases versatile open-source Hunyuan AI models

Tencent has expanded its family of open-source Hunyuan AI models with a new series of compact models designed for broad use. The new family is engineered to deliver strong performance across computational environments, from small edge devices to demanding, high-concurrency production systems.

The release includes a comprehensive set of pre-trained and instruction-tuned models available on the developer platform Hugging Face. The models come in several sizes, specifically with parameter scales of 0.5B, 1.8B, 4B, and 7B, providing substantial flexibility for developers and businesses.

Tencent has indicated that these models were developed using training strategies similar to its more powerful Hunyuan-A13B model, allowing them to inherit its performance characteristics. This approach enables users to select the optimal model for their needs, whether it’s a smaller variant for resource-constrained edge computing or a larger model for high-throughput production workloads, all while ensuring strong capabilities.

One of the most notable features of the Hunyuan series is its native support for an ultra-long 256K context window. This allows the models to handle and maintain stable performance on long-text tasks, a vital capability for complex document analysis, extended conversations, and in-depth content generation. The models support what Tencent calls “hybrid reasoning,” which allows for both fast and slow thinking modes that users can choose between depending on their specific requirements.

The company has also placed a strong emphasis on agentic capabilities. The models have been optimised for agent-based tasks and have demonstrated leading results on established benchmarks such as BFCL-v3, τ-Bench, and C3-Bench, suggesting a high degree of proficiency in complex, multi-step problem-solving. For instance, on the C3-Bench, the Hunyuan-7B-Instruct model achieves a score of 68.5, while the Hunyuan-4B-Instruct model scores 64.3.

Efficient inference is another focus of the series. Tencent’s Hunyuan models utilise Grouped Query Attention (GQA), a technique that improves processing speed and reduces memory overhead by letting groups of query heads share a single set of key-value heads. This efficiency is further enhanced by advanced quantisation support, a key element of the Hunyuan architecture designed to lower deployment barriers.
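To make the idea concrete, here is a minimal NumPy sketch of grouped query attention — an illustration of the general technique, not Hunyuan's actual implementation. Eight query heads share two key-value heads, so the KV cache is four times smaller than in full multi-head attention:

```python
import numpy as np

def grouped_query_attention(q, k, v, n_kv_heads):
    """Minimal GQA sketch: q has n_heads, while k/v have only n_kv_heads;
    each KV head is shared by n_heads // n_kv_heads query heads."""
    n_heads, seq_len, d = q.shape
    group = n_heads // n_kv_heads
    out = np.empty_like(q)
    for h in range(n_heads):
        kv = h // group                        # which KV head this query head uses
        scores = q[h] @ k[kv].T / np.sqrt(d)   # scaled dot-product attention
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[h] = weights @ v[kv]
    return out

# 8 query heads sharing 2 KV heads
q = np.random.randn(8, 16, 64)
k = np.random.randn(2, 16, 64)
v = np.random.randn(2, 16, 64)
print(grouped_query_attention(q, k, v, n_kv_heads=2).shape)  # (8, 16, 64)
```

Because only the KV heads are cached during generation, shrinking their number is what cuts memory traffic at long context lengths.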

Tencent has developed its own compression toolset, AngelSlim, to create a more user-friendly and effective model compression solution. Using this tool, the company offers two main types of quantisation for the Hunyuan series.

The first is FP8 static quantisation, which employs an 8-bit floating-point format. This method uses a small amount of calibration data to pre-determine the quantisation scale without requiring full retraining, converting model weights and activation values into the FP8 format to boost inference efficiency.
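The static-calibration idea can be sketched in a few lines. This is an illustrative simulation only — the E4M3 range constant and the crude mantissa rounding stand in for real FP8 hardware behaviour, and none of it reflects AngelSlim's actual implementation:

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite magnitude in the E4M3 format

def calibrate_scale(calib_batches):
    """Static calibration: pre-compute one quantisation scale from a small
    calibration set, so no per-input rescaling is needed at inference time."""
    amax = max(np.abs(b).max() for b in calib_batches)
    return amax / FP8_E4M3_MAX

def fake_quant_fp8(x, scale, mantissa_bits=3):
    """Simulated FP8 round-trip: scale, clip to the E4M3 range, round to a
    3-bit mantissa, then rescale back to the original range."""
    q = np.clip(x / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    m, e = np.frexp(q)  # q = m * 2**e with 0.5 <= |m| < 1
    q = np.ldexp(np.round(m * 2**mantissa_bits) / 2**mantissa_bits, e)
    return q * scale

rng = np.random.default_rng(0)
calib = [rng.standard_normal(1024) for _ in range(4)]  # stand-in activations
scale = calibrate_scale(calib)
x = rng.standard_normal(1024)
err = np.abs(fake_quant_fp8(x, scale) - x).mean()
print(f"mean absolute round-trip error: {err:.4f}")  # small vs. unit-scale inputs
```

The key point is that the scale is fixed once from calibration data, which is what makes the method "static" and avoids any retraining.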

The second method is INT4 quantisation, which achieves W4A16 quantisation through the GPTQ and AWQ algorithms:

  • The GPTQ approach processes model weights layer by layer, using calibration data to minimise errors in the quantised weights. This process avoids requiring model retraining and improves inference speed.
  • The AWQ algorithm works by statistically analysing the amplitude of activation values from a small set of calibration data. It then calculates a scaling coefficient for each weight channel, which expands the numerical range of important weights to retain more information during the compression process. 
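A toy sketch of the AWQ idea described above — scaling salient weight channels by observed activation amplitude before rounding weights to 4 bits — might look like the following. This is an illustration of the principle only; the real algorithm also folds the scales into the preceding layer and searches for the best scaling exponent:

```python
import numpy as np

def awq_style_quantise(weights, calib_acts, alpha=0.5):
    """AWQ-flavoured W4A16 sketch: scale each input channel by its average
    activation magnitude from calibration data, round the scaled weights to
    4-bit integers, then dequantise (activations stay in 16-bit)."""
    # per-input-channel activation amplitude from a small calibration set
    act_amp = np.abs(calib_acts).mean(axis=0) + 1e-8
    s = act_amp ** alpha                 # expand the range of salient channels
    w_scaled = weights * s[:, None]      # fold the scales into the weights
    # symmetric 4-bit quantisation, one step size per output column
    qmax = 7                             # int4 range: [-8, 7]
    step = np.abs(w_scaled).max(axis=0) / qmax + 1e-8
    w_int4 = np.clip(np.round(w_scaled / step), -8, qmax)
    # dequantise and fold the scales back out for comparison
    return (w_int4 * step) / s[:, None]

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 32)) * 0.02     # stand-in weight matrix
X = rng.standard_normal((128, 64))           # stand-in calibration activations
W_deq = awq_style_quantise(W, X)
print(f"mean weight error: {np.abs(W_deq - W).mean():.5f}")
```

Channels that see large activations get a larger scale, so they lose less precision when rounded — which is the intuition behind preserving "important" weights.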

Developers can either use the AngelSlim tool themselves or download the pre-quantised models directly.

Performance benchmarks confirm the strong capabilities of the Tencent Hunyuan models across a range of tasks. The pre-trained Hunyuan-7B model, for example, achieves a score of 79.82 on the MMLU benchmark, 88.25 on GSM8K, and 74.85 on the MATH benchmark, demonstrating solid reasoning and mathematical skills.

The instruction-tuned variants show impressive results in specialised areas. In mathematics, the Hunyuan-7B-Instruct model scores 81.1 on the AIME 2024 benchmark, while the 4B version scores 78.3. In science, the 7B model reaches 76.5 on OlympiadBench, and in coding, it scores 42 on LiveCodeBench.

The quantisation benchmarks show minimal performance degradation. On the DROP benchmark, the Hunyuan-7B-Instruct model scores 85.9 in its base BF16 format, 86.0 with FP8, and 85.7 with Int4 GPTQ, indicating that efficiency gains do not come at a cost to accuracy.

For deployment, Tencent recommends using established frameworks like TensorRT-LLM, vLLM, or SGLang to serve the Hunyuan models and create OpenAI-compatible API endpoints, ensuring they can be integrated smoothly into existing development workflows. This combination of performance, efficiency, and deployment flexibility positions the Hunyuan series as a strong contender in open-source AI.
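As an example, serving one of the models behind an OpenAI-compatible endpoint with vLLM might look like this — the model ID and flags are illustrative, so check the Hugging Face model card for the exact repository name and recommended settings:

```shell
# Serve a Hunyuan model with vLLM's OpenAI-compatible server
pip install vllm
vllm serve tencent/Hunyuan-7B-Instruct --max-model-len 32768

# Query it with the standard OpenAI chat-completions schema (default port 8000)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "tencent/Hunyuan-7B-Instruct",
        "messages": [{"role": "user", "content": "Summarise GQA in one sentence."}]
      }'
```

Because the endpoint follows the OpenAI API schema, existing client libraries can be pointed at it by changing only the base URL.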

See also: Deep Cogito v2: Open-source AI that hones its reasoning skills


The post Tencent releases versatile open-source Hunyuan AI models appeared first on AI News.
