Inference Model Basic Edition
SiliconStorm Agent mini
AI office assistant: ideation, inspiration, and creation | Private knowledge base (RAG) | Text-to-image generation | DeepSeek R1 32B large model
Each unit supports 20+ users with chain-of-thought (CoT) and deep logical reasoning capabilities
Dual-socket Intel CPUs | 4× NVIDIA-series GPUs (24 GB) | 512 GB memory
*Actual product hardware will be no less than the listed configuration; server appearance and brand are subject to the delivery contract.
Inference Model Advanced Edition
SiliconStorm Agent pro
All functions of the Basic Edition (mini) | Enterprise-level knowledge base management / prompt fine-tuning interface | Text-to-speech generation | DeepSeek R1 32B or DeepSeek R1 70B large model
Each unit supports 100+ users, with enterprise-level permission management, chain-of-thought (CoT), and deep logical reasoning capabilities
Dual-socket Intel CPUs | 8× NVIDIA-series GPUs (48 GB) | 1 TB memory
Inference Model Flagship Edition
SiliconStorm Agent prime
All features of the Advanced Edition (pro) | More enterprise-level management features | Faster inference and more reliable models | DeepSeek R1 671B full model
Each unit supports 100+ users | Enterprise-level permission management | RAG tuning / prompt tuning | Native chain-of-thought (CoT) and deep logical reasoning capabilities
Dual-socket Intel CPUs | 8× NVIDIA SXM-series GPUs (96 GB) | NVLink interconnect bandwidth 900 GB/s | 2 TB memory | All-SSD storage