Beijing Solution
Computing Infrastructure
Product
Digital Intelligence Computing Power Platform
User Pain Points
It addresses common bottlenecks in the AI industry's new phase—such as the difficult commercialization of domestic chips and challenges in deploying large-scale models in real-world scenarios—while also tackling systemic issues intelligent computing centers face, including chip selection, heterogeneous resource management, scheduling optimization, model adaptation, integrated training and inference, and scenario matching.
Solution Features
The platform tackles challenges in domestic computing power supply and demand through core technological breakthroughs such as multi-level hybrid pooling and hybrid training-inference deployment, significantly enhancing the performance acceleration and optimization capabilities of heterogeneous domestic computing resources. Integrated with a high-quality AI toolchain, it employs a software-defined hardware approach to deeply adapt and accelerate large-scale model training and inference.
Target Users
Provincial and municipal governments, central and state-owned enterprises, leading industry enterprises, and small-to-medium enterprises.
Best Practices
Hybrid Cluster of Domestic Computing Chips
The platform manages a thousand-card-scale, heterogeneous AI computing cluster that incorporates diverse domestic chips, with compatibility for 11 types of domestic AI accelerators, and provides full support for training and inference of large models like DeepSeek and Qwen.
Heterogeneous Computing Power Acceleration & Optimization
Utilizes core breakthroughs like multi-level hybrid pooling and hybrid training-inference deployment to support unified management and scheduling of diverse computing resources (including eight flexible scheduling strategies). Additionally, it enables seamless model migration between different types of computing hardware.
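The unified management and scheduling described above can be illustrated with a minimal sketch. The accelerator names, fields, and the two strategies below are hypothetical stand-ins (the platform's actual eight strategies are not detailed in this document); the sketch only shows the idea of selecting a device from a mixed pool by a pluggable policy.

```python
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str          # placeholder for a domestic AI chip model
    memory_gb: int     # device memory capacity
    load: float = 0.0  # fraction of capacity currently in use

def least_loaded(pool):
    """One illustrative strategy: pick the accelerator with the lowest load."""
    return min(pool, key=lambda a: a.load)

def best_fit_memory(pool, required_gb):
    """Another illustrative strategy: smallest device that still fits the job."""
    fitting = [a for a in pool if a.memory_gb >= required_gb]
    return min(fitting, key=lambda a: a.memory_gb) if fitting else None

# A single pool mixing accelerator types, in the spirit of hybrid pooling.
pool = [Accelerator("chip-A", 32, load=0.7),
        Accelerator("chip-B", 64, load=0.2),
        Accelerator("chip-C", 16, load=0.1)]

print(least_loaded(pool).name)         # chip-C
print(best_fit_memory(pool, 24).name)  # chip-A
```

A real scheduler would also track queueing, priorities, and topology, but the pattern is the same: one resource pool, many interchangeable placement policies.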
Deep Adaptation & Acceleration for Large Model Training/Inference
Leveraging the architectural characteristics of the underlying hardware, the project developed core optimization technologies such as computation graph optimization, hardware-aware optimization, continuous batching optimization, prefix caching optimization, and load balancing optimization. Together, these significantly improve LLM training efficiency, inference speed, and overall performance on domestic hardware platforms.
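Prefix caching, one of the optimizations named above, can be sketched in a few lines. This is not the platform's implementation: the function name is hypothetical, and doubling each token stands in for the real (expensive) KV-cache computation. The point is only that requests sharing a prompt prefix reuse previously computed work for that prefix and compute just the suffix.

```python
# Illustrative prefix cache: map token prefixes to their "computed" results.
cache = {}

def encode_with_prefix_cache(tokens):
    """Return (result, hits) where hits = tokens served from the cache."""
    # Find the longest already-cached prefix of this token sequence.
    for cut in range(len(tokens), 0, -1):
        key = tuple(tokens[:cut])
        if key in cache:
            reused = cache[key]
            break
    else:
        cut, reused = 0, []
    # Compute only the remaining suffix (t * 2 is a stand-in for real work).
    computed = reused + [t * 2 for t in tokens[cut:]]
    # Cache every prefix of the result so later requests can reuse it.
    for i in range(1, len(tokens) + 1):
        cache[tuple(tokens[:i])] = computed[:i]
    return computed, cut

out1, hits1 = encode_with_prefix_cache([1, 2, 3, 4])  # cold: hits1 == 0
out2, hits2 = encode_with_prefix_cache([1, 2, 3, 5])  # shared prefix: hits2 == 3
print(hits1, hits2)  # 0 3
```

In production systems the cached objects are attention KV tensors and eviction policy matters, but the reuse pattern is the same.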
  • Provider
  • Beijing Electronic Digital & Intelligence Technology Co., Ltd.
  • Contact
  • Li Xiao    18600013021     lixiao@bedicloud.com

  • Provider Profile
  • Beijing Electronic Digital & Intelligence (BEDI) is an industrial company established by Beijing Electronic Holding (BEHC) as part of its strategic expansion into the artificial intelligence field. BEDI specializes in original, revolutionary, and leading innovations. With the mission of "Building a Digital China", BEDI envisions creating a future-oriented AI base and AI productivity engine to accelerate China's next-generation industrial revolution. To achieve this, BEDI is building an innovative paradigm centered on "1 AI Base + 2 Industrial Platforms," aiming for breakthroughs in foundational capabilities through AI computing infrastructure and AI data services. Through its Traditional Industry Upgrade Platform and Emerging Industry Acceleration Platform, BEDI promotes the advancement of industrial AI and fosters the innovation of AI industrialization, thereby accelerating the deep integration of artificial intelligence with the real economy.