Heterogeneous-Accelerator Migration Sub-group

Project Facts

Project Creation Date: 2023/08/08

Primary Contact: Qihui Zhao, zhaoqihui@chinamobile.com

Project Leads: Qihui Zhao, zhaoqihui@chinamobile.com & Lei Huang, huangleiyjy@chinamobile.com

Committers:

Mailing List: computing-force-network@lists.opendev.org

Meetings: No dedicated sub-group meeting time; the sub-group uses the bi-weekly meeting of the CFN WG.

Repository: https://opendev.org/cfn/computing-native

StoryBoard: https://storyboard.openstack.org/#!/project_group/computing-force-network

Open Bugs: N/A

Introduction

The intelligent computing ecosystem is mainly composed of middleware/frameworks + tool chains + hardware. Each vendor builds a corresponding tool chain around its own hardware and generates branch versions matching different AI frameworks.

As the ecosystem becomes increasingly diverse, cross-architecture and cross-stack migration of upper-layer applications is extremely complex, which brings development challenges to application developers, computing force service providers, and chip vendors.

To face these ecological challenges, we proposed a technology named Heterogeneous-Accelerator Migration Technology. Its goal is to break the existing tightly coupled compile-link-execute tool chain ecology, establish a new collaboration mechanism, shield the underlying hardware differences, and realize cross-architecture, transparent migration and execution of applications. It also aims to build a traction model for the intelligent computing industry chain with software as the core and to prosper the intelligent computing industry ecology.

The Heterogeneous-Accelerator Migration Technology architecture mainly consists of two layers: the heterogeneous-accelerator migration abstraction layer and the computing force pooling layer. The abstraction layer mainly includes native interfaces based on a unified programming model, converters, and a hardware-native stack formed by a cross-architecture comprehensive compiler and runtime, which together generate a unified executable program format. The computing force pooling layer mainly consists of components for heterogeneous computing power registration management, scheduling, and pooling, achieving unified management and pooled execution of heterogeneous computing power resources.
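
To make the abstraction layer concrete, the following minimal C++ sketch shows one way such a layer could expose a vendor-neutral runtime surface while dispatching memory management and kernel launches to per-vendor backends. All names here (hamt::Backend, hamt::Runtime, hamt::Kernel) are hypothetical and are not the project's actual API:

    // Hypothetical illustration only: the hamt::* names are invented for this
    // sketch and do not come from the computing-native repository.
    #include <cstddef>
    #include <memory>
    #include <string>
    #include <vector>

    namespace hamt {

    // Handle to a kernel loaded from the unified executable program format.
    struct Kernel { std::string name; };

    // One Backend per vendor stack (e.g. CUDA, ROCm, CANN); the abstraction
    // layer dispatches to whichever backend matches the pooled device in use.
    class Backend {
     public:
      virtual ~Backend() = default;
      virtual void* Allocate(std::size_t bytes) = 0;
      virtual void Free(void* ptr) = 0;
      virtual void Launch(const Kernel& kernel, const std::vector<void*>& args) = 0;
    };

    // Vendor-neutral runtime surface seen by applications; because no
    // vendor-specific type leaks out, the same program can migrate across
    // architectures without source changes.
    class Runtime {
     public:
      explicit Runtime(std::unique_ptr<Backend> backend)
          : backend_(std::move(backend)) {}
      void* Malloc(std::size_t bytes) { return backend_->Allocate(bytes); }
      void Free(void* ptr) { backend_->Free(ptr); }
      void Run(const Kernel& kernel, const std::vector<void*>& args) {
        backend_->Launch(kernel, args);
      }
     private:
      std::unique_ptr<Backend> backend_;
    };

    }  // namespace hamt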

Documentation & Training

N/A

Release Planning & Release Notes

For the 2025 release:

1. Update and documentation of the cross-arch adaptive runtime API

Previous Releases

Release 2024 contents:

1. Implementation of the cross-arch adaptive runtime API

2. Runtime adaptation of the Nvidia backend (a hypothetical usage sketch follows below)
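
As a reading aid for these two items, the self-contained C++ sketch below imagines how an application might drive a cross-arch adaptive runtime with an "nvidia" backend selected. The runtime here is a CPU stand-in so the example runs without CUDA installed, and every name in it is hypothetical rather than the repository's real API:

    // Hypothetical usage sketch; the real cross-arch adaptive runtime API lives
    // in the computing-native repository and may look different.
    #include <cstddef>
    #include <cstdio>

    // Stand-in runtime: the "nvidia" backend is simulated on the CPU so the
    // sketch stays runnable; a real backend would load a device binary instead.
    struct CrossArchRuntime {
      explicit CrossArchRuntime(const char* backend) {
        std::printf("selected backend: %s\n", backend);
      }
      float* Alloc(std::size_t n) { return new float[n]; }
      void Free(float* p) { delete[] p; }
      // Launches the unified-format kernel "vector_add" (a CPU loop in this stub).
      void RunVectorAdd(const float* a, const float* b, float* c, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) c[i] = a[i] + b[i];
      }
    };

    int main() {
      CrossArchRuntime rt("nvidia");  // backend adapted in the 2024 release
      const std::size_t n = 4;
      float* a = rt.Alloc(n);
      float* b = rt.Alloc(n);
      float* c = rt.Alloc(n);
      for (std::size_t i = 0; i < n; ++i) { a[i] = float(i); b[i] = 2.0f * float(i); }
      rt.RunVectorAdd(a, b, c, n);
      for (std::size_t i = 0; i < n; ++i) std::printf("%g ", c[i]);  // prints 0 3 6 9
      std::printf("\n");
      rt.Free(a); rt.Free(b); rt.Free(c);
      return 0;
    }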

Release 2023 contents:

1. Heterogeneous-Accelerator Migration Technology solution: an introduction to the Heterogeneous-Accelerator Migration Technology solution, including a user guide, architecture description, etc.

2. Key component images of the Heterogeneous-Accelerator Migration Platform: user-provided model computation graphs and program code are compiled and executed across architectures, and key tool components are provided.

Description
Computing native aims to build an open and unified compiling platform that eliminates the differences between heterogeneous hardware such as GPUs, FPGAs, etc.