
Project Information

Zero-Bubble Pipeline Parallelism

Pipeline-parallelism technique for improving training efficiency

Direction
pipeline parallelism
Goal
reduce pipeline bubbles
Institution
Sea AI Lab (SAIL)
Group
International corporate lab
Category
Training-efficiency optimization
Status
Research open source
Launch
2023-11
Language / Form
Python
License
Not specified
GitHub Stars
452
Last updated
2026-05-04

Zero-Bubble Pipeline Parallelism is Sea AI Lab’s systems work on large-model training efficiency, aiming to reduce idle time in pipeline parallelism.

Description

Pipeline parallelism splits a model into stages and processes micro-batches across devices. The problem is that devices often wait for each other, creating "bubbles."
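
To make the cost concrete, here is a minimal sketch (not taken from the Zero-Bubble repository) of the standard bubble-fraction formula: with p pipeline stages and m micro-batches, a synchronous schedule such as GPipe or 1F1B leaves each device idle for (p - 1) of the (m + p - 1) total time slots. Raising the micro-batch count shrinks the bubble but never eliminates it.

```python
def bubble_fraction(num_stages: int, num_microbatches: int) -> float:
    """Idle fraction of a synchronous pipeline schedule (GPipe/1F1B)."""
    p, m = num_stages, num_microbatches
    return (p - 1) / (m + p - 1)

if __name__ == "__main__":
    # More micro-batches shrink the bubble but never remove it entirely.
    for m in (4, 16, 64):
        print(f"p=8, m={m}: bubble = {bubble_fraction(8, m):.1%}")
```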

Zero-Bubble exploits the fact that the backward pass splits into two parts: computing gradients with respect to a layer's input (which the previous stage is waiting on) and with respect to its weights (which can be deferred). By rescheduling the deferred weight-gradient work into the pipeline's gaps, devices spend far less time idle and large-model training throughput rises.
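
The sketch below is hypothetical code, not the repository's implementation; the Linear class and method names are invented for illustration. It shows the decomposition for a single linear layer y = x @ W: the input gradient grad_y @ W.T is on the critical path, while the weight gradient x.T @ grad_y depends only on cached tensors and can run later, in what used to be a bubble.

```python
import numpy as np

class Linear:
    """Toy linear layer with the backward pass split into B and W steps."""

    def __init__(self, d_in: int, d_out: int, rng: np.random.Generator):
        self.W = rng.standard_normal((d_in, d_out)) * 0.02
        self.grad_W = np.zeros_like(self.W)

    def forward(self, x: np.ndarray) -> np.ndarray:
        self._x = x          # cache the input for the deferred W step
        return x @ self.W

    def backward_input(self, grad_y: np.ndarray) -> np.ndarray:
        self._grad_y = grad_y        # cache for the deferred W step
        return grad_y @ self.W.T     # B: unblocks the previous stage now

    def backward_weight(self) -> None:
        self.grad_W += self._x.T @ self._grad_y  # W: runs whenever idle

rng = np.random.default_rng(0)
layer = Linear(4, 3, rng)
x = rng.standard_normal((2, 4))

y = layer.forward(x)
grad_x = layer.backward_input(np.ones_like(y))  # sent upstream right away
# ... the device can process other micro-batches' F/B work here ...
layer.backward_weight()                         # fills a former bubble
```

Decoupling the two matrix products gives the scheduler a pool of deferrable work, which is what lets a zero-bubble schedule pack the pipeline tightly.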

Relationship to AI

Training efficiency is a hidden battleground in foundation-model competition: less idle time means the same compute budget buys more tokens, larger models, or more training iterations.

Systems research of this kind, released with an open implementation, has direct practical value for large-model labs.

Relationship to Singapore

Sea AI Lab’s work on training systems shows that a homegrown Singapore corporate lab is moving into lower-level model infrastructure, not just applications.

Together with Colossal-AI, it forms the "training systems" line in Singapore’s open-source ecosystem.

Key Milestones

  1. 2023-11
    Zero-Bubble repository created
