Project Information
Zero-Bubble Pipeline Parallelism
Pipeline-parallelism technique for improving training efficiency
- Institution
- Sea AI Lab (SAIL)
- Group
- International corporate lab
- Category
- Training-efficiency optimization
- Status
- Research open source
- Launch
- 2023-11
- Language / Form
- Python
- License
- Not specified
- GitHub Stars
- 452
- Info Updated
- 2026-05-04
Zero-Bubble Pipeline Parallelism is Sea AI Lab’s systems work on large-model training efficiency, aiming to reduce idle time in pipeline parallelism.
Description
Pipeline parallelism splits a model into stages and processes micro-batches across devices. The problem is that devices often wait for each other, creating "bubbles."
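How large these bubbles are can be estimated with the standard idle-time formula for a synchronous 1F1B-style schedule; the function below is an illustrative calculation (not code from this repository), using the common estimate that with p stages and m micro-batches the bubble fraction is roughly (p - 1) / (m + p - 1):

```python
def bubble_fraction(p: int, m: int) -> float:
    """Approximate idle fraction of a synchronous 1F1B pipeline schedule.

    p: number of pipeline stages (devices)
    m: number of micro-batches per step
    Warm-up and drain phases leave each device idle for about p - 1
    micro-batch slots out of m + p - 1 total.
    """
    return (p - 1) / (m + p - 1)

print(bubble_fraction(4, 8))   # 4 stages, 8 micro-batches: ~27% idle
print(bubble_fraction(4, 32))  # more micro-batches shrink the bubble
```

Raising the micro-batch count reduces but never eliminates the bubble, which is the gap zero-bubble scheduling targets.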
Zero-Bubble improves the schedule by splitting the backward pass into its input-gradient and weight-gradient parts and rearranging them, so devices spend less time idle and large-model training throughput rises.
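The key observation behind this split can be sketched for a single linear stage. The snippet below is a minimal NumPy illustration (not the repository's implementation): for y = x @ W, the input gradient is on the critical path because the upstream stage waits on it, while the weight gradient depends on no other stage and can be deferred to fill bubbles.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3))   # activations entering this stage
W = rng.standard_normal((3, 4))   # stage weights
gy = rng.standard_normal((2, 4))  # gradient arriving from the downstream stage

def backward_B(gy, W):
    # B pass: gradient w.r.t. the input; the previous pipeline stage
    # is blocked until this is sent back, so it runs eagerly.
    return gy @ W.T

def backward_W(x, gy):
    # W pass: gradient w.r.t. the weight; no other stage waits on it,
    # so the scheduler is free to run it whenever the device would idle.
    return x.T @ gy

gx = backward_B(gy, W)
gW = backward_W(x, gy)
assert gx.shape == x.shape and gW.shape == W.shape
```

Conventional frameworks fuse both computations into one backward call; decoupling them is what gives the zero-bubble scheduler the extra flexible work units it needs.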
Relation to AI
Training efficiency is a hidden battleground in foundation-model competition. Less idle time means the same compute can train more tokens, larger models, or more iterations.
This kind of systems paper and open implementation has practical value for large-model labs.
Relation to Singapore
Sea AI Lab’s work on training systems shows that a homegrown Singapore corporate lab is moving into lower-level model infrastructure, not just applications.
Together with Colossal-AI, it forms the "training systems" line in Singapore’s open-source ecosystem.
Key Milestones
- 2023-11: Zero-Bubble repository created