
Community Project Profile

Zero-Bubble Pipeline Parallelism

Pipeline-parallelism technique for improving training efficiency

GitHub stars: 452
Direction: pipeline parallelism
Goal: reduce pipeline bubbles
Organisation: Sea AI Lab (SAIL)
Group: International corporate lab
Category: Training-efficiency optimization
Status: Research open source
Started: 2023-11
Language / Form: Python
License: Not specified
Updated: 2026-05-04

Zero-Bubble Pipeline Parallelism is Sea AI Lab’s systems work on large-model training efficiency, aiming to reduce idle time in pipeline parallelism.

What It Is

Pipeline parallelism splits a model into sequential stages, one per device, and streams micro-batches through them. Because each stage must wait for activations from the previous stage (and gradients from the next), devices spend part of every iteration idle; these idle slots are called "bubbles."

Zero-Bubble rearranges the schedule, in particular the backward pass, by splitting backward computation into an input-gradient part and a weight-gradient part that can be placed independently. The weight-gradient work can then be deferred to fill bubbles, so devices idle less and large-model training throughput rises.
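To make the idea concrete, here is a minimal sketch of the idealized cost model used in the zero-bubble line of work: each microbatch incurs per-stage costs for the forward pass (F), the input-gradient backward (B), and the weight-gradient backward (W). The function names and the schedule label "ZB-H1" below are illustrative, not from the repository itself.

```python
# Idealized per-device idle-time (bubble) model for a p-stage pipeline.
# Backward is split into an input-gradient part (t_b) and a
# weight-gradient part (t_w); t_f is the forward cost per microbatch.

def bubble_1f1b(p, t_f, t_b, t_w):
    """Bubble under classic 1F1B, where B and W run back-to-back."""
    return (p - 1) * (t_f + t_b + t_w)

def bubble_zb_h1(p, t_f, t_b, t_w):
    """Bubble when W is deferred to fill idle slots (ZB-H1-style)."""
    return (p - 1) * (t_f + t_b - t_w)

if __name__ == "__main__":
    p = 4  # pipeline stages
    # With equal F/B/W costs, deferring W cuts the bubble to one third.
    print(bubble_1f1b(p, 1, 1, 1))   # 9
    print(bubble_zb_h1(p, 1, 1, 1))  # 3
```

With equal F, B, and W costs this model predicts a two-thirds reduction in idle time from deferring the weight gradients alone; more aggressive schedules in this research direction aim to drive the bubble to zero.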

AI Relevance

Training efficiency is a hidden battleground in foundation-model competition. Less idle time means the same compute can train more tokens, larger models, or more iterations.

This kind of systems paper and open implementation has practical value for large-model labs.

Singapore Relevance

Sea AI Lab’s work on training systems shows that a homegrown Singapore corporate lab is moving into lower-level model infrastructure, not only applications.

Together with Colossal-AI, it forms the "training systems" line in Singapore’s open-source ecosystem.

Milestones

  1. 2023-11: Zero-Bubble repository created
