
Project information

BAGEL

Open-source unified multimodal model for understanding and generation

Direction: unified multimodal
Form: model repo
Institution: ByteDance Seed (Singapore)
Group: International corporate lab
Category: Unified multimodal model
Status: Active open source
Launch: 2025-04
Language / form: Python
License: Apache-2.0
GitHub stars: 5,886
Last updated: 2026-05-04

BAGEL is a unified multimodal model project from ByteDance Seed, illustrating the link between Singapore-based teams and the global open-source multimodal competition.

Description

BAGEL is an open unified multimodal model that aims to place understanding and generation inside a single model framework. It reflects the field's shift from single-task capabilities toward general vision-language systems.

The public repository provides code and model entry points for researchers and developers.

Relation to AI

Unified multimodal models are one of the main competitive directions in 2025-2026. BAGEL focuses on how one model can handle visual understanding, text, and generation, rather than splitting these tasks across separate models.

It forms an interesting contrast with university-led projects such as Show-o and NExT-GPT: corporate labs tend to emphasize productization and rapid iteration.

Relation to Singapore

ByteDance Seed’s presence in Singapore connects the local ecosystem to Chinese and global model-research networks. Projects such as BAGEL show that Singapore is not only a regional HQ location but can also host frontier-model teams.

Points to track: the Singapore team’s actual role in training, data, evaluation, or product work, and whether these open projects attract local developers.

Key milestones

  1. 2025-04: BAGEL repository created

Resources
