Community Project Profile
ShowUI
A vision-language-action model for GUI agents and computer use
- Organisation: NUS Show Lab
- Group: University / research
- Category: GUI agent model
- Status: Active research line
- Started: 2024-10
- Language / Form: Python / Models
- License: Apache-2.0
- GitHub Stars: 1,822
- Updated: 2026-05-04
ShowUI is an open vision-language-action model for GUI agents: it understands interfaces from screenshots and outputs clickable coordinates or actions.
What It Is
ShowUI focuses on the software interfaces people actually use every day: webpages, app windows, buttons, input boxes, and menus. The model locates action targets directly in the visual interface, serving computer-use and GUI-automation scenarios.
This differs from pure text agents: many real applications lack clean APIs, complete DOMs, or accessibility trees, so ShowUI instead grounds actions directly in the screen.
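As a sketch of how a downstream controller might consume such a model's output: assuming the model replies with a textual action dict carrying normalized `[0, 1]` screen coordinates (the exact field names and format here are illustrative assumptions, not the documented ShowUI interface), the coordinates can be parsed and mapped to pixels before dispatching a click:

```python
import ast
from dataclasses import dataclass


@dataclass
class ClickAction:
    """A click target in absolute pixel coordinates."""
    x_px: int
    y_px: int


def parse_click(raw: str, screen_w: int, screen_h: int) -> ClickAction:
    """Parse an assumed model reply such as
    "{'action': 'CLICK', 'value': None, 'position': [0.49, 0.42]}"
    where 'position' holds normalized [0, 1] coordinates,
    and convert it to pixel coordinates on the given screen."""
    out = ast.literal_eval(raw)  # safe eval of the literal dict string
    if out.get("action") != "CLICK":
        raise ValueError(f"unsupported action: {out.get('action')!r}")
    nx, ny = out["position"]
    return ClickAction(x_px=round(nx * screen_w), y_px=round(ny * screen_h))


# Example: a 1920x1080 screenshot with a target near the centre.
reply = "{'action': 'CLICK', 'value': None, 'position': [0.49, 0.42]}"
action = parse_click(reply, 1920, 1080)
print(action)  # ClickAction(x_px=941, y_px=454)
```

A real agent loop would feed the resulting pixel coordinates to an OS automation layer and then re-screenshot to observe the state change.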
AI Relevance
One bottleneck for agent deployment is interface operation. Having a model write a plan is not the hard part; clicking the right place, understanding state changes, and recovering from failure inside complex software are the harder parts.
ShowUI turns GUI visual understanding into a model task, making it a key path from chat agents toward real computer operation.
Singapore Relevance
ShowUI matters for Singapore because it feeds directly into enterprise automation and agent tooling. Much AI deployment in Singapore happens inside finance, government, healthcare, and logistics systems, where many workflows still pass through legacy interfaces.
If GUI agents become a general capability, work such as ShowUI becomes a base module connecting models to real software workflows.
Milestones
- 2024-10: ShowUI repository created
- 2025-02: ShowUI accepted to CVPR 2025