AMEX: Android Multi-annotation Expo Dataset for Mobile GUI Agents

Yuxiang Chai*1, Siyuan Huang*2,3, Yazhe Niu1,3, Han Xiao1, Liang Liu4,
Dingyu Zhang1, Peng Gao3, Shuai Ren4, Hongsheng Li✉1

1 MMLab, CUHK    2 SJTU
3 Shanghai AI Lab    4 vivo AI Lab

* Indicates Equal Contribution
✉ Corresponding Author

Abstract

AI agents have drawn increasing attention for their ability to perceive environments, understand tasks, and autonomously achieve goals. To advance research on AI agents in mobile scenarios, we introduce the Android Multi-annotation EXpo (AMEX), a comprehensive, large-scale dataset designed for generalist mobile GUI-control agents. Agents are trained and evaluated on the proposed dataset for their capability to complete complex tasks by directly interacting with the graphical user interface (GUI) on mobile devices. AMEX comprises over 104K high-resolution screenshots from 110 popular mobile applications, annotated at multiple levels. Unlike existing mobile device-control datasets, e.g., MoTIF and AITW, AMEX includes three levels of annotations: GUI interactive element grounding, GUI screen and element functionality descriptions, and complex natural language instructions averaging 13 steps each, paired with stepwise GUI-action chains. We develop this dataset from a more instructive and detailed perspective, complementing the general settings of existing datasets. Additionally, we develop a baseline model, SPHINX Agent, and compare its performance with that of state-of-the-art agents trained on other datasets. To facilitate further research, we open-source our dataset, models, and relevant evaluation tools.