B-MoCA: Benchmarking Mobile Device Control Agents across Diverse Configurations

KAIST, Seoul National University, Yonsei University

B-MoCA can serve as a testbed for mobile device control agents across diverse device configurations

[Release Notes]

(ver.2.4) We added 39 more daily tasks and removed one, for a total of 138 tasks. Below, we show the success rates of representative agents (GPT-4o and Gemini-1.5-pro) on all 138 tasks and the success rates of all baseline agents (GPT-4o, Gemini-1.5-pro, Llama-3, and custom agents) on six challenging representative tasks. The paper (ver.2) will be updated soon.
(ver.2) We added 30 more daily tasks (total 100 tasks). We updated the observation parsing function (link), which improves the performance of LLM agents by a large margin.
(ver.1) First public release! We presented 70 daily tasks in interactive environments built on real Android emulators, with a feature for randomizing device settings. We conducted an analysis using GPT-4/GPT-4V and Gemini-1.0-pro/Gemini-1.0-pro-V. The paper (ver.1; link) and code (ver.1; link) are available.

"Turn on the airplane mode"

"Create an alarm at 10:30 am"

"Decrease the screen brightness"

"Call 911"

[Example daily tasks performed by the agents]

Abstract

Developing autonomous agents for mobile devices can significantly enhance user interactions by offering increased efficiency and accessibility. However, despite the growing interest in mobile device control agents, the absence of a commonly adopted benchmark makes it challenging to quantify scientific progress in this area. In this work, we introduce B-MoCA: a novel benchmark designed specifically for evaluating mobile device control agents. To create a realistic benchmark, we develop B-MoCA based on the Android operating system and define 60 common daily tasks. Importantly, we incorporate a randomization feature that changes various aspects of mobile devices, including user interface layouts and language settings, to assess generalization performance. We benchmark diverse agents, including agents employing large language models (LLMs) or multi-modal LLMs as well as agents trained from scratch using human expert demonstrations. While these agents demonstrate proficiency in executing straightforward tasks, their poor performance on complex tasks highlights significant opportunities for future research to enhance their effectiveness.


Numerous Daily Tasks


B-MoCA presents 138 daily tasks grounded in realistic situations. The tasks span more than 10 applications, including third-party apps (e.g., Instagram), and require fundamental skills for controlling mobile devices, such as navigating across multiple pages and manipulating various UI elements. The figure below shows the statistics of the tasks: the distribution over app categories and the episode lengths.

[Figure: task statistics (distribution over app categories and episode lengths)]
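
To make the interaction protocol concrete, below is a minimal, self-contained sketch of a gym-style episode loop in Python. Every name here (DummyMobileEnv, the screenshot-plus-UI-tree observation, the normalized tap action) is an illustrative assumption, not B-MoCA's actual API.

import random
from dataclasses import dataclass

@dataclass
class Observation:
    screenshot: bytes  # raw screen pixels from the emulator
    ui_tree: str       # XML dump of the visible UI elements

class DummyMobileEnv:
    """Toy stand-in for an Android-emulator environment."""
    def __init__(self, max_steps: int = 20):
        self.max_steps = max_steps
        self.t = 0

    def reset(self) -> Observation:
        self.t = 0
        return Observation(screenshot=b"", ui_tree="<hierarchy/>")

    def step(self, action: tuple[float, float]):
        self.t += 1
        done = self.t >= self.max_steps  # stop after a fixed step budget
        reward = 1.0 if done and random.random() < 0.5 else 0.0  # toy outcome
        return Observation(b"", "<hierarchy/>"), reward, done, {}

def random_agent(obs: Observation) -> tuple[float, float]:
    # A real agent would parse obs.ui_tree or obs.screenshot; we tap randomly.
    return (random.random(), random.random())  # normalized (x, y) tap

env = DummyMobileEnv()
obs, reward, done = env.reset(), 0.0, False
while not done:
    obs, reward, done, _ = env.step(random_agent(obs))
print("episode success:", reward > 0)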

Diverse Device Environments


In B-MoCA, users can configure environments with diverse device configurations. We generate 45 environments (35 for training and 10 for testing) with varying device types, languages, font sizes, wallpapers, and icon locations.
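
As a rough illustration, the sketch below samples one device configuration per environment, mirroring the randomized factors listed above. The option values and field names are assumptions for illustration, not the exact choices B-MoCA uses.

import random
from dataclasses import dataclass

# Assumed example options; B-MoCA's actual pools may differ.
DEVICE_TYPES = ["pixel_3", "pixel_6", "wxga_tablet"]
LANGUAGES    = ["en", "ko", "es"]
FONT_SIZES   = ["small", "default", "large"]
WALLPAPERS   = ["wallpaper_01.png", "wallpaper_02.png"]

@dataclass
class DeviceConfig:
    device_type: str
    language: str
    font_size: str
    wallpaper: str
    icon_seed: int  # seed controlling home-screen icon placement

def sample_config(rng: random.Random) -> DeviceConfig:
    return DeviceConfig(
        device_type=rng.choice(DEVICE_TYPES),
        language=rng.choice(LANGUAGES),
        font_size=rng.choice(FONT_SIZES),
        wallpaper=rng.choice(WALLPAPERS),
        icon_seed=rng.randrange(2**31),
    )

rng = random.Random(0)                                # fixed seed for reproducibility
train_envs = [sample_config(rng) for _ in range(35)]  # 35 training configurations
test_envs  = [sample_config(rng) for _ in range(10)]  # 10 held-out test configurations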

Experimental Results


First, we show the proficiency of agents using state-of-the-art closed-source LLMs (GPT-4o and Gemini-1.5-pro, zero-shot) on the 138 tasks. The agents are evaluated in three test environments we configure: a vanilla environment with all target applications on the home screen ("Plain"), an environment with the home-screen icon locations and font size randomized ("Random 1"), and an environment with the home-screen icon locations, font size, wallpaper, and language randomized ("Random 2"). The differing success rates across environments reveal the unique challenge of B-MoCA in assessing the generalization ability of mobile device control agents.
[Figure: success rates of GPT-4o and Gemini-1.5-pro on the 138 tasks in the Plain, Random 1, and Random 2 environments]
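
For reference, the reported number per environment is a plain success rate (successful episodes divided by attempted episodes). The sketch below shows that aggregation over toy placeholder outcomes; the values are not actual benchmark results.

from collections import defaultdict

# (environment, task, success) triples as an evaluation run might log them.
# These outcomes are placeholders for illustration only.
episodes = [
    ("Plain",    "turn on airplane mode", True),
    ("Plain",    "call 911",              True),
    ("Random 1", "turn on airplane mode", True),
    ("Random 1", "call 911",              False),
    ("Random 2", "turn on airplane mode", False),
    ("Random 2", "call 911",              False),
]

totals, wins = defaultdict(int), defaultdict(int)
for env_name, _task, success in episodes:
    totals[env_name] += 1
    wins[env_name]   += int(success)

for env_name in ("Plain", "Random 1", "Random 2"):
    rate = 100.0 * wins[env_name] / totals[env_name]
    print(f"{env_name}: {rate:.1f}% success")
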
Moreover, we benchmark state-of-the-art closed-source and open-source LLMs (GPT-4o, Gemini-1.5-pro, and Llama-3, few-shot) and MLLMs (GPT-4o and Gemini-1.5-pro, few-shot), as well as custom agents trained from scratch, on six challenging representative tasks. The agents are evaluated in all 10 test environments we configure. The limited proficiency of these agents in complex scenarios calls for future work. For more details, please refer to the paper (which will be updated soon).
[Figure: success rates of all baseline agents on six challenging representative tasks across the 10 test environments]
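
To illustrate how an (M)LLM baseline makes a single decision, the sketch below renders parsed UI elements into a text prompt, queries a model, and parses the reply into an executable action. call_llm is a hypothetical stand-in for any chat-completion client, and the prompt and action grammar are illustrative, not those used in the paper.

import re

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real client call
    # (e.g., to GPT-4o or Gemini-1.5-pro).
    return "tap(3)"  # canned reply so the sketch runs offline

def build_prompt(instruction: str, ui_elements: list[str]) -> str:
    numbered = "\n".join(f"[{i}] {e}" for i, e in enumerate(ui_elements))
    return (
        f"Goal: {instruction}\n"
        f"Visible UI elements:\n{numbered}\n"
        'Reply with exactly one action, e.g. tap(<index>) or type("text").'
    )

def parse_action(reply: str):
    m = re.fullmatch(r"tap\((\d+)\)", reply.strip())
    if m:
        return ("tap", int(m.group(1)))
    raise ValueError(f"unparseable action: {reply!r}")

ui = ["Button: Wi-Fi", "Button: Bluetooth", "Label: Network & internet",
      "Switch: Airplane mode (off)"]
reply = call_llm(build_prompt("Turn on the airplane mode", ui))
print(parse_action(reply))  # -> ('tap', 3), i.e., toggle the airplane-mode switch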

BibTeX

@inproceedings{lee2024benchmarking,
      title={Benchmarking Mobile Device Control Agents Across Diverse Configurations}, 
      author={Juyong Lee and Taywon Min and Minyong An and Dongyoon Hahm and Haeone Lee and Changyeon Kim and Kimin Lee},
      year={2024},
      booktitle={ICLR 2024 Workshop on Generative Models for Decision Making},
}