B-MoCA: Benchmarking Mobile Device Control Agents across Diverse Configurations

1KAIST, 2Seoul National University, 3Yonsei University

B-MoCA can serve as a testbed for mobile device control agents across diverse device configurations

[Release Notes]

(ver.2.5) The paper (ver.2) is updated. We removed 7 tasks (131 tasks total) due to their high stochasticity during evaluation. We report the success rates of representative agents (GPT-4o and Gemini-1.5-pro) on all 131 tasks and the success rates of all baseline agents (including custom agents) on six challenging representative tasks.
(ver.2.4) We added 39 more daily tasks and removed one (138 tasks total).
(ver.2) We added 30 more daily tasks (100 tasks total). We updated the observation parsing function (link), which improves the performance of LLM agents by a large margin.
(ver.1) First public release! We presented 70 daily tasks in interactive environments built on real Android emulators, with a randomization feature over device settings. We conducted an analysis using GPT-4/GPT-4V and Gemini-1.0-pro/Gemini-1.0-pro-V. The paper (ver.1; link) and code (ver.1; link) are available.

"Turn on the airplane mode"

"Create an alarm at 10:30 am"

"Decrease the screen brightness"

"Call 911"

[Exemplary daily tasks performed by the agents]

Abstract

Mobile device control agents can largely enhance user interactions and productivity by automating daily tasks. However, despite growing interest in developing practical agents, the absence of a commonly adopted benchmark in this area makes it challenging to quantify scientific progress. In this work, we introduce B-MoCA: a novel benchmark with interactive environments for evaluating and developing mobile device control agents. To create a realistic benchmark, we develop B-MoCA based on the Android operating system and define 131 common daily tasks. Importantly, we incorporate a randomization feature that changes the configurations of mobile devices, including user interface layouts and language settings, to assess generalization performance. We benchmark diverse agents, including agents employing large language models (LLMs) or multi-modal LLMs as well as agents trained with imitation learning using human expert demonstrations. While these agents demonstrate proficiency in executing straightforward tasks, their poor performance on complex tasks highlights significant opportunities for future research to improve effectiveness.


Numerous Daily Tasks


B-MoCA presents 131 daily tasks grounded in realistic situations. The tasks span 10+ applications, including third-party apps (e.g., Instagram). They require fundamental skills for controlling mobile devices, such as navigating across multiple pages and manipulating various UI elements. The figure below shows task statistics: the distribution over app categories and the episode lengths.

[Figure: task statistics (distribution over app categories and episode lengths)]
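
As a concrete illustration of the low-level controls such tasks rely on, the sketch below issues tap, swipe, and text-entry gestures to a running Android emulator through `adb shell input`. The `AndroidController` class, the device serial, and the example coordinates are illustrative assumptions, not part of the B-MoCA API.

```python
import subprocess


class AndroidController:
    """Minimal wrapper around `adb shell input` for basic UI manipulation (illustrative only)."""

    def __init__(self, serial: str = "emulator-5554"):
        self.serial = serial

    def _adb(self, *args: str) -> None:
        # Forward a shell command to the target emulator.
        subprocess.run(["adb", "-s", self.serial, "shell", *args], check=True)

    def tap(self, x: int, y: int) -> None:
        # Tap a UI element at pixel coordinates (x, y).
        self._adb("input", "tap", str(x), str(y))

    def swipe(self, x1: int, y1: int, x2: int, y2: int, duration_ms: int = 300) -> None:
        # Swipe between two points, e.g., to scroll or move between pages.
        self._adb("input", "swipe", str(x1), str(y1), str(x2), str(y2), str(duration_ms))

    def type_text(self, text: str) -> None:
        # Type text into the currently focused field (adb requires spaces to be escaped as %s).
        self._adb("input", "text", text.replace(" ", "%s"))


if __name__ == "__main__":
    ctrl = AndroidController()
    ctrl.swipe(540, 1600, 540, 400)  # scroll down, e.g., through the app drawer
    ctrl.tap(270, 850)               # tap an icon at an example location
```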

Diverse Device Environments


In B-MoCA, users can configure environments with diverse device configurations. We generate 45 environments (35 for training and 10 for testing) with varying device types, languages, font sizes, wallpapers, and icon locations.
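
Below is a minimal sketch of how a randomized device configuration might be sampled and applied to an emulator via adb. The configuration fields mirror the factors listed above, but the helper names and the specific adb settings shown are assumptions for illustration, not the benchmark's actual implementation.

```python
import random
import subprocess
from dataclasses import dataclass


@dataclass
class DeviceConfig:
    # Factors randomized across B-MoCA environments (values here are illustrative).
    font_scale: float  # e.g., 0.85 (small) to 1.30 (large)
    density_dpi: int   # screen density, emulating different device types
    locale: str        # language setting, e.g., "en-US" or "ko-KR"


def sample_config(rng: random.Random) -> DeviceConfig:
    return DeviceConfig(
        font_scale=rng.choice([0.85, 1.0, 1.15, 1.30]),
        density_dpi=rng.choice([320, 420, 480, 560]),
        locale=rng.choice(["en-US", "ko-KR", "es-ES"]),
    )


def apply_config(cfg: DeviceConfig, serial: str = "emulator-5554") -> None:
    adb = ["adb", "-s", serial, "shell"]
    # Font scale and density can be set directly through adb.
    subprocess.run(adb + ["settings", "put", "system", "font_scale", str(cfg.font_scale)], check=True)
    subprocess.run(adb + ["wm", "density", str(cfg.density_dpi)], check=True)
    # Wallpaper, icon locations, and locale typically require extra setup
    # (e.g., device snapshots or a helper app); omitted here.
    print(f"Locale {cfg.locale} requested; apply via a device snapshot or helper app.")


if __name__ == "__main__":
    rng = random.Random(101)  # one seed per environment, e.g., Test Env 101
    apply_config(sample_config(rng))
```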

Experimental Results


First, we show the proficiency of agents using state-of-the-art closed-source LLMs (GPT-4o and Gemini-1.5-pro, zero-shot) on the 131 tasks. The agents are evaluated on three test environments we configure: a vanilla environment with all target applications on the home screen (Test Env 100), an environment with the home-screen icon locations and font size randomized (Test Env 101), and an environment with the icon locations, font size, wallpaper, and language randomized (Test Env 105). The different success rates across environments reveal the unique challenge B-MoCA poses in assessing the generalization ability of mobile device control agents.
[Figure: success rates of GPT-4o and Gemini-1.5-pro agents on the 131 tasks across the three test environments]
Moreover, we benchmark state-of-the-art closed-source and open-source LLMs (GPT-4o, Gemini-1.5-pro, and Llama-3, few-shot) and MLLMs (GPT-4o and Gemini-1.5-pro, few-shot), as well as custom agents using fine-tuned LLMs (Llama-3) and a VLM encoder, on six challenging representative tasks. The agents are evaluated on all 10 test environments we configure. The limited proficiency of these agents in complex scenarios calls for future work.
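The evaluation protocol above can be summarized by a simple loop: each agent is rolled out in every test environment for a fixed number of episodes per task, and success rates are aggregated per environment. The `make_env`/`reset`/`step` interface and the parameter names below are illustrative assumptions rather than the exact B-MoCA API.

```python
from collections import defaultdict


def evaluate(agent, make_env, env_ids, tasks, episodes_per_task=3, max_steps=20):
    """Roll out an agent across test environments and report per-environment success rates."""
    results = defaultdict(list)
    for env_id in env_ids:                 # e.g., Test Env 100 through 109
        env = make_env(env_id)             # hypothetical factory for a configured emulator
        for task in tasks:                 # e.g., "Turn on the airplane mode"
            for _ in range(episodes_per_task):
                obs = env.reset(task)
                success = False
                for _ in range(max_steps):
                    action = agent.act(obs, task)       # LLM/MLLM prompt or learned policy
                    obs, success, done = env.step(action)
                    if done:
                        break
                results[env_id].append(float(success))
    return {env_id: sum(v) / len(v) for env_id, v in results.items()}
```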
[Figure: success rates of all baseline agents on the six challenging representative tasks across the 10 test environments]

BibTeX

@article{lee2024benchmarking,
  title={Benchmarking Mobile Device Control Agents Across Diverse Configurations}, 
  author={Lee, Juyong and Min, Taywon and An, Minyong and Hahm, Dongyoon and Lee, Haeone and Kim, Changyeon and Lee, Kimin},
  journal={arXiv preprint arXiv:2404.16660},
  year={2024},
}