For many years, multi-object tracking benchmarks have focused on a handful of
categories. Motivated primarily by surveillance and self-driving applications,
these datasets provide tracks for people, vehicles, and animals, ignoring the
vast majority of objects in the world. By contrast, in the related field of
object detection, the introduction of large-scale, diverse datasets (e.g.,
COCO) has fostered significant progress in developing highly robust solutions.
To bridge this gap, we introduce a similarly diverse dataset for Tracking Any
Object (TAO). It consists of 2,907 high-resolution videos, half a minute long
on average, captured in diverse environments. Importantly, we adopt a
bottom-up approach for discovering a large vocabulary of 833 categories, an
order of magnitude more than prior tracking benchmarks. To this end, we ask
annotators to label objects that move at any point in the video, and to name
them post factum. Our vocabulary is both significantly larger than and
qualitatively different from those of existing tracking datasets. To ensure scalability
of annotation, we employ a federated approach that focuses manual effort on
labeling tracks only for the objects relevant to each video (e.g., those that move).
We perform an extensive evaluation of state-of-the-art trackers and make a
number of important discoveries regarding large-vocabulary tracking in an
open world. In particular, we show that existing single- and multi-object
trackers struggle when applied to this scenario in the wild, and that
detection-based, multi-object trackers are in fact competitive with
user-initialized ones. We hope that our dataset and analysis will spur further
progress in the tracking community.