INTRODUCTION

We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.


HIGHLIGHTS


CITATION

If you find this dataset useful, please cite the following paper:

Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection
M. Barekatain, M. Martí, H. Shih, S. Murray, K. Nakayama, Y. Matsuo, and H. Prendinger
arXiv:1706.03038, 2017


ANNOTATION EXAMPLES (downscaled to 720p)


ACTION CATEGORIES



DATASET DOWNLOAD

Video Names: each video name consists of 3 integers separated by dots. The definitions of these integers, from left to right, are:

  1. Drone number. Each scenario, with the exception of one, was captured simultaneously by 2 drones with different configurations.
  2. Part of the day. “1” indicates morning and “2” indicates noon.
  3. Scenario number.

Hence, a pair of videos sharing the same last two integers shows the same scenario captured by drones with different configurations.
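For illustration only, the minimal sketch below decodes such a video name into its three fields. It is not part of the official toolkit, and the handling of a possible file extension (e.g. ".mp4") is an assumption.

```python
# Minimal sketch (not part of the official toolkit) for decoding an
# Okutama-Action video name such as "2.1.7" into its components.
# Stripping a trailing ".mp4" extension is an assumption.

def parse_video_name(name: str) -> dict:
    stem = name[:-len(".mp4")] if name.endswith(".mp4") else name
    drone, day_part, scenario = (int(x) for x in stem.split("."))
    return {
        "drone": drone,                                    # 1 or 2
        "time_of_day": "morning" if day_part == 1 else "noon",
        "scenario": scenario,
    }

# Example: parse_video_name("2.1.7")
# -> {'drone': 2, 'time_of_day': 'morning', 'scenario': 7}
```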

Labels: each line contains 10 or more columns, separated by spaces. The definitions of these columns are:

  1. Track ID. All rows with the same ID belong to the same person for 180 frames; the person then gets a new ID for the next 180 frames. We will soon release an update to make the IDs consistent across each video.
  2. xmin. The top left x-coordinate of the bounding box.
  3. ymin. The top left y-coordinate of the bounding box.
  4. xmax. The bottom right x-coordinate of the bounding box.
  5. ymax. The bottom right y-coordinate of the bounding box.
  6. frame. The frame that this annotation represents.
  7. lost. If 1, the annotation is outside of the view screen.
  8. occluded. If 1, the annotation is occluded.
  9. generated. If 1, the annotation was automatically interpolated.
  10. label. The label for this annotation, enclosed in quotation marks. This field is always “Person”.
  11. (+) actions. Each column after this is an action.

There are two label files for each video: one for single-action detection and one for multi-action detection. Note that the labels for single-action detection have been created from the multi-action detection labels (for more details, please refer to our publication). For the pedestrian detection task, the columns describing the actions should be ignored.
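As a rough illustration of the column layout above, the following sketch reads a label file into Python dictionaries. It is not an official loader; it assumes straight quotation marks around the label and action fields and that no field contains spaces, and the file path is a placeholder.

```python
# Minimal sketch of a label-file reader for the column layout described
# above; not an official loader.

def parse_label_line(line: str) -> dict:
    cols = line.split()
    return {
        "track_id": int(cols[0]),
        "bbox": tuple(int(v) for v in cols[1:5]),  # (xmin, ymin, xmax, ymax)
        "frame": int(cols[5]),
        "lost": cols[6] == "1",
        "occluded": cols[7] == "1",
        "generated": cols[8] == "1",
        "label": cols[9].strip('"'),               # always "Person"
        "actions": [c.strip('"') for c in cols[10:]],
    }

def load_labels(path: str, pedestrian_only: bool = False):
    """Yield annotations; for pedestrian detection, drop the action columns."""
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            ann = parse_label_line(line)
            if pedestrian_only:
                ann["actions"] = []
            yield ann
```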

Training set (videos & labels) link.


MODELS DOWNLOAD

Final trained Caffe models link.


UPDATES


DEVELOPERS TEAM

The creation of this dataset was supported by Prendinger Lab at the National Institute of Informatics, Tokyo, Japan. We are also grateful to Matsuo Lab at the University of Tokyo for financial support and the provision of GPU power.

Mohammadamin Barekatain, Technical University of Munich

Miquel Marti, KTH Royal Institute of Technology

Hsueh-Fu Shih, National Taiwan University

Samuel Murray, KTH Royal Institute of Technology

Helmut Prendinger, National Institute of Informatics

LICENSE

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Contact: m.barekatain at tum dot de
Last update: 31/07/2017