AR-Net: Adaptive Frame Resolution for
Efficient Action Recognition
Yue Meng
Chung-Ching Lin
Rameswar Panda
Prasanna Sattigeri
Leonid Karlinsky
Aude Oliva
Kate Saenko
Rogerio Feris
MIT-IBM Watson AI Lab, IBM Research
Boston University
Massachusetts Institute of Technology
ECCV 2020

Abstract

Action recognition is an open and challenging problem in computer vision. While current state-of-the-art models offer excellent recognition results, their computational expense limits their impact in many real-world applications. In this paper, we propose a novel approach, called AR-Net (Adaptive Resolution Network), that selects the optimal resolution for each frame on the fly, conditioned on the input, for efficient action recognition in long untrimmed videos. Specifically, given a video frame, a policy network decides what input resolution should be used for processing by the action recognition model, with the goal of improving both accuracy and efficiency. We train the policy network jointly with the recognition model using standard back-propagation. Extensive experiments on several challenging action recognition benchmark datasets demonstrate the efficacy of our proposed approach over state-of-the-art methods.
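The per-frame decision described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): it assumes a lightweight linear policy head, an illustrative set of candidate resolutions, and a Gumbel-Softmax relaxation as the mechanism that makes the discrete resolution choice trainable with standard back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative set of candidate input resolutions (an assumption, not the
# paper's exact configuration).
RESOLUTIONS = [224, 168, 112, 84]

def policy_logits(frame_feature, W, b):
    """Hypothetical lightweight policy head: a single linear layer mapping
    a per-frame feature vector to one logit per candidate resolution."""
    return frame_feature @ W + b

def gumbel_softmax_sample(logits, tau=1.0):
    """Gumbel-Softmax: a differentiable relaxation of a categorical sample,
    a common way to train discrete policies with back-propagation (assumed
    here as the training mechanism)."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = logits + g
    y = np.exp((y - y.max()) / tau)  # softmax((logits + g) / tau), stabilized
    return y / y.sum()

# Toy per-frame features for a 5-frame clip (stand-ins for real features
# extracted from cheaply downsampled frames).
features = rng.normal(size=(5, 16))
W = rng.normal(size=(16, len(RESOLUTIONS)))
b = np.zeros(len(RESOLUTIONS))

for t, f in enumerate(features):
    probs = gumbel_softmax_sample(policy_logits(f, W, b))
    # At inference time the soft sample is replaced by a hard decision.
    choice = RESOLUTIONS[int(np.argmax(probs))]
    print(f"frame {t}: process at resolution {choice}")
```

Each frame would then be resized to its chosen resolution before being fed to the recognition backbone, so easy frames can be processed cheaply at low resolution.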

Network

Action recognition results on ActivityNet-v1.3 and FCVID

Accuracy versus efficiency comparison
Paper and Code

Yue Meng, Chung-Ching Lin, Rameswar Panda, Prasanna Sattigeri, Leonid Karlinsky, Aude Oliva, Kate Saenko, and Rogerio Feris.
AR-Net: Adaptive Frame Resolution for Efficient Action Recognition.
European Conference on Computer Vision (ECCV), 2020.
[PDF][Code]