Video object detection via space–time feature aggregation and result reuse
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2024-10-01 |
| Series: | IET Image Processing |
| Subjects: | |
| Online Access: | https://doi.org/10.1049/ipr2.13179 |
| Summary: | Abstract: When detecting objects in videos, motion often degrades object appearance, causing blurring, occlusion, and unusual shapes and postures. Consequently, applying an image object detection model to video frames leads to a decline in accuracy. This paper proposes an online video object detection method based on the one‐stage detector YOLOx. First, a space–time feature aggregation module is introduced, which uses the space–time information of past frames to enhance the feature quality of the current frame. Then, a result reuse module is introduced, which incorporates the detection results of past frames to improve the detection stability of the current frame. With these two modules, a trade‐off between accuracy and speed of video object detection can be achieved. Experimental results on ImageNet VID show the improvements in speed and accuracy of the proposed method. |
|---|---|
| ISSN: | 1751-9659, 1751-9667 |
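The summary describes two modules only at a high level, so the following is a minimal NumPy sketch of what such modules could look like, not the paper's actual formulation: `aggregate_features` assumes similarity-weighted aggregation of past-frame feature maps, and `reuse_results` assumes IoU-based carry-over of past detections with decayed confidence. All function names, shapes, and parameters here are illustrative assumptions.

```python
import numpy as np

def aggregate_features(current, past):
    """Illustrative space-time feature aggregation (assumption, not the
    paper's exact module): per-location cosine similarity to the current
    frame gives softmax weights over frames, then a weighted sum."""
    C, H, W = current.shape
    frames = [current] + list(past)
    flat = np.stack([f.reshape(C, H * W) for f in frames])        # (T, C, N)
    norm = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    sim = (norm * norm[0:1]).sum(axis=1)                          # (T, N) cosine sim to current
    w = np.exp(sim - sim.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)                             # softmax over frames
    out = (w[:, None, :] * flat).sum(axis=0)                      # (C, N)
    return out.reshape(C, H, W)

def reuse_results(current_boxes, current_scores, past_boxes, past_scores,
                  decay=0.8, iou_thresh=0.5):
    """Illustrative result reuse (assumption): past detections whose boxes
    overlap no current detection are carried forward with decayed scores."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-8)
    boxes, scores = list(current_boxes), list(current_scores)
    for pb, ps in zip(past_boxes, past_scores):
        if all(iou(pb, cb) < iou_thresh for cb in current_boxes):
            boxes.append(pb)
            scores.append(ps * decay)
    return boxes, scores
```

With no past frames, `aggregate_features` reduces to the identity (the softmax over a single frame is 1 everywhere), which matches the intuition that the module only adds information when temporal context is available.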