Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.11889/6121
DC Field | Value | Language
dc.contributor.author | Jubran, Mohammad K. | en_US
dc.contributor.author | Alhabib, Abbas | en_US
dc.contributor.author | Chadha, Aaron | en_US
dc.contributor.author | Andreopoulos, Yiannis | en_US
dc.date.accessioned | 2020-01-15T07:08:23Z | -
dc.date.available | 2020-01-15T07:08:23Z | -
dc.date.issued | 2018-10-01 | -
dc.identifier.issn | 1051-8215 | -
dc.identifier.uri | http://hdl.handle.net/20.500.11889/6121 | -
dc.description.abstract | Advanced video classification systems decode video frames to derive the necessary texture and motion representations for ingestion and analysis by spatio-temporal deep convolutional neural networks (CNNs). However, when considering visual Internet-of-Things applications, surveillance systems and semantic crawlers of large video repositories, the video capture and the CNN-based semantic analysis parts do not tend to be co-located. This necessitates the transport of compressed video over networks and incurs significant overhead in bandwidth and energy consumption, thereby significantly undermining the deployment potential of such systems. In this paper, we investigate the trade-off between the encoding bitrate and the achievable accuracy of CNN-based video classification models that directly ingest AVC/H.264 and HEVC encoded videos. Instead of retaining entire compressed video bitstreams and applying complex optical flow calculations prior to CNN processing, we only retain motion vector and select texture information at significantly reduced bitrates and apply no additional processing prior to CNN ingestion. Based on three CNN architectures and two action recognition datasets, we achieve 11%–94% saving in bitrate with marginal effect on classification accuracy. A model-based selection between multiple CNNs increases these savings further, to the point where, if up to 7% loss of accuracy can be tolerated, video classification can take place with as little as 3 kbps for the transport of the required compressed video information to the system implementing the CNN models. | en_US
dc.publisher | IEEE | en_US
dc.relation.ispartof | IEEE Transactions on Circuits and Systems for Video Technology | en_US
dc.subject | Imaging systems - Classification | en_US
dc.subject | Video classification | en_US
dc.subject | Convolutional neural networks | en_US
dc.subject | Streaming technology (Telecommunications) | en_US
dc.subject | Streaming video | en_US
dc.title | Rate-accuracy trade-off in video classification with deep convolutional neural networks | en_US
dc.type | Article | en_US
newfileds.department | Engineering and Technology | en_US
newfileds.item-access-type | bzu | en_US
newfileds.thesis-prog | none | en_US
newfileds.general-subject | none | en_US
dc.identifier.doi | 10.1109/TCSVT.2018.2887408 | -
dc.identifier.scopus | 2-s2.0-85058871158 | -
dc.identifier.url | https://api.elsevier.com/content/abstract/scopus_id/85058871158 | -
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
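The abstract above describes a pipeline that feeds decoder motion vectors, rather than decoded frames or computed optical flow, into a CNN classifier. Below is a minimal sketch of that idea, assuming a PyTorch environment; the network shape, the clip length, and the random stand-in for real motion-vector data are all illustrative assumptions, not the authors' published architecture, and the extraction of motion vectors from an AVC/H.264 or HEVC bitstream is out of scope here.

```python
# Hedged sketch (not the paper's code): classify an action clip directly
# from its decoder motion-vector field. MotionVectorCNN, clip_len, and the
# random stand-in input below are illustrative assumptions.
import torch
import torch.nn as nn


class MotionVectorCNN(nn.Module):
    """Small 2D CNN over a stack of (dx, dy) motion-vector maps."""

    def __init__(self, num_classes: int, clip_len: int = 16):
        super().__init__()
        # The clip's T motion-vector frames are stacked along the channel
        # axis: 2 channels per frame (horizontal/vertical displacement).
        self.features = nn.Sequential(
            nn.Conv2d(2 * clip_len, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, mv: torch.Tensor) -> torch.Tensor:
        # mv: (batch, 2 * clip_len, H, W) motion-vector field
        x = self.features(mv).flatten(1)
        return self.classifier(x)


# Stand-in usage: a random tensor in place of real decoder motion vectors.
model = MotionVectorCNN(num_classes=101)     # e.g. 101 action classes
dummy_mv = torch.randn(1, 2 * 16, 64, 64)    # (batch, channels, H, W)
logits = model(dummy_mv)
print(logits.shape)                          # torch.Size([1, 101])
```

In the paper's setting, a model of this kind would be one of several candidate CNNs, with a model-based selection step choosing among them to trade transport bitrate against classification accuracy.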
Appears in Collections: 6. BZU Dataset Collection
Files in This Item:
File | Description | Size | Format
2018 CSVT.pdf |  | 1.94 MB | Adobe PDF

Page view(s): 192 (checked on Apr 14, 2024)

Download(s): 90 (checked on Apr 14, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.