Show simple item record

dc.contributor.author: Abbas, Yawar
dc.contributor.author: Al Mudawi, Naif
dc.contributor.author: Alabdullah, Bayan
dc.contributor.author: Sadiq, Touseef
dc.contributor.author: Algarni, Asaad
dc.contributor.author: Rahman, Hameedur
dc.contributor.author: Jalal, Ahmad
dc.date.accessioned: 2025-03-31T08:47:16Z
dc.date.available: 2025-03-31T08:47:16Z
dc.date.created: 2025-01-14T10:45:30Z
dc.date.issued: 2024
dc.identifier.citation: Abbas, Y., Al Mudawi, N., Alabdullah, B., Sadiq, T., Algarni, A., Rahman, H., & Jalal, A. (2024). Unmanned aerial vehicles for human detection and recognition using neural-network model. Frontiers in Neurorobotics, 18, 1443678. [en_US]
dc.identifier.issn: 1662-5218
dc.identifier.uri: https://hdl.handle.net/11250/3185667
dc.description.abstract: Introduction: Recognizing human actions is crucial for allowing machines to understand and interpret human behavior, with applications spanning video-based surveillance systems, human-robot collaboration, sports analysis systems, and entertainment. The immense diversity in human movement and appearance poses a significant challenge in this field, especially when dealing with drone-recorded (RGB) videos. Factors such as dynamic backgrounds, motion blur, occlusions, varying video capture angles, and exposure issues greatly complicate recognition tasks. Methods: In this study, we propose a method that addresses these challenges in RGB videos captured by drones. Our approach begins by segmenting the video into individual frames, followed by preprocessing steps applied to these RGB frames. The preprocessing aims to reduce computational costs, optimize image quality, and enhance foreground objects while removing the background. Results: This yields improved visibility of foreground objects while eliminating background noise. Next, we employ the YOLOv9 detection algorithm to identify human bodies within the images. From the grayscale silhouette, we extract the human skeleton and identify 15 important locations: the head, neck, belly button, and the left and right shoulders, elbows, wrists, hips, knees, and ankles. Using all these points, we extract specific positions and the angular and distance relationships between them, as well as 3D point clouds and fiducial points. Subsequently, we optimize this data using the kernel discriminant analysis (KDA) optimizer, followed by classification using a convolutional neural network (CNN). To validate our system, we conducted experiments on three benchmark datasets: UAV-Human, UCF, and Drone-Action. Discussion: On these datasets, our proposed model achieved action recognition accuracies of 0.68, 0.75, and 0.83, respectively. (Illustrative sketches of the detection and feature-extraction stages follow this record.) [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Frontiers Media S.A. [en_US]
dc.rights: Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Unmanned aerial vehicles for human detection and recognition using neural-network model [en_US]
dc.type: Journal article [en_US]
dc.type: Peer reviewed [en_US]
dc.description.version: publishedVersion [en_US]
dc.rights.holder: © 2024 The Author(s) [en_US]
dc.subject.nsi: VDP::Technology: 500::Information and communication technology: 550 [en_US]
dc.source.volume: 18 [en_US]
dc.source.journal: Frontiers in Neurorobotics [en_US]
dc.identifier.doi: https://doi.org/10.3389/fnbot.2024.1443678
dc.identifier.cristin: 2340571
dc.source.articlenumber: 1443678 [en_US]
cristin.qualitycode: 1
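
The abstract above outlines a multi-stage pipeline: frame extraction, preprocessing, YOLOv9 person detection, skeleton-keypoint features, KDA, and CNN classification. As a rough, non-authoritative illustration of the detection stage only, the sketch below splits a video into frames with OpenCV and runs a pretrained YOLOv9 checkpoint from the Ultralytics package on each frame. The checkpoint name, confidence threshold, and input path are placeholders, not details taken from the paper.

```python
import cv2
from ultralytics import YOLO  # Ultralytics ships pretrained YOLOv9 checkpoints

# Placeholder checkpoint; the abstract does not name a specific YOLOv9 variant.
model = YOLO("yolov9c.pt")

cap = cv2.VideoCapture("drone_clip.mp4")  # hypothetical input video
person_boxes = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Class 0 is "person" in the COCO label set these pretrained weights use.
    results = model.predict(frame, classes=[0], conf=0.25, verbose=False)
    person_boxes.append(results[0].boxes.xyxy.cpu().numpy())
cap.release()
```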
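
The feature step, per the abstract, derives angular and distance relationships between 15 skeleton keypoints. Below is a minimal sketch of such features; the keypoint ordering, the joint triplet chosen for the angle, and all function names are assumptions made for illustration, not the authors' definitions.

```python
import numpy as np

# Hypothetical ordering of the 15 locations named in the abstract.
KEYPOINTS = [
    "head", "neck", "belly_button",
    "l_shoulder", "r_shoulder", "l_elbow", "r_elbow", "l_wrist", "r_wrist",
    "l_hip", "r_hip", "l_knee", "r_knee", "l_ankle", "r_ankle",
]

def pairwise_distances(pts):
    """Euclidean distance between every pair of keypoints; pts has shape (15, 2)."""
    diff = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def joint_angle(a, b, c):
    """Angle (radians) at vertex b, formed by the segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Demo on random coordinates: left elbow angle from shoulder-elbow-wrist.
pts = np.random.rand(15, 2)
idx = {name: k for k, name in enumerate(KEYPOINTS)}
dists = pairwise_distances(pts)
elbow = joint_angle(pts[idx["l_shoulder"]], pts[idx["l_elbow"]], pts[idx["l_wrist"]])
features = np.concatenate([dists[np.triu_indices(15, k=1)], [elbow]])
```

In practice such distances would typically be normalized (e.g., by torso length) before the KDA step, but the paper's exact normalization is not specified here.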


