dc.contributor.author: Dhilleswararao, Pudi
dc.contributor.author: Boppu, Srinivas
dc.contributor.author: Manikandan, M. Sabarimalai
dc.contributor.author: Cenkeramaddi, Linga Reddy
dc.date.accessioned: 2023-01-03T13:45:51Z
dc.date.available: 2023-01-03T13:45:51Z
dc.date.created: 2022-12-16T09:22:00Z
dc.date.issued: 2022
dc.identifier.citation: Dhilleswararao, P., Boppu, S., Manikandan, M. S. & Cenkeramaddi, L. R. (2022). Efficient Hardware Architectures for Accelerating Deep Neural Networks: Survey. IEEE Access, 10, 131788-131828.
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://hdl.handle.net/11250/3040701
dc.description.abstract: In the modern era of technology, a paradigm shift has been witnessed in areas involving applications of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). In particular, Deep Neural Networks (DNNs) have emerged as a popular field of interest in most AI applications, such as computer vision, image and video processing, and robotics. Given the maturity of digital technologies and the availability of authentic data and data-handling infrastructure, DNNs have become a credible choice for solving complex real-life problems; in certain situations, the performance and accuracy of a DNN even surpass human intelligence. However, DNNs are computationally demanding in terms of both resources and time, and general-purpose architectures such as CPUs struggle to handle such computationally intensive algorithms. Therefore, the research community has invested considerable effort in specialized hardware architectures, such as the Graphics Processing Unit (GPU), Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), and Coarse-Grained Reconfigurable Array (CGRA), for the effective implementation of such algorithms. This paper surveys research on the development and deployment of DNNs using these specialized hardware architectures and embedded AI accelerators. The review describes in detail the specialized hardware-based accelerators used for DNN training and/or inference, and compares the accelerators discussed on factors such as power, area, and throughput. Finally, future research and development directions, including trends in implementing DNNs on specialized hardware accelerators, are discussed. This review article is intended to serve as a guide to hardware architectures for accelerating deep learning and improving the effectiveness of deep learning research.
dc.language.iso: eng
dc.publisher: IEEE
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Efficient Hardware Architectures for Accelerating Deep Neural Networks: Survey
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion
dc.rights.holder: © 2022 The Author(s)
dc.subject.nsi: VDP::Technology: 500
dc.subject.nsi: VDP::Technology: 500::Information and communication technology: 550
dc.source.pagenumber: 131788-131828
dc.source.volume: 10
dc.source.journal: IEEE Access
dc.identifier.doi: 10.1109/ACCESS.2022.3229767
dc.identifier.cristin: 2094131
dc.relation.project: Norges forskningsråd: 287918
dc.relation.project: The Seed Grant of IIT Bhubaneswar (TAML: Timing Analysis with Machine Learning): SP088
cristin.qualitycode: 1
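
As an illustrative aside, not part of the record or the paper itself: the abstract notes that DNNs are computationally demanding, and a minimal Python sketch of the multiply-accumulate (MAC) count for a single convolutional layer shows the scale involved. The layer dimensions below are hypothetical, chosen only for illustration.

# Hedged illustration: rough MAC count for one standard 2-D convolutional
# layer. Each output element needs a k x k x c_in dot product, and there
# are h_out * w_out output elements per output channel.
def conv2d_macs(h_out: int, w_out: int, c_in: int, c_out: int, k: int) -> int:
    return h_out * w_out * c_out * (k * k * c_in)

# Hypothetical example: a 3x3 layer with 256 input and 256 output channels
# on a 56x56 feature map needs ~1.85 billion MACs per input image.
if __name__ == "__main__":
    print(f"{conv2d_macs(56, 56, 256, 256, 3):,} MACs")  # 1,849,688,064 MACs

At billions of MACs per image for even a single layer, whole networks quickly exceed what general-purpose CPUs handle efficiently, which is the motivation for the GPU, FPGA, ASIC, and CGRA accelerators the survey reviews.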

