Lightweight Optimization of YOLO Models for Resource-Constrained Devices: A Comprehensive Review

Authors

  • Ula Kh. Altaie, Department of Computer Networks Engineering, Al-Nahrain University, College of Information Engineering, Baghdad, Iraq
  • A.E. Abdelkareem, Department of Computer Networks Engineering, Al-Nahrain University, College of Information Engineering, Baghdad, Iraq
  • Abdullah Alhasanat, Alhussein Bin Talal University, College of Engineering, Jordan

DOI:

https://doi.org/10.24237/djes.2025.18401

Keywords:

YOLO, Edge Computing, IoT, Lightweight Modules, Reduction Techniques

Abstract

The growing adoption of the Internet of Things (IoT) and edge computing has increased the need for efficient real-time object detectors that can operate in resource-constrained environments. YOLO (You Only Look Once), a state-of-the-art object detection algorithm, is well known for its real-time performance across a variety of applications. However, because traditional YOLO models are designed for high-performance systems, they remain computationally heavy and less practical for low-power embedded platforms such as the Raspberry Pi, ARM-based processors, and NVIDIA Jetson edge devices. This paper investigates and analyzes optimization strategies that enhance YOLO's efficiency for edge deployment, providing a comprehensive review of techniques that address these deployment challenges. Two main approaches are explored: structural modification using lightweight modules such as ShuffleNet, MobileNet, and GhostNet, and model compression via knowledge distillation, quantization, and pruning. The reviewed works demonstrate significant reductions in model size and complexity, generally with faster inference and improved accuracy; in certain cases, however, a slight drop in accuracy or frame rate is the price of higher efficiency. Structural modifications generally support model stability, efficiency, and generalization, while compression-based techniques further improve model compactness and inference throughput. A combined, or hybrid, optimization strategy offers the most balanced solution, achieving strong detection accuracy alongside reductions in model size, GFLOPs, and overall inference cost. This narrative synthesis review provides guidance for developing scalable, energy-aware YOLO models suited to edge-based detection applications in fields such as autonomous vehicles, smart cities, and IoT-driven systems.
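To make the two optimization routes above concrete, the sketches below illustrate them in PyTorch. They are minimal illustrations under stated assumptions, not implementations from the reviewed papers. The first shows a depthwise separable convolution, the core building block behind MobileNet-style lightweight backbones; the module name, layer arrangement, and SiLU activation are our own illustrative choices.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Illustrative MobileNet-style block: a k x k depthwise convolution
    (one filter per input channel) followed by a 1x1 pointwise convolution
    that mixes channels and sets the output width."""

    def __init__(self, c_in: int, c_out: int, k: int = 3, stride: int = 1):
        super().__init__()
        # Depthwise: groups=c_in means each input channel is filtered alone.
        self.dw = nn.Conv2d(c_in, c_in, k, stride, padding=k // 2,
                            groups=c_in, bias=False)
        # Pointwise: 1x1 convolution recombines channels.
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pw(self.dw(x))))

# Sanity check: same spatial size, new channel width.
y = DepthwiseSeparableConv(64, 128)(torch.randn(1, 64, 80, 80))
print(y.shape)  # torch.Size([1, 128, 80, 80])
```

Relative to a standard k x k convolution, this factorization reduces the multiply-accumulate count by roughly a factor of 1/C_out + 1/k^2, which is where MobileNet-style backbones obtain most of their savings.

The second sketch touches the three compression techniques named above: INT8 quantization via the Ultralytics export API, L1 unstructured pruning with torch.nn.utils.prune, and the classic soft-label distillation loss. It assumes the Ultralytics package and the publicly available yolov8n.pt checkpoint; the 30% sparsity, temperature T=4, and coco8.yaml calibration set are illustrative values rather than settings drawn from the reviewed works, and the three steps are shown independently rather than as one pipeline.

```python
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune
from ultralytics import YOLO

# Quantization: export an INT8 model for edge targets. Ultralytics performs
# post-training calibration internally using the given dataset.
model = YOLO("yolov8n.pt")
model.export(format="tflite", int8=True, data="coco8.yaml")

# Pruning: zero the 30% smallest-magnitude weights in every Conv2d, then
# remove the re-parametrization so the sparsity becomes permanent. Note that
# unstructured zeros shrink storage only with a sparse format; the filter
# (channel) pruning used in several reviewed works is what speeds up dense
# hardware.
for module in model.model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")

# Knowledge distillation: match temperature-softened teacher and student
# logits with a KL divergence, scaled by T^2 as in Hinton et al.
def kd_loss(student_logits, teacher_logits, T: float = 4.0):
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```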

Published

2025-12-10

How to Cite

[1] "Lightweight Optimization of YOLO Models for Resource-Constrained Devices: A Comprehensive Review", DJES, vol. 18, no. 4, pp. 1–26, Dec. 2025, doi: 10.24237/djes.2025.18401.
