Elaris Computing Nexus


Intelligent Architecture Design for Deep Convolutional Neural Networks Using Layer-Wise Optimization


Received On: 16 May 2025

Revised On: 30 July 2025

Accepted On: 16 August 2025

Published On: 02 September 2025

Volume 01, 2025

Pages: 132-142


Abstract

Recent progress in deep learning has produced significant breakthroughs in image classification. Nevertheless, running complex convolutional neural networks (CNNs) on resource-constrained devices, such as smartphones or embedded sensors, remains a major challenge. Standard CNN models are usually too large: they demand substantial memory and processing power and carry many redundant parameters, which makes them impractical for edge-computing applications. Current approaches to shrinking these models, such as pruning or quantization, tend to take a one-size-fits-all approach: they compress all layers uniformly, disregarding the fact that some layers matter more to the network's accuracy than others. This often causes a significant drop in performance. To resolve these issues, we propose AdaLayerNet, a novel adaptive CNN architecture. Its key innovation is that it intelligently allocates computing power and memory across the network's layers according to their significance. The system preserves the early layers, which compute low-level features, at high precision, while pruning and quantizing the later, less significant layers more aggressively. These strategies are coordinated by an integrated optimization framework that reduces model complexity to a minimum without sacrificing the model's fundamental capabilities. To make this process transparent, AdaLayerNet includes visualization tools such as an architecture ribbon and a layer fingerprint, which give intuitive visual insight into how resources are allocated across the layers. Our experiments indicate that AdaLayerNet achieves a strong trade-off between accuracy, speed, and memory usage. It offers a practical and scalable framework for constructing high-performance CNNs that execute efficiently on edge devices. By demonstrating the strength of layer-specific optimization, this framework opens the way to smaller, more efficient, and more interpretable deep learning models.
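The abstract's central idea, allocating pruning and quantization budgets per layer according to layer importance, can be illustrated with a short sketch. The following Python/PyTorch example is a minimal illustration under stated assumptions: it scores each convolutional layer by mean weight magnitude, maps that score to a keep-ratio and a bit-width, and applies magnitude pruning. The helper names (`layer_importance`, `allocate_budgets`, `magnitude_prune`), the importance proxy, and the bit-width thresholds are hypothetical, not the paper's exact criterion or optimization framework.

```python
# Hypothetical sketch of layer-wise budget allocation in the spirit of
# AdaLayerNet. The importance proxy and budget formulas are illustrative
# assumptions, not the method described in the paper.
import torch
import torch.nn as nn

def layer_importance(conv: nn.Conv2d) -> float:
    # Proxy for a layer's importance: mean absolute weight magnitude.
    # (The paper's actual importance criterion may differ.)
    return conv.weight.detach().abs().mean().item()

def allocate_budgets(model: nn.Module, min_keep=0.3, max_keep=1.0):
    convs = [(n, m) for n, m in model.named_modules() if isinstance(m, nn.Conv2d)]
    scores = torch.tensor([layer_importance(m) for _, m in convs])
    # Normalize importance to [0, 1]; more important layers keep more weights.
    norm = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)
    budgets = {}
    for (name, _), s in zip(convs, norm):
        keep_ratio = min_keep + (max_keep - min_keep) * s.item()
        # Assign wider bit-widths to more important layers (assumed mapping).
        bits = 8 if s > 0.66 else 6 if s > 0.33 else 4
        budgets[name] = {"keep_ratio": round(keep_ratio, 2), "bits": bits}
    return budgets

def magnitude_prune(conv: nn.Conv2d, keep_ratio: float):
    # Zero out the smallest-magnitude weights so `keep_ratio` of them survive.
    w = conv.weight.detach()
    k = max(1, int(w.numel() * keep_ratio))
    threshold = w.abs().flatten().kthvalue(w.numel() - k + 1).values
    conv.weight.data.mul_((w.abs() >= threshold).float())

if __name__ == "__main__":
    # Small stand-in CNN; early layers often score higher and stay denser.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    )
    budgets = allocate_budgets(model)
    for name, cfg in budgets.items():
        magnitude_prune(dict(model.named_modules())[name], cfg["keep_ratio"])
        print(name, cfg)
```

The monotone mapping from importance to budget mirrors the abstract's idea of protecting accuracy-critical layers, often the early low-level feature extractors, while compressing the less significant later layers more aggressively.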

Keywords

Adaptive Convolutional Neural Networks, Layer-Wise Optimization, Model Compression, Pruning and Quantization, Resource-Efficient Deep Learning.

CRediT Author Statement

The author reviewed the results and approved the final version of the manuscript.

Acknowledgements

The author thanks the Department of Modern Mechanics for supporting this research.

Funding

No funding was received to assist with the preparation of this manuscript.

Ethics Declarations

Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Availability of Data and Materials

Data sharing is not applicable to this article as no new data were created or analysed in this study.

Author Information

Contributions

All authors contributed equally to the paper, and all authors have read and agreed to the published version of the manuscript.

Corresponding Author

Zhu Jiping

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0) license, which allows you to copy and redistribute the material for non-commercial purposes, provided you give appropriate credit and do not make any changes whatsoever to the original, i.e. no derivatives of the original work. To view a copy of this license, visit: https://creativecommons.org/licenses/by-nc-nd/4.0/

Cite this Article

Zhu Jiping, “Intelligent Architecture Design for Deep Convolutional Neural Networks Using Layer-Wise Optimization”, Elaris Computing Nexus, vol. 01, pp. 132-142, 2025, doi: 10.65148/ECN/2025013.

Copyright

© 2025 Zhu Jiping. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited and the material is not modified.