Optimizing Quantum Neural Networks: A Comparative Study of QNN Architectures and Training Techniques
Abstract
This study explores Quantum Neural Network (QNN) architectures that leverage quantum computing to enhance the performance of machine learning models. The architectures examined include the Hybrid Quantum Neural Network (H-QNN), the Deep Quantum Neural Network (DQNN), the Quantum Convolutional Neural Network (QCNN), and the Coherent Feed-Forward Quantum Neural Network; the EfficientSU2 architecture was also evaluated and demonstrated superior accuracy and lower cost. Optimization techniques such as Adaptive Moment Estimation (ADAM), Analytic Quantum Gradient Descent (AQGD), and Nakanishi-Fujii-Todo (NFT) were integrated to improve training efficiency. The results show that EfficientSU2 trained for 10,000 iterations yielded the lowest cost and the highest accuracy across all tested datasets, reaching up to 97% accuracy. In conclusion, the EfficientSU2 architecture, combined with these optimization techniques, significantly improved the processing of complex data, paving the way for substantial advances in quantum-based artificial intelligence applications.
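To make the reported setup concrete, the following is a minimal sketch of a variational quantum classifier built from the EfficientSU2 ansatz and trained with the ADAM optimizer using Qiskit Machine Learning. Only EfficientSU2, ADAM, and the 10,000-iteration budget come from the abstract; the ZZFeatureMap encoding, the qubit count, the toy dataset, and the reduced iteration count used here are illustrative assumptions, not the study's actual configuration.

```python
# Minimal sketch (not the authors' exact setup): a variational quantum
# classifier with the EfficientSU2 ansatz, trained with ADAM via Qiskit
# Machine Learning. The feature map, dataset, and iteration budget are
# illustrative assumptions.
import numpy as np
from qiskit.circuit.library import ZZFeatureMap, EfficientSU2
from qiskit.primitives import Sampler
from qiskit_algorithms.optimizers import ADAM
from qiskit_machine_learning.algorithms import VQC

num_features = 2  # assumed toy problem size; one qubit per feature

# Encode classical features, then apply the trainable EfficientSU2 ansatz
feature_map = ZZFeatureMap(feature_dimension=num_features, reps=1)
ansatz = EfficientSU2(num_qubits=num_features, reps=3)

vqc = VQC(
    sampler=Sampler(),
    feature_map=feature_map,
    ansatz=ansatz,
    optimizer=ADAM(maxiter=100),  # the study reports up to 10,000 iterations
)

# Tiny synthetic dataset purely for illustration
rng = np.random.default_rng(seed=42)
X = rng.random((20, num_features))
y = (X.sum(axis=1) > 1.0).astype(int)

vqc.fit(X, y)
print("training accuracy:", vqc.score(X, y))
```

Swapping `ADAM` for `AQGD` or `NFT` (both available in `qiskit_algorithms.optimizers`) would reproduce the other optimizer configurations compared in the study.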