Intelligent emotional computing with deep convolutional neural networks: Multimodal feature analysis and application in smart learning environments
Naixin Zhang 1, Wai Yie Leong 2 *
1 Chengdu Jincheng College, Chengdu, CHINA
2 INTI International University, 71800 Negeri Sembilan, MALAYSIA
* Corresponding Author

Abstract

This study proposes an empathy-aware intelligent system for smart learning environments, integrating multimodal emotional cues such as facial expressions, heart rhythms, and digital behaviors through a deep convolutional neural network (CNN) architecture. The framework employs a dynamic attention mechanism to fuse heterogeneous features, enabling context-aware adaptation to learners’ emotional states. Validated via real-world classroom trials and public datasets including DAiSEE and Affective MOOC, the model achieves 85.3% accuracy in detecting subtle emotional fluctuations, outperforming conventional methods by 12-18% in scenario-specific adaptability. Educational experiments demonstrate significant improvements, with a 21% increase in learner engagement and 37% higher acceptance of personalized interventions. Compared to existing approaches such as single-modality support vector machines or static fusion models, our design introduces two innovations: dedicated CNN sub-networks for modality-specific feature extraction and self-attention-based dynamic fusion that prioritizes critical signals under varying learning contexts. These advancements bridge the gap between technical metrics and pedagogical relevance, transforming engagement analytics into actionable insights for responsive educational ecosystems.
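
To make the two named innovations concrete, the following is a minimal PyTorch sketch, not the authors' released code, of an architecture with per-modality CNN sub-networks whose embeddings are fused by self-attention before classification. All input shapes, layer sizes, and the four-class output are illustrative assumptions.

```python
# Minimal sketch of the architecture described in the abstract:
# one CNN sub-network per modality, then self-attention fusion.
# Shapes, layer widths, and class count are illustrative assumptions.
import torch
import torch.nn as nn


class FaceCNN(nn.Module):
    """2-D CNN for facial-expression frames (assumed 3x64x64 input)."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):                      # x: (B, 3, 64, 64)
        return self.proj(self.features(x).flatten(1))


class SignalCNN(nn.Module):
    """1-D CNN for a physiological or behavioral time series (assumed 1xT input)."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):                      # x: (B, 1, T)
        return self.proj(self.features(x).flatten(1))


class AttentionFusionClassifier(nn.Module):
    """Stacks per-modality embeddings as tokens and fuses them with self-attention."""
    def __init__(self, embed_dim: int = 128, num_classes: int = 4):
        super().__init__()
        self.face = FaceCNN(embed_dim)
        self.heart = SignalCNN(embed_dim)
        self.behavior = SignalCNN(embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, face_img, heart_seq, behavior_seq):
        # Each modality contributes one token: (B, 3, embed_dim)
        tokens = torch.stack(
            [self.face(face_img), self.heart(heart_seq), self.behavior(behavior_seq)],
            dim=1,
        )
        fused, _ = self.attn(tokens, tokens, tokens)   # dynamic reweighting of modalities
        return self.head(fused.mean(dim=1))            # pooled token -> emotion logits


if __name__ == "__main__":
    model = AttentionFusionClassifier()
    logits = model(
        torch.randn(2, 3, 64, 64),   # facial frames
        torch.randn(2, 1, 256),      # heart-rhythm window
        torch.randn(2, 1, 256),      # digital-behavior trace
    )
    print(logits.shape)              # torch.Size([2, 4])
```

Treating each modality embedding as a single attention token lets the fusion layer reweight facial, physiological, and behavioral evidence sample by sample, which is one plausible reading of the dynamic, context-dependent fusion the abstract describes.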

License

This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Article Type: Research Article

EURASIA J Math Sci Tech Ed, Volume 21, Issue 8, August 2025, Article No: em2680

https://doi.org/10.29333/ejmste/16661

Publication date: 01 Aug 2025

Online publication date: 28 Jul 2025

