Emotion-Driven Music Recommendation System
Abstract
The "Enhanced Emotion-Driven Music Recommendation System" delivers personalized music recommendations by leveraging real-time facial expression analysis. The project adopts a learning strategy grounded in established research on Convolutional Neural Networks (CNNs) for facial expression recognition. By combining a "divide-and-conquer" training approach with attention mechanisms, data augmentation techniques, and Haar cascade face detection, the system achieves high precision in emotion detection. By playing background sound matched to the detected emotion, it offers a dynamic and immersive user experience.
Moving forward, the project aims to incorporate IoT devices that capture physiological indicators such as heart rate, further enriching adaptability and personalization. The "Enhanced Emotion-Driven Music Recommendation System" marks a noteworthy step in personalized music recommendation, surpassing conventional methods by adjusting dynamically to users' emotional states in real time.
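The detection-to-recommendation flow described in the abstract can be sketched as follows. This is a minimal illustration only: the seven emotion labels, the playlist names, and the `recommend` function are assumptions introduced here (the paper does not specify them), and the CNN's softmax output is stubbed with a fixed probability vector rather than produced by a real face-detection and classification pipeline.

```python
# Sketch of the emotion-to-music mapping stage, assuming the CNN emits a
# softmax probability vector over seven basic emotion classes.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

# Hypothetical mapping from detected emotion to a background-sound playlist.
PLAYLISTS = {
    "angry": "calming_instrumentals",
    "disgust": "ambient_nature",
    "fear": "soft_piano",
    "happy": "upbeat_pop",
    "neutral": "lofi_beats",
    "sad": "uplifting_acoustic",
    "surprise": "energetic_electronic",
}

def recommend(softmax_probs):
    """Pick the playlist for the most probable detected emotion."""
    top = max(range(len(EMOTIONS)), key=lambda i: softmax_probs[i])
    emotion = EMOTIONS[top]
    return emotion, PLAYLISTS[emotion]

# Stubbed CNN output strongly indicating "happy".
probs = [0.02, 0.01, 0.03, 0.80, 0.05, 0.04, 0.05]
emotion, playlist = recommend(probs)
```

In a full system, `probs` would come from a CNN applied to face regions found by a Haar cascade detector (e.g. OpenCV's `CascadeClassifier`), and the selected playlist would drive the background-sound playback.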