Human Activity Recognition Using Feature Fusion
Abstract
Human activity recognition is becoming increasingly important in today's society, as it is and will be used in many new applications, not only for security and surveillance but also for understanding human behavioural patterns. The goal of this research is to create an intelligent model that can recognise human activity from video input drawn from various sources, such as CCTV recordings or YouTube videos. In recent years, several techniques for identifying human activity have been described, employing sensor-based, depth, skeleton, and RGB (red, green, and blue) datasets. Most approaches that classify activities using sensor-based and skeletal datasets have limitations in feature representation, complexity, and performance, and providing an effective and economical approach to human activity recognition from a video dataset remains a challenging problem. In this research, we propose a frame-processing pipeline derived from video files that discriminates actions by capturing spatial information and temporal changes. To extract discriminative features, we perform transfer learning using pretrained models (VGG19 and DenseNet121 trained on the ImageNet dataset), and we evaluate the proposed approach with a number of fusion techniques. On the UCF-50 dataset, our deep learning-based approach is effective, achieving accuracy between 95% and 98%.
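To make the fusion step concrete, the following is a minimal sketch of one common fusion technique, concatenation of per-frame features from the two backbones followed by temporal averaging. The abstract does not specify which fusion strategies were evaluated, so this is an illustrative assumption, and random arrays stand in for the backbone outputs; a real pipeline would run each sampled frame through VGG19 and DenseNet121 (ImageNet weights, classification head removed, global average pooling) to obtain these vectors.

```python
import numpy as np

# Random arrays stand in for backbone outputs (assumption for illustration).
# With global average pooling, VGG19 yields a 512-dim vector per frame and
# DenseNet121 a 1024-dim vector per frame.
rng = np.random.default_rng(0)
n_frames = 16                                            # frames sampled per clip
vgg19_feats = rng.standard_normal((n_frames, 512))       # VGG19 GAP features
densenet_feats = rng.standard_normal((n_frames, 1024))   # DenseNet121 GAP features

# Fuse by concatenating the per-frame descriptors from both backbones...
fused = np.concatenate([vgg19_feats, densenet_feats], axis=1)  # shape (16, 1536)

# ...then average over time to obtain one clip-level descriptor
# that a classifier (e.g. a small dense head) could consume.
clip_descriptor = fused.mean(axis=0)                           # shape (1536,)

print(fused.shape, clip_descriptor.shape)
```

Concatenation preserves both backbones' information at the cost of a wider classifier input; alternatives such as element-wise averaging (after projecting to a common dimension) trade capacity for compactness.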