Nature-Inspired Multi-Stage Event Detection Model with Optimized Feature-Based Learning for Video Datasets

  • Amar Pal Singh, Student, C.S. Department, Chandigarh University, Lucknow, Uttar Pradesh, India.

Abstract

Visual tracking is a crucial aspect of computer vision, with applications in fields such as robotics, video surveillance, human-computer interaction, autonomous cars, and sports analytics. It involves estimating the state of the object being tracked, which is challenging because of changes in visibility and appearance. Object tracking algorithms derive much of their effectiveness from features such as grayscale, gradient, texture, colour, and fused representations. Current studies focus on identifying events in real time across sports, traffic, natural catastrophes, and other domains, and object recognition and localization in 3-D environments remain among the most researched topics in computer vision. Visual tracking aims to follow an object in any environment, whether monitoring a single target or tracking multiple objects simultaneously. An EFS-linear MSVM approach is proposed for identifying multiple events within video sequences; the research aims to provide a suitable feature selection and classification strategy for identifying multiple events in YouTube videos. The ensemble feature selection (EFS) method selects the most informative feature subsets from the extracted vectors, and these subsets are passed to the multi-class SVM (MSVM) for classification. Euclidean distance is used to retrieve relevant events and activities, and object tracking is used to accurately interpret an object's motion in a video.
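The abstract outlines a pipeline of feature extraction, ensemble feature selection (EFS), linear multi-class SVM classification, and Euclidean-distance retrieval. The sketch below illustrates one way such a pipeline could be wired together; the synthetic feature vectors, the choice of filter criteria (ANOVA F-score and mutual information) inside the EFS step, and the top-40 feature cut-off are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of an EFS + linear multi-class SVM + Euclidean retrieval
# pipeline, assuming synthetic feature vectors in place of the paper's
# grayscale/gradient/texture/colour descriptors.
import numpy as np
from scipy.stats import rankdata
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-clip feature vectors and event labels.
X, y = make_classification(n_samples=600, n_features=120, n_informative=30,
                           n_classes=4, random_state=0)

# Ensemble feature selection (illustrative): average the rankings produced by
# two filter criteria and keep the top-k features.
f_scores, _ = f_classif(X, y)
mi_scores = mutual_info_classif(X, y, random_state=0)
ensemble_rank = (rankdata(-f_scores) + rankdata(-mi_scores)) / 2.0
k = 40
selected = np.argsort(ensemble_rank)[:k]
X_sel = X[:, selected]

# Multi-class linear SVM (one-vs-rest by default) on the selected features.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.25,
                                           random_state=0, stratify=y)
clf = LinearSVC(C=1.0, max_iter=10000)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Euclidean-distance retrieval: find the training clips closest to a query clip.
query = X_te[0]
dists = np.linalg.norm(X_tr - query, axis=1)
nearest = np.argsort(dists)[:5]
print("labels of 5 nearest training clips:", y_tr[nearest])
```

The same three stages (rank-and-select, classify, retrieve by distance) would apply unchanged if the synthetic vectors were replaced with real grayscale, gradient, texture, and colour features extracted from video frames.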
