Archive Issue – Vol.5, Issue.2 (April-June 2025)
Automatic Gesture-based Human-Computer Interaction using Deep Learning
Abstract
Human-Computer Interaction (HCI) has advanced rapidly with the integration of computer vision and deep learning. Traditional input devices such as keyboards and mice are being augmented or replaced by natural interfaces, such as hand gestures, which enable intuitive and contactless interaction. Gesture recognition plays a pivotal role in applications ranging from virtual reality and gaming to healthcare and assistive technologies. Recent advances in Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformers have enabled robust gesture recognition under varying backgrounds, lighting, and occlusions. This paper explores deep learning techniques for automatic gesture-based HCI, highlighting datasets, architectures, evaluation metrics, real-world applications, challenges, and future directions.
Key-Words / Index Term: Gesture recognition, Human-Computer Interaction, Deep Learning, Computer Vision, CNN, RNN, Transformer.
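Illustrative sketch: the survey above covers CNN-, RNN-, and Transformer-based gesture pipelines. The following minimal PyTorch snippet is not the paper's implementation; it simply combines a small per-frame CNN encoder with an LSTM over the frame sequence. The input resolution (64x64 RGB frames), the 16-frame clip length, and the 10 gesture classes are assumptions chosen for demonstration.

```python
# Hypothetical sketch of a CNN + LSTM gesture classifier (not the paper's model).
# Assumptions: RGB clips of shape (batch, frames, 3, 64, 64) and 10 gesture classes.
import torch
import torch.nn as nn

class CNNLSTMGestureNet(nn.Module):
    def __init__(self, num_classes: int = 10, hidden_size: int = 128):
        super().__init__()
        # Per-frame spatial encoder: two conv blocks followed by global average pooling.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # -> (batch*frames, 64, 1, 1)
        )
        # Temporal model over the sequence of per-frame embeddings.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = clips.shape
        feats = self.encoder(clips.view(b * t, c, h, w)).view(b, t, 64)
        _, (h_n, _) = self.lstm(feats)      # h_n: (1, batch, hidden_size)
        return self.classifier(h_n[-1])     # logits per gesture class

if __name__ == "__main__":
    model = CNNLSTMGestureNet()
    dummy_clip = torch.randn(2, 16, 3, 64, 64)   # 2 clips of 16 frames each
    print(model(dummy_clip).shape)               # torch.Size([2, 10])
```

In practice the per-frame encoder is usually a pretrained backbone, and the temporal model may be a 3D CNN or a Transformer, as in several of the references below.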
References
- Mitra, S., & Acharya, T. (2007). Gesture Recognition: A Survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 37(3), 311–324.
- Aggarwal, J. K., & Cai, Q. (1999). Human motion analysis: A review. Computer Vision and Image Understanding, 73(3), 428–440.
- Molchanov, P., Yang, X., Gupta, S., Kim, K., Tyree, S., & Kautz, J. (2016). Online Detection and Classification of Dynamic Hand Gestures with Recurrent 3D Convolutional Neural Networks. CVPR, 4207–4215.
- Raheja, J. L., Das, R., & Chaudhary, S. (2011). Real-time hand gesture recognition system for Indian Sign Language. International Conference on Signal Processing and Communication, 53–57.
- Huang, J., Zhou, W., Zhang, Q., Li, H., & Li, W. (2015). Sign language recognition using 3D convolutional neural networks. IEEE International Conference on Multimedia and Expo (ICME), 1–6.
- Wang, J., Liu, Z., Wu, Y., & Yuan, J. (2012). Mining Actionlet Ensemble for Action Recognition with Depth Cameras. CVPR, 1290–1297.
- Ren, Z., Yuan, J., Meng, J., & Zhang, Z. (2013). Robust Part-Based Hand Gesture Recognition Using Kinect Sensor. IEEE Transactions on Multimedia, 15(5), 1110–1120.
- Zhang, J., Liu, Y., & Liu, Q. (2019). Real-Time Hand Gesture Recognition Using Deep Learning. Journal of Visual Communication and Image Representation, 61, 1–11.
- Wang, P., Li, W., Gao, Z., & Yuan, J. (2016). Action Recognition Based on Joint Trajectory Maps with Convolutional Neural Networks. AAAI Conference on Artificial Intelligence, 1317–1323.
- Zhu, W., Lan, C., Xing, J., Zeng, W., Li, Y., Shen, L., & Xie, X. (2016). Co-occurrence Feature Learning for Skeleton-Based Action Recognition Using Regularized Deep LSTM Networks. AAAI, 3697–3703.
- Simonyan, K., & Zisserman, A. (2014). Two-stream convolutional networks for action recognition in videos. NIPS, 568–576.
- Neverova, N., Wolf, C., Taylor, G., & Nebout, F. (2014). Multi-scale deep learning for gesture detection and localization. ECCV Workshops, 474–490.
- Molchanov, P., Gupta, S., Kim, K., & Kautz, J. (2015). Hand Gesture Recognition with 3D Convolutional Neural Networks. CVPR Workshops, 1–7.
- Cao, Z., Simon, T., Wei, S. E., & Sheikh, Y. (2017). Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. CVPR, 1302–1310.
- Huang, J., Zhou, W., Zhang, Q., & Li, H. (2018). Sign Language Recognition Using 3D CNNs and Skeleton Data. IEEE Transactions on Circuits and Systems for Video Technology, 28(11), 3031–3044.
- Shahroudy, A., Liu, J., Ng, T. T., & Wang, G. (2016). NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis. CVPR, 1010–1019.
- ChaLearn LAP IsoGD Dataset: http://www.cbsr.ia.ac.cn/english/IsoGD.html
- Zhang, J., Liu, Q., & Liu, Y. (2020). Multimodal Sensor Fusion for Robust Gesture Recognition. Sensors, 20(5), 1502.
- Ordóñez, F. J., & Roggen, D. (2016). Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition. Sensors, 16(1), 115.
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is All You Need. NIPS, 5998–6008.
- Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., et al. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR.
- Si, C., Chen, W., Wang, B., et al. (2021). Motion-Aware Attention for Gesture Recognition in Videos. IEEE Transactions on Multimedia, 23, 1075–1088.
- Zhang, X., Zhao, W., & Yang, M. (2019). Hand Gesture Recognition with Hierarchical Transformer Networks. ACM Multimedia, 2211–2219.
- Ko, B. C. (2018). Hands-on Human Activity Recognition Using Wearable Sensors: A Review. Sensors, 18(12), 3991.
- Huang, C., & Chen, H. (2020). Deep Learning Approaches for Sign Language Recognition: A Review. Pattern Recognition Letters, 138, 102–112.
- Neverova, N., et al. (2016). ModDrop: Adaptive Multimodal Gesture Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(8), 1692–1706.
- Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., & Serre, T. (2011). HMDB: A Large Video Database for Human Motion Recognition. ICCV, 2556–2563.
- Ji, S., Xu, W., Yang, M., & Yu, K. (2013). 3D Convolutional Neural Networks for Human Action Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 221–231.
- Zhang, Y., & Xu, C. (2020). Real-time Gesture Recognition for Human-Computer Interaction Using Deep Learning. Journal of Real-Time Image Processing, 17(5), 1579–1592.
- Lahiri, S., & Sarkar, R. (2021). Deep Learning for Robust Multimodal Gesture Recognition. Multimedia Tools and Applications, 80, 32011–32036.
- Yao, A., Wang, Z., & Li, X. (2021). Attention-Based Multi-Stream Networks for Gesture Recognition. Pattern Recognition, 113, 107804.
- Khamparia, A., et al. (2021). Gesture Recognition Using Deep Learning: A Survey. Journal of Ambient Intelligence and Humanized Computing, 12, 737–758.
- Singh, R., & Singh, R. (2020). Deep Learning for Human Action Recognition: Review. Multimedia Tools and Applications, 79, 29891–29923.
- Wang, P., & Li, W. (2020). 3D Hand Gesture Recognition Using Transformers. Neurocomputing, 408, 279–291.
- Luo, Y., Chen, H., & Chen, S. (2019). Real-Time Vision-Based Gesture Recognition with Multimodal Sensors. Sensors, 19(24), 5456.
- Ghosh, S., & Sarkar, R. (2020). Multi-Modal Deep Learning for Gesture Recognition: A Survey. IEEE Access, 8, 108906–108927.
- Zhang, L., Chen, Y., & Zhou, F. (2021). Efficient Transformer Models for Gesture Recognition in Edge Devices. Journal of Real-Time Image Processing, 18, 1017–1035.
- Fei, Y., et al. (2020). Robust Hand Gesture Recognition Using 3D CNNs and Attention Mechanisms. Pattern Recognition Letters, 135, 215–222.
- Tran, D., Bourdev, L., Fergus, R., Torresani, L., & Paluri, M. (2015). Learning Spatiotemporal Features with 3D Convolutional Networks. ICCV, 4489–4497.
- Zhang, X., et al. (2021). Transformer Networks for Detecting Synthetic Images. Pattern Recognition Letters, 146, 29–37.
- Wang, L., et al. (2022). Frequency-Domain Analysis for GAN Image Detection. IEEE Access, 10, 34121–34132.
Citation
Utkarsh Dubey, "Automatic Gesture-based Human-Computer Interaction using Deep Learning," International Journal of Scientific Research in Technology & Management, Vol.5, Issue.2, pp.1-9, 2025.
A Review on Various Approaches used in Leaf Disease Detection
Abstract
Global food security depends heavily on agricultural output, and artificial intelligence is increasingly used to raise it. One such application is the automated identification of leaf diseases, which can be difficult to diagnose by eye because infected leaves often appear nearly normal. Without prompt and appropriate treatment, these diseases can significantly reduce crop yield and quality, so early identification is crucial for maintaining productivity and guiding intervention. Deep learning (DL), a cornerstone of machine learning (ML), has become increasingly essential for the early detection and categorization of plant diseases. This paper reviews recent developments in crop disease detection using convolutional neural networks (CNNs), transfer learning, and related deep learning methods. It first examines deep learning architectures, data sources, and the image processing techniques used to prepare imaging data, and then surveys how recent DL architectures and visualization tools support plant disease classification and symptom identification. The paper also discusses several open issues that must be addressed before automated systems can reliably identify plant diseases in the field.
Key-Words / Index Term: Leaf Disease, Machine Learning, Convolutional Neural Network, Alternaria Alternata, Bacterial Blight, Cercospora Leaf Spot.
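Illustrative sketch: as a hedged example of the transfer-learning approach this review surveys (not a system from the paper), the snippet below fine-tunes only the classification head of a torchvision ResNet-18 for four leaf classes. The dataset path `leaf_dataset/train` and the class count are assumptions for demonstration.

```python
# Illustrative transfer-learning sketch for leaf disease classification
# (not a system from the review). Assumes an ImageFolder-style dataset at
# "leaf_dataset/train" with four class sub-folders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("leaf_dataset/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Pretrained backbone (torchvision >= 0.13 weights API), frozen except for a new head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 4)   # new head for 4 leaf classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:             # one illustrative training pass
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```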
References
- Vijai Singh, A.K. Misra, Detection of plant leaf diseases using image segmentation and soft computing techniques, Information Processing in Agriculture, Volume 4, Issue 1, 2017, Pages 41-49.
- Savita N. Ghaiwat, Parul Arora, Detection and classification of plant leaf diseases using image processing techniques: a review, International Journal of Recent Advances in Engineering and Technology, 2 (3), 2014, ISSN: 2347-2812.
- Sanjay B. Dhaygude, Nitin P. Kumbhar, Agricultural plant leaf disease detection using image processing, International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, 2 (1), 2013.
- Singh, Jaskaran & Kaur, Harpreet. (2019). Plant Disease Detection Based on Region-Based Segmentation and KNN Classifier. Springer. doi: 10.1007/978-3-030-00665-5_154.
- E. Hossain, M. F. Hossain and M. A. Rahaman, "A Color and Texture Based Approach for the Detection and Classification of Plant Leaf Disease Using KNN Classifier," 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), 2019, pp. 1-6, doi: 10.1109/ECACE.2019.8679247.
- Yousuf, Aamir & Khan, Ufaq. (2021). Ensemble Classifier for Plant Disease Detection. International Journal of Computer Science and Mobile Computing, 10, 14-22. doi: 10.47760/ijcsmc.2021.v10i01.003.
- C. U. Kumari, S. Jeevan Prasad and G. Mounika, "Leaf Disease Detection: Feature Extraction with K-means clustering and Classification with ANN," 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC), 2019, pp. 1095-1098, doi: 10.1109/ICCMC.2019.8819750.
- A. Devaraj, K. Rathan, S. Jaahnavi and K. Indira, "Identification of Plant Disease using Image Processing Technique," 2019 International Conference on Communication and Signal Processing (ICCSP), 2019, pp. 0749-0753, doi: 10.1109/ICCSP.2019.8698056.
- Bhange, M., Hingoliwala, H.A., ‘Smart Farming: Pomegranate Disease Detection Using Image Processing’, Second International Symposium on Computer Vision and the Internet, Volume 58, pp. 280-288, 2015.
- Pujari, J.D., Yakkundimath, R., Byadgi, A.S., ‘Image Processing Based Detection of Fungal Diseases In Plants’, International Conference on Information and Communication Technologies, Volume 46, pp. 1802-1808, 2015.
- Singh, V., Misra, A.K., ‘Detection of Plant Leaf Diseases Using Image Segmentation and Soft Computing Techniques’, Information Processing in Agriculture, Volume 8, pp. 252-277, 2016.
- Kiani, E., Mamedov, T., ‘Identification of plant disease infection using soft-computing: Application to modern botany’, 9th International Conference on Theory and Application of Soft Computing, Computing with Words and Perception, Volume 120, pp. 893-900, 2017.
- Ali, H., Lali, M.I., Nawaz, M.Z., Sharif, M., Saleem, B.A., ‘Symptom based automated detection of citrus diseases using color histogram and textural descriptors’, Computers and Electronics in Agriculture, Volume 138, pp. 92-104, 2017.
- Saradhambal, G., Dhivya, R., Latha, S., Rajesh, R., ‘Plant Disease Detection and its Solution using Image Classification’, International Journal of Pure and Applied Mathematics, Volume 119, Issue 14, pp. 879-884, 2018.
- Ritika Chouksey, Preety D. Swami, "Leaf Disease Detection using Polynomial SVM and Euclidean Distance Metric", International Journal for Research in Applied Science and Engineering Technology (IJRASET), Volume 10, Issue VII, pp. 1926-1935, ISSN: 2321-9653.
Citation
Raushan Kumar, Pradeep Kumar Pandey, "A Review on Various Approaches used in Leaf Disease Detection," International Journal of Scientific Research in Technology & Management, Vol.5, Issue.2, pp.10-14, 2025.
Automatic Leaf Disease Detection using Polynomial Support Vector Machine
Abstract
Agricultural production is essential to global food security, and artificial intelligence is increasingly used to raise its output. One notable application is the automated diagnosis of leaf diseases, which is difficult with conventional visual inspection because symptomatic leaves can appear almost normal. Early detection is essential, since diseases that go unnoticed can seriously degrade crop quality and yield. Although this field has attracted extensive research, many existing systems still have shortcomings. The proposed method combines a Polynomial Support Vector Machine (SVM), a classifier well suited to non-linear data, with a Euclidean distance measure to evaluate the spatial relationships between clusters of data points. A dataset covering four categories (Alternaria Alternata, Bacterial Blight, Cercospora Leaf Spot, and Healthy Leaves) was obtained from Kaggle. The proposed technique achieves 97.30% accuracy, slightly higher than a KNN classifier.
Key-Words / Index Term: Leaf Disease, Machine Learning, Convolutional Neural Network, Alternaria Alternata, Bacterial Blight, Cercospora Leaf Spot.
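Illustrative sketch: the snippet below shows a polynomial-kernel SVM of the kind described in the abstract, trained on simple color-histogram features via scikit-learn. The feature choice, the synthetic placeholder data, and the hyper-parameters are assumptions for demonstration and do not reproduce the paper's pipeline or its 97.30% result.

```python
# Hedged sketch of a polynomial-kernel SVM classifier for leaf features
# (feature choice and placeholder data are illustrative assumptions, not the paper's pipeline).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Reduce an RGB image to a normalized per-channel histogram feature vector."""
    hist = [np.histogram(image[..., ch], bins=bins, range=(0, 255))[0] for ch in range(3)]
    feats = np.concatenate(hist).astype(float)
    return feats / (feats.sum() + 1e-9)

# Placeholder data: in practice X would come from the Kaggle leaf images
# (Alternaria Alternata, Bacterial Blight, Cercospora Leaf Spot, Healthy Leaves).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(200, 64, 64, 3))
X = np.stack([color_histogram(img) for img in images])
y = rng.integers(0, 4, size=200)          # 4 classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="poly", degree=3, C=1.0)  # polynomial-kernel SVM
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```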
References
- Smith, J., Doe, A., & Johnson, R. (2023). Advances in automated plant disease detection: Enhancing agricultural productivity. Journal of Agricultural Technology, 15(4), 45-58.
- Patel, R., Kumar, S., & Sharma, P. (2023). Machine learning applications in plant disease detection: A review. Journal of Agricultural Science, 11(2), 123-135.
- Bhargava, A., Shukla, O. P., Goswami, M. H., Alsharif, P., Uthansakul, M., & Uthansakul, M., "Plant Leaf Disease Detection, Classification, and Diagnosis Using Computer Vision and Artificial Intelligence: A Review," IEEE Access, vol. 12, pp. 37443-37469, 2024, doi: 10.1109/ACCESS.2024.3373001.
- Johnson, L., & Lee, A. (2023). Evaluating classifiers for plant disease detection: A comparative analysis. International Journal of Agricultural Technology, 20(1), 45-60.
- Singh, Jaskaran & Kaur, Harpreet. (2019). Plant Disease Detection Based on Region-Based Segmentation and KNN Classifier. doi: 10.1007/978-3-030-00665-5_154.
- E. Hossain, M. F. Hossain and M. A. Rahaman, "A Color and Texture Based Approach for the Detection and Classification of Plant Leaf Disease Using KNN Classifier," 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), 2019, pp. 1-6, doi: 10.1109/ECACE.2019.8679247.
- Yousuf, Aamir & Khan, Ufaq. (2021). Ensemble Classifier for Plant Disease Detection. International Journal of Computer Science and Mobile Computing, 10, 14-22. doi: 10.47760/ijcsmc.2021.v10i01.003.
- C. U. Kumari, S. Jeevan Prasad and G. Mounika, "Leaf Disease Detection: Feature Extraction with K-means clustering and Classification with ANN," 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC), 2019, pp. 1095-1098, doi: 10.1109/ICCMC.2019.8819750.
- Devaraj, A., Rathan, K., Jaahnavi, S., & Indira, K., "Identification of Plant Disease using Image Processing Technique," 2019 International Conference on Communication and Signal Processing (ICCSP), 2019, pp. 0749-0753, doi: 10.1109/ICCSP.2019.8698056.
- Bhange, M., & Hingoliwala, H.A., ‘Smart Farming: Pomegranate Disease Detection Using Image Processing’, Second International Symposium on Computer Vision and the Internet, Volume 58, pp. 280-288, 2015.
- Pujari, J.D., Yakkundimath, R., & Byadgi, A.S., ‘Image Processing Based Detection of Fungal Diseases In Plants’, International Conference on Information and Communication Technologies, Volume 46, pp. 1802-1808, 2015.
- Singh, V., & Misra, A.K., ‘Detection of Plant Leaf Diseases Using Image Segmentation and Soft Computing Techniques’, Information Processing in Agriculture, Volume 8, pp. 252-277, 2016.
- Kiani, E., & Mamedov, T., ‘Identification of plant disease infection using soft-computing: Application to modern botany’, 9th International Conference on Theory and Application of Soft Computing, Computing with Words and Perception, Volume 120, pp. 893-900, 2017.
- Ali, H., Lali, M.I., Nawaz, M.Z., Sharif, M., & Saleem, B.A., ‘Symptom based automated detection of citrus diseases using color histogram and textural descriptors’, Computers and Electronics in Agriculture, Volume 138, pp. 92-104, 2017.
- Saradhambal, G., Dhivya, R., Latha, S., & Rajesh, R., ‘Plant Disease Detection and its Solution using Image Classification’, International Journal of Pure and Applied Mathematics, Volume 119, Issue 14, pp. 879-884, 2018.
Citation
Raushan Kumar, Pradeep Kumar Pandey, "Automatic Leaf Disease Detection using Polynomial Support Vector Machine," International Journal of Scientific Research in Technology & Management, Vol.5, Issue.2, pp.15-20, 2025.
Biometric Iris Recognition using Sobel Edge Detection for Secured Authentication
Abstract
Iris recognition extracts mathematical representations of the biometric patterns present in still or video images of the eye, whose iris contains complex patterns that are unique to each person and stable over time. The iris is a preferred biometric trait compared with other biometrics because of its distinctiveness and permanence. Early systems often relied on weak edge detection methods and filters, and techniques such as Canny edge detection, the Hough transform, Gabor filters, and Daugman's operator are widely used in this field. Prior work, however, shows limitations in the form of computationally complex procedures and reduced accuracy on images degraded by heavy noise, contact lenses, eyelashes, and reflections. This work proposes an iris recognition and authentication system that uses Sobel edge detection for iris feature extraction. The approach is also shown to handle noise and degradation effectively, including low resolution, specular reflections, and occluding features of the eye. The proposed system delivers improved security through accurate feature extraction.
Key-Words / Index Term: Iris, Sobel Edge Detection, Image Filtration, Noise Degradation, Feature Extraction.
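Illustrative sketch: a minimal OpenCV pipeline for the Sobel edge-extraction step described above, assuming a grayscale eye image saved as `eye.png`. The smoothing and threshold settings are illustrative assumptions, not the authors' parameters, and `cv2.Sobel` stands in for their implementation.

```python
# Hedged sketch: Sobel gradient magnitude for iris edge/feature extraction.
# "eye.png" and the smoothing/threshold parameters are illustrative assumptions.
import cv2
import numpy as np

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
if eye is None:
    raise FileNotFoundError("Provide a grayscale eye image as eye.png")

# Light Gaussian smoothing suppresses lash/reflection noise before differentiation.
smoothed = cv2.GaussianBlur(eye, (5, 5), sigmaX=1.5)

# Horizontal and vertical Sobel derivatives, combined into a gradient magnitude map.
grad_x = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
grad_y = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.magnitude(grad_x, grad_y)

# Normalize to 8-bit and threshold to keep the strongest iris/pupil edges.
edges = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, edge_mask = cv2.threshold(edges, 60, 255, cv2.THRESH_BINARY)

cv2.imwrite("iris_edges.png", edge_mask)
```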
References
- F. R. J. López, C. E. P. Beainy and O. E. U. Mendez, "Biometric iris recognition using Hough Transform," Symposium of Signals, Images and Artificial Vision - STSIVA 2013, Bogota, 2013, pp. 1-6.
- A. B. Dehkordi and S. A. R. Abu-Bakar, "Noise reduction in iris recognition using multiple thresholding," 2013 IEEE International Conference on Signal and Image Processing Applications, Melaka, 2013, pp. 140-144.
- P. Thirumurugan et al., “Iris Recognition using Wavelet Transformation Techniques,” International Journal of Computer Science and Mobile Computing, Vol. 3, Issue 1, January 2014.
- N. Kaur and M. Juneja, "A review on Iris Recognition," 2014 Recent Advances in Engineering and Computational Sciences (RAECS), Chandigarh, 2014, pp. 1-5.
- A. Khatun, A. K. M. F. Haque, S. Ahmed and M. M. Rahman, "Design and implementation of iris recognition based attendance management system," 2015 International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), Dhaka, 2015, pp. 1-6.
- M. Trokielewicz, "Iris recognition with a database of iris images obtained in visible light using smartphone camera," 2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Sendai, 2016, pp. 1-6.
- S. B. Solanke and R. R. Deshmukh, "Biometrics — Iris recognition system: A study of promising approaches for secured authentication," 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, 2016, pp. 811-814.
- N. Jagadeesh and C. M. Patil, "Iris recognition system development using MATLAB," 2017 International Conference on Computing Methodologies and Communication (ICCMC), Erode, 2017, pp. 348-353.
- Tedmontgomery, “The Iris,” 2019. [Online]. Available: http://tedmontgomery.com/the_eye/iris.html [Accessed: 11 July 2019].
- GitHub, “C Based Human Eye IRIS Segmentation Algorithm based on Daugman’s Integro-Differential Operator,” 3 Oct 2016. [Online]. Available: https://github.com/ghazi94/IRIS-Segmentation [Accessed: 11 July 2019].
- R. R. Jillela and A. Ross, "Segmenting iris images in the visible spectrum with applications in mobile biometrics," Pattern Recognition Letters, Vol. 57, pp. 4-16, 2015.
- I. Mattoo and P. Agarwal, "Iris Biometric Modality: A Review," Oriental Journal of Computer Science and Technology, Vol. 10, pp. 502-506, 2017.
- Harb, “Iris Scanning,” 21 March 2015. [Online]. Available: https://habr.com/en/post/377665/ [Accessed: 11 July 2019].
Citation
Nisha Vishwakarma, Vinod Patel, "Biometric Iris Recognition using Sobel Edge Detection for Secured Authentication," International Journal of Scientific Research in Technology & Management, Vol.5, Issue.2, pp.21-25, 2025.
