Archive Issue – Vol.5, Issue.1 (January-March 2025)
Automatic Helmet Rule Violation Detection using Deep Learning Approach
Abstract
Traffic rule violations, particularly non-compliance with helmet usage among two-wheeler riders, contribute significantly to road accidents and fatalities. Traditional monitoring methods rely heavily on manual inspection, which is both time-consuming and error-prone. Recent advances in computer vision and deep learning provide effective solutions for automatic detection of helmet rule violations. This paper presents a deep learning–based approach for detecting riders without helmets using convolutional neural networks (CNNs) and object detection models such as YOLO, Faster R-CNN, or SSD. The system processes surveillance camera footage, identifies motorcyclists, detects helmet presence, and flags violations. The proposed method enhances road safety enforcement by providing real-time, scalable, and accurate detection compared to conventional methods.
Key-Words / Index Term: Helmet Detection, Traffic Violation, Deep Learning, CNN, Object Detection, YOLO, Road Safety.
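The pipeline the abstract describes — detect motorcyclists, detect helmets, flag riders without one — can be sketched as a post-processing step over detector output. The following is a minimal illustration only, not the authors' implementation: the box format, the "upper third of the rider box is the head region" heuristic, and the overlap threshold are all assumptions.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def flag_violations(riders, helmets, min_overlap=0.1):
    """Return indices of rider boxes with no helmet box overlapping
    their assumed head region (upper third of the rider box)."""
    violations = []
    for i, (x1, y1, x2, y2) in enumerate(riders):
        head = (x1, y1, x2, y1 + (y2 - y1) / 3)
        if not any(iou(head, h) >= min_overlap for h in helmets):
            violations.append(i)
    return violations
```

In a deployed system the `riders` and `helmets` lists would come from a detector such as YOLO or Faster R-CNN run on each surveillance frame; only the flagging logic is shown here.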
References
- World Health Organization, Global Status Report on Road Safety 2018, Geneva: WHO, 2018.
- A. Kumar, R. Singh, and P. Gupta, “Helmet Rule Compliance and Road Safety in India: A Statistical Analysis,” International Journal of Traffic and Transportation Engineering, vol. 9, no. 3, pp. 112–120, 2021.
- S. Sharma and M. Jain, “A review of automatic helmet detection techniques for intelligent traffic surveillance,” Procedia Computer Science, vol. 133, pp. 1047–1054, 2018.
- N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 886–893, 2005.
- C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, pp. 273–297, 1995.
- S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017.
- J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” arXiv preprint, arXiv:1804.02767, 2018.
- G. Jocher, A. Stoken, J. Borovec, et al., “YOLOv5: A Scalable Object Detection Model,” GitHub repository, 2020.
- W. Liu et al., “SSD: Single Shot MultiBox Detector,” European Conference on Computer Vision (ECCV), pp. 21–37, 2016.
- H. Thakur and V. Verma, “Deep Learning Based Helmet Detection for Road Safety,” International Journal of Computer Applications, vol. 182, no. 35, pp. 40–45, 2019.
- P. Dwivedi and S. Biswas, “Real-time Motorcycle Rider Helmet Detection using YOLO,” International Conference on Intelligent Computing and Control Systems (ICICCS), pp. 1173–1178, 2020.
- A. Singh, R. Kumar, and M. Kaur, “Hybrid Object Detection and Classification Model for Helmet Violation Monitoring,” Multimedia Tools and Applications, vol. 80, no. 11, pp. 17023–17040, 2021.
- Kaggle, “Motorcycle Helmet Detection Dataset,” Kaggle Datasets, 2020. [Online]. Available: https://www.kaggle.com/
- Stanford AI Lab, “Stanford Cars Dataset,” 2013. [Online]. Available: https://ai.stanford.edu/~jkrause/cars/car_dataset.html
- Y. Xu et al., “Data augmentation for object detection using generative adversarial networks,” Neurocomputing, vol. 387, pp. 172–180, 2020.
- I. Goodfellow et al., “Generative Adversarial Nets,” Advances in Neural Information Processing Systems (NeurIPS), pp. 2672–2680, 2014.
- M. Everingham et al., “The Pascal Visual Object Classes (VOC) Challenge,” International Journal of Computer Vision, vol. 88, pp. 303–338, 2010.
- T. Fawcett, “An introduction to ROC analysis,” Pattern Recognition Letters, vol. 27, pp. 861–874, 2006.
- R. Padilla, S. L. Netto, and E. A. da Silva, “A Survey on Performance Metrics for Object-Detection Algorithms,” International Conference on Systems, Signals, and Image Processing (IWSSIP), pp. 237–242, 2020.
- A. Mehta and D. Singh, “Performance Evaluation of YOLOv5 vs Faster R-CNN for Helmet Violation Detection,” Journal of Computer Vision and Pattern Recognition Research, vol. 4, no. 2, pp. 88–95, 2022.
- H. Li, Z. Wu, and Y. Hu, “Edge computing-based real-time helmet detection for traffic monitoring,” IEEE Internet of Things Journal, vol. 8, no. 14, pp. 11473–11482, 2021.
- M. Zhang et al., “Transformer-Based Models for Traffic Violation Detection,” IEEE Access, vol. 9, pp. 169432–169445, 2021.
Citation
Arun Pratap Singh, Utkarsh Dubey, "Automatic Helmet Rule Violation Detection using Deep Learning Approach" International Journal of Scientific Research in Technology & Management, Vol.5, Issue.1, pp.01-05, 2025.
Automatic AI Generated Image Detection using Machine Learning
Abstract
The proliferation of generative models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and diffusion-based models has enabled the creation of highly realistic synthetic images, raising concerns in digital trust, cybersecurity, and misinformation. Automatic detection of AI-generated images has therefore become a critical research problem. Traditional forensic approaches relying on handcrafted features are insufficient to capture subtle artifacts introduced by modern generators. In this paper, we survey and propose machine learning-based frameworks for detecting AI-generated images, emphasizing convolutional neural networks (CNNs), frequency-domain analysis, and transformer-based architectures. The study includes a comprehensive discussion of benchmark datasets, preprocessing techniques, feature extraction strategies, and evaluation metrics. Experimental results demonstrate that hybrid architectures combining spatial and frequency-domain features with attention mechanisms provide robust performance across diverse generative models. Finally, we discuss current challenges, limitations, and future directions, including generalization to unseen generative models, adversarial robustness, and ethical considerations for deployment.
Key-Words / Index Term: AI-generated images, deep learning, generative adversarial networks, diffusion models, image forensics, convolutional neural networks, transformer, frequency-domain analysis, digital media authentication, deepfake detection.
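The frequency-domain analysis the abstract highlights is often implemented as an azimuthally averaged power spectrum, since GAN upsampling tends to leave periodic artifacts at high spatial frequencies. The sketch below, assuming a single-channel image as a NumPy array and an arbitrary bin count, shows one such feature extractor; it is a generic illustration, not the paper's specific method.

```python
import numpy as np

def spectral_features(img, n_bins=16):
    """Azimuthally averaged log-power spectrum of a grayscale image:
    a common frequency-domain feature for detecting synthetic images."""
    f = np.fft.fftshift(np.fft.fft2(img))        # centered 2D spectrum
    power = np.log1p(np.abs(f))                  # log magnitude
    h, w = img.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx)                 # radius from spectrum center
    bins = np.minimum((r / (r.max() + 1e-9) * n_bins).astype(int), n_bins - 1)
    return np.array([power[bins == b].mean() for b in range(n_bins)])
```

The resulting `n_bins`-dimensional vector can be fed to any classifier (SVM, small MLP) or concatenated with spatial CNN features, as in the hybrid architectures the abstract mentions.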
References
- Goodfellow, I., Pouget-Abadie, J., Mirza, M., et al. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2672–2680.
- Kingma, D. P., & Welling, M. (2014). Auto-encoding variational Bayes. International Conference on Learning Representations (ICLR).
- Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33, 6840–6851.
- Chesney, R., & Citron, D. K. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1819.
- Farid, H. (2009). Image forgery detection. IEEE Signal Processing Magazine, 26(2), 16–25.
- Lukás, J., Fridrich, J., & Goljan, M. (2006). Digital camera identification from sensor pattern noise. IEEE Transactions on Information Forensics and Security, 1(2), 205–214.
- Stamm, M. C., Wu, M., & Liu, K. J. R. (2013). Information forensics: An overview of the first decade. IEEE Access, 1, 167–200.
- Wang, T., Liu, M. Y., Zhu, J. Y., et al. (2018). High-resolution image synthesis and semantic manipulation with conditional GANs. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 8798–8807.
- Chollet, F. (2017). Xception: Deep learning with depthwise separable convolutions. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1251–1258.
- He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778.
- Dosovitskiy, A., Beyer, L., Kolesnikov, A., et al. (2021). An image is worth 16x16 words: Transformers for image recognition at scale. International Conference on Learning Representations (ICLR).
- Rössler, A., Cozzolino, D., Verdoliva, L., et al. (2019). FaceForensics++: Learning to detect manipulated facial images. IEEE International Conference on Computer Vision (ICCV), 1–11.
- Dolhansky, B., Howes, R., Pflaum, B., et al. (2019). The Deepfake Detection Challenge (DFDC) dataset. arXiv:1910.08854.
- Karras, T., Laine, S., & Aila, T. (2019). A style-based generator architecture for generative adversarial networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4401–4410.
- Yu, N., Li, X., Tan, W., & Yu, L. (2021). Generalizing AI-generated image detection to unseen GANs. IEEE Transactions on Information Forensics and Security, 16, 3954–3966.
- Gonzalez, R. C., & Woods, R. E. (2008). Digital Image Processing (3rd ed.). Pearson.
- Fridrich, J., Soukal, D., & Lukas, J. (2003). Detection of copy-move forgery in digital images. Digital Forensic Research Workshop (DFRWS), 1–6.
- Bayar, B., & Stamm, M. C. (2016). A deep learning approach to universal image manipulation detection using a new convolutional layer. ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec), 5–10.
- Wang, H., & Deng, W. (2021). Deep learning for image forensics: A survey. IEEE Transactions on Information Forensics and Security, 16, 545–567.
- Li, Y., Li, B., Liu, H., Li, J., & Lyu, S. (2020). CNN-generated images are surprisingly easy to spot… for now. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 8695–8704.
- Durall, R., Keuper, M., & Keuper, J. (2020). Unmasking deepfakes with simple features. IEEE International Conference on Computer Vision (ICCV) Workshops, 1–9.
- Lin, T. Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2017). Focal loss for dense object detection. IEEE International Conference on Computer Vision (ICCV), 2980–2988.
- Zhang, X., Wang, X., Qi, H., & Metaxas, D. (2020). Detecting GAN-generated images via saturating color channels. IEEE Transactions on Information Forensics and Security, 15, 3031–3044.
- Li, Y., Lyu, S. (2019). Exposing deepfake videos by detecting face warping artifacts. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 46–55.
- Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.
- Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 586–595.
- Heusel, M., Ramsauer, H., Unterthiner, T., et al. (2017). GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30, 6626–6637.
- Karras, T., Aittala, M., Hellsten, J., et al. (2021). Alias-free generative adversarial networks. NeurIPS, 34, 852–863.
- Kaggle. Deepfake Detection Challenge. Retrieved from https://www.kaggle.com/c/deepfake-detection-challenge
- Yang, X., Li, Y., & Lyu, S. (2021). Exposing GAN-synthesized faces using inconsistent corneal specular highlights. IEEE Transactions on Information Forensics and Security, 16, 3542–3555.
- Li, Y., Wang, X., & Lyu, S. (2021). Temporal consistency for deepfake video detection. IEEE Transactions on Information Forensics and Security, 16, 3586–3598.
Citation
Akrity Kumari, A. P. Singh, "Automatic AI Generated Image Detection using Machine Learning" International Journal of Scientific Research in Technology & Management, Vol.5, Issue.1, pp.06-12, 2025.
Design and Implementation of Visual Object Tracking with Convolutional-RPN and PP-Yolo
Abstract
Visual tracking is an important research topic in the field of computer vision. Given the size and location of the target in the first frame, expressed in x and y coordinates, the tracker must follow the object through every subsequent frame until the last one. The aim of visual object tracking is to automatically estimate the state of the object in the ensuing video frames. Visual tracking is useful for following moving objects such as a football in a match, a basketball, or birds, enabling better decision making. In artificial intelligence, visual tracking is especially challenging because of the instability of the object across frames, and conventional techniques are not efficient enough to handle such challenges; this work therefore uses machine learning techniques to track the object more efficiently. The main purpose of the system is to obtain the pattern of the object through an object classification method and then track the object accordingly. The system depends on two distinct approaches. The first is PP-YOLO, an object classification method associated with TensorFlow, which helps to classify the object more precisely. The second is C-RPN, an object tracking approach based on the patterns of the objects. The system has been challenged with various benchmark attributes such as motion blur, low resolution, background clutter, in-plane and out-of-plane rotation, out-of-view targets, occlusion, illumination variation, scale variation, and fast motion. YOLO has been designed specifically for object classification and identification, and it can identify objects in real time with a high level of precision and a high confidence score.
These components are distributed as precompiled libraries generally used in a Python environment with a good level of optimization. Object detection is a computer vision technique by which a software system can recognize, locate, and follow an object within a frame.
Key-Words / Index Term: Visual Tracking, Object Detection, C-RPN, PP-Yolo, Object Tracking, OTB50, OTB100, Feature Extraction, Pattern Recognition.
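The detect-then-track handoff described above can be reduced to its simplest form: in each new frame, associate the previous track box with the current detection that overlaps it most. This is a toy stand-in only; a real PP-YOLO + C-RPN system learns appearance features, whereas this sketch (with an assumed box format and threshold) associates boxes purely by overlap.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def track_by_iou(prev_box, detections, min_iou=0.3):
    """Continue the track with the best-overlapping current detection,
    or return None if the target is lost (e.g. occlusion, out of view)."""
    best, best_iou = None, min_iou
    for d in detections:
        v = iou(prev_box, d)
        if v > best_iou:
            best, best_iou = d, v
    return best
```

Returning `None` on no sufficient overlap is precisely where the benchmark attributes listed in the abstract (occlusion, fast motion, out-of-view targets) break naive trackers and motivate learned ones.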
References
- Peter Mountney, Danail Stoyanov & Guang-Zhong Yang (2010). "Three-Dimensional Tissue Deformation Recovery and Tracking: Introducing techniques based on laparoscopic or endoscopic images." IEEE Signal Processing Magazine, 27(4), 14–24. doi:10.1109/MSP.2010.936728.
- Lyudmila Mihaylova, Paul Brasnett, Nishan Canagarajan & David Bull (2007). Object Tracking by Particle Filtering Techniques in Video Sequences. In: Advances and Challenges in Multisensor Data and Information. NATO Security Through Science Series, 8, Netherlands: IOS Press, pp. 260–268. ISBN 978-1-58603-727-7.
- VOT Challenges, Datasets, 2015. https://www.votchallenge.net/vot2016/dataset.html, Accessed: 13-Aug-2023.
- Yang, L., Zhou, H., Yuan, G., Xia, M., Chen, D., Shi, Z., Chen, E. (2023). SiamUT: Siamese Unsymmetrical Transformer-like Tracking. Electronics, 12, 3133. doi:10.3390/electronics12143133.
- H. Li, S. Wu, S. Huang, K. Lam & X. Xing (2019). Deep Motion-Appearance Convolutions for Robust Visual Tracking. IEEE Access, 7, 180451–180466. doi:10.1109/ACCESS.2019.2958405.
- Linyu Zheng, Ming Tang & Jinqiao Wang (2018). Learning Robust Gaussian Process Regression for Visual Tracking. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 1219–1225. doi:10.24963/ijcai.2018/170.
- Martin Danelljan, Luc Van Gool & Radu Timofte (2020). Probabilistic Regression for Visual Tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7183–7192.
- S. Yun, J. Choi, Y. Yoo, K. Yun & J. Y. Choi (2018). Action-Driven Visual Object Tracking With Deep Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems, 29(6), 2239–2252. doi:10.1109/TNNLS.2018.2801826.
- K. Chen & W. Tao (2018). Convolutional Regression for Visual Tracking. IEEE Transactions on Image Processing, 27(7), 3611–3620. doi:10.1109/TIP.2018.2819362.
- Zhang, D., Maei, H., Wang, X., & Wang, Y.-F. (2017). Deep Reinforcement Learning for Visual Object Tracking in Videos.
- J. F. Henriques, R. Caseiro, P. Martins & J. Batista (2015). High-Speed Tracking with Kernelized Correlation Filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3), 583–596. doi:10.1109/TPAMI.2014.2345390.
- D. S. Bolme, J. R. Beveridge, B. A. Draper & Y. M. Lui (2010). Visual object tracking using adaptive correlation filters. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2544–2550. doi:10.1109/CVPR.2010.5539960.
- Debi Dogra, Vishal Badri, Arun Majumdar, Shamik Sural, Jayanta Mukherjee, Suchandra Mukherjee & Arun Singh (2014). Video analysis of Hammersmith lateral tilting examination using Kalman filter guided multi-path tracking. Medical & Biological Engineering & Computing, 52. doi:10.1007/s11517-014-1178-2.
- S. Yun, J. Choi, Y. Yoo, K. Yun & J. Y. Choi (2017). Action-Decision Networks for Visual Tracking with Deep Reinforcement Learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1349–1358. doi:10.1109/CVPR.2017.148.
- Linyu Zheng, Ming Tang & Jinqiao Wang (2018). Learning robust Gaussian process regression for visual tracking. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI), AAAI Press, 1219–1225.
- B. Li, W. Wu, Q. Wang, F. Zhang, J. Xing & J. Yan (2019). SiamRPN++: Evolution of Siamese Visual Tracking With Very Deep Networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4277–4286. doi:10.1109/CVPR.2019.00441.
- Xin Li, Qiao Liu, Nana Fan, Zikun Zhou, Zhenyu He & Xiao-yuan Jing (2020). Dual-regression model for visual tracking. Neural Networks, 132, 364–374. doi:10.1016/j.neunet.2020.09.011.
- B. Zhang, X. Zhang & J. Qi (2015). Support vector regression learning based uncalibrated visual servoing control for 3D motion tracking. In 34th Chinese Control Conference (CCC), 8208–8213. doi:10.1109/ChiCC.2015.7260942.
- T. Wang & W. Zhang (2016). The visual-based robust model predictive control for two-DOF video tracking system. In Chinese Control and Decision Conference (CCDC), 3743–3747. doi:10.1109/CCDC.2016.7531635.
- Djelal, N., Saadia, N., & Ramdane-Cherif, A. (2012). Target tracking based on SURF and image based visual servoing. In IEEE 2nd International Conference on Communications, Computing and Control Applications (CCCA), 1–5. doi:10.1109/ccca.2012.6417913.
- C. H. Li & T. I. James Tsay (2018). Robust Visual Tracking in Cluttered Environment Using an Active Contour Method. In 57th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), 53–58. doi:10.23919/SICE.2018.8492705.
- Q. Guo, W. Feng, R. Gao, Y. Liu & S. Wang (2021). Exploring the Effects of Blur and Deblurring to Visual Object Tracking. IEEE Transactions on Image Processing, 30, 1812–1824. doi:10.1109/TIP.2020.3045630.
- H. Li & Y. W (2015). Object of interest tracking based on visual saliency and feature points matching. In International Conference on Wireless, Mobile and Multi-Media, 201–205.
- L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi & P. H. Torr (2016). Fully-convolutional siamese networks for object tracking. In ECCV Workshops.
- G. Bhat, J. Johnander, M. Danelljan, F. Shahbaz Khan & M. Felsberg (2018). Unveiling the power of deep tracking. In ECCV, September 2018.
- D. Bolme, J. Beveridge, B. Draper & Y. Lui (2010). Visual object tracking using adaptive correlation filters. In CVPR.
- L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff & H. Adam (2018). Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV.
- M. Danelljan, G. Bhat, F. Shahbaz Khan & M. Felsberg (2017). Eco: Efficient convolution operators for tracking. In CVPR.
Citation
Amit Saxena, Sitesh Kumar Sinha, Sanjeev Kumar Gupta, "Design and Implementation of Visual Object Tracking with Convolutional-RPN and PP-Yolo" International Journal of Scientific Research in Technology & Management, Vol.5, Issue.1, pp.13-22, 2025.
A Review on Soil Moisture Detection and Plant Watering System in Smart Agriculture
Abstract
A soil moisture sensor is a device that detects the moisture content of the soil and, with a suitable system, allows water to be supplied depending on that moisture content. This permits the flow of water to the plants to be started or stopped by an automated irrigation system. Nowadays water is becoming very precious owing to the scarcity of clean water for domestic purposes, including irrigation. To optimize the use of water, systems that foster water conservation are the need of the hour. Likewise, automation in agricultural systems is needed to optimize water consumption, reduce water wastage, and bring modern technology into farming. A typical device comprises an Arduino board, the microcontroller that activates the water pump and supplies water to the plants through a rotating platform sprinkler. The intention of this paper is to review various implemented systems: several have been built, usually based on the Arduino UNO and various IoT devices, and they vary in cost.
Key-Words / Index Term: Soil Moisture Detection, Plant Watering System, Arduino UNO, Smart Agriculture, Relay, Motor.
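The control logic common to the reviewed systems is a moisture threshold with hysteresis: switch the pump on when the soil reads dry, and off only once it reads wet enough, so the relay does not chatter near a single cutoff. On real hardware this runs as Arduino C against `analogRead()`; the sketch below expresses the same rule in Python, and the ADC threshold values are illustrative assumptions, not figures from any reviewed paper.

```python
def pump_command(moisture, pump_on, dry=300, wet=600):
    """Hysteresis control for the irrigation relay.

    moisture : raw sensor reading (e.g. a 10-bit ADC value, 0-1023)
    pump_on  : current relay state
    Returns the new relay state: on below `dry`, off above `wet`,
    unchanged in between (the hysteresis band).
    """
    if moisture < dry:
        return True
    if moisture > wet:
        return False
    return pump_on
```

Calling this once per sensor poll keeps the pump running until the soil actually rewets, rather than toggling every time the reading crosses a single threshold.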
References
- M. N. Umeh, N. N. Mbeledogu, S. O. Okafor, F. C. Agba (2015). Intelligent microcontroller-based irrigation system with sensors. American Journal of Computer Science and Engineering, 2(1), 1–4.
- A. Algeeb, A. Albagul, A. Asseni, O. Khalifa, O. S. Jomah (2010). Design and Fabrication of an Intelligent Irrigation Control System. In 14th WSEAS International Conference on Systems, Latest Trends on Systems, Volume II, 370–375.
- B. N. Getu, N. A. Hamad, H. A. Attia (2015). Remote Controlling of an Agricultural Pump System Based on the Dual Tone Multi-Frequency (DTMF) Technique. Journal of Engineering Science & Technology (JESTEC), 10(10).
- H. A. Attia, B. N. Getu, N. A. Hamad (2015). Experimental Validation of DTMF Decoder Electronic Circuit to be used for Remote Controlling of an Agricultural Pump System. In International Conference on Electrical and Bio-medical Engineering, Clean Energy and Green Computing (EBECEGC2015), 52–57.
- N. D. Kumar, S. Pramod & C. Sravani (2013). Intelligent Irrigation System. International Journal of Agricultural Science and Research (IJASR), 3(3), 23–30.
- S. Devabhaktuni, D. V. Pushpa Latha (2013). Soil moisture and temperature sensor based intelligent irrigation water pump controlling system using PIC 16F72 Microcontroller. International Journal of Emerging Trends in Engineering and Development, 4(3), 101–107.
- V. S. Kuncham, N. V. Rao (2014). Sensors for Managing Water Resources in Agriculture. IOSR Journal of Electronics and Communication Engineering (IOSR-JECE), 9(2), 145–163.
- C. C. Shock, F. X. Wang (2011). Soil Water Tension, a Powerful Measurement for Productivity and Stewardship. HortScience, 46(2), 178–185.
- Hackster.io. Automatic Watering System for My Plants. https://www.hackster.io/lc_lab/automatic-watering-system-for-my-plants-b73442
- Watermark 200SS soil moisture sensor specification manual. Available at http://www.irrometer.com/sensors.html
- B. N. Getu & H. A. Attia (2015). Automatic control of agricultural pumps based on soil moisture sensing. AFRICON 2015, 1–5. doi:10.1109/AFRCON.2015.7332052
- Syed Ahmed, B. Kovela, V. Gunjan (2020). IoT Based Automatic Plant Watering System Through Soil Moisture Sensing—A Technique to Support Farmers’ Cultivation in Rural India. doi:10.1007/978-981-15-3125-5_28
- Nu, Yin; Lwin, San; Maw, Win (2019). Automatic Plant Watering System using Arduino UNO for University Park. International Journal of Trend in Scientific Research and Development, 3, 902–906. doi:10.31142/ijtsrd23714
- S. Bhardwaj, S. Dhir & M. Hooda (2018). Automatic Plant Watering System using IoT. In Second International Conference on Green Computing and Internet of Things (ICGCIoT), 659–663. doi:10.1109/ICGCIoT.2018.8753100
- G. Boopathi Raja, S. Purushotaman, K. Roshni, S. Sateesh Kumar, B. Ebika (2021). IoT Based Automatic Soil Moisturizer. International Journal of Engineering Research & Technology (IJERT), ICRADL Conference Proceedings.
- Siva Kotni, G. Raj, Bagubali Annasamy, Kishore Krishnan (2019). Smart watering of plants. 1–4. doi:10.1109/ViTECoN.2019.8899371
- M. R. Kiran Gowd, Sarah Mahin, Narmada K. L., Mohammed Adnan B. I., Dr. Sridhar S. (2020). Automatic Irrigation System Using Soil Moisture Sensor. Institute of Scholars (InSc). https://ssrn.com/abstract=3669704
- Bishnu Deo Kumar, Prachi Srivatsa, Reetika Agarwal, Vanya Tiwari (2011). Microcontroller Based Automatic Plant Irrigation System. International Research Journal of Engineering and Technology (IRJET), 4(5), 94–96.
- Pavithra D. S, M. S. Srinath (2019). GSM based Automatic Irrigation Control System for Efficient Use of Resources and Crop Planning with Android Mobile. IOSR Journal.
- Karan Kansara, Vishal Zaveri, Shreyans Shah, Sandip Delwadkar, Kaushal Jani (2015). Sensor-based Automated Irrigation System with IoT. International Journal of Computer Science and Information Technology, 6(6).
- Joaquin Gutierrez, Juan Francisco Villa-Medina, Alejandra Nieto-Garibay, Miguel Angel PortaGandara (2013). Automated Irrigation System Using a Wireless Sensor Network and GPRS Module. IEEE Transactions on Instrumentation and Measurement.
- Vandana Dubey, Nilesh Dubey, Shailesh Singh Chouchan (2013). Wireless Sensor Network based Remote Irrigation Control System and Automation using DTMF Code. IEEE Transaction on Communication Systems and Network Technologies.
- G. Nisha & J. Megala (2014). Wireless Sensor Network Based Automated Irrigation and Crop Field. In Sixth International Conference on Advanced Computing (ICoAC).
- Kavianand G, Nivas V M, Kiruthika R, Lalitha S (2016). Automated drip Irrigation machine. In IEEE International Conference on Technological Innovations in ICT for Agriculture and Rural Development.
- A. Nayak, G. Prakash & A. Rao (2014). Harnessing wind power to power sensor networks for agriculture. In Advances in Energy Conversion Technologies (ICAECT), 221–226.
- J. Gutiérrez, J. F. Villa-Medina, A. NietoGaribay, M. Á Porta-Gándara (2014). Automated Irrigation System Using a Wireless Sensor Network and GPRS Module. IEEE Transactions on Instrumentation and Measurement, 63(1), 166–176.
Citation
Vijya Raje Laxmi, Minal Saxena, "A Review on Soil Moisture Detection and Plant Watering System in Smart Agriculture" International Journal of Scientific Research in Technology & Management, Vol.5, Issue.1, pp.23-28, 2025.
