Archive Issue – Vol.2, Issue.3 (July-September 2022)
Biometric Finger Knuckleprint based Authentication System using Sobel Edge Detection & Emboss
Abstract
Researchers continually look for new directions, and work in the field of biometrics has identified the finger knuckle print as a trait with distinct features. Established biometric modalities include the fingerprint, iris, and palm print, and the knuckle print has now joined them. The knuckle surface contains rich texture that is distinct for each finger and carries enough information to differentiate individuals uniquely. The proposed system acquires a knuckle image, processes it for data acquisition, and generates a code map, a template that is stored in the database and compared against the code maps produced for incoming samples. The system extracts information from the knuckle image with high precision using several filters and image-enhancement techniques, including Gabor filters, spatial filters, and the Sobel operator, which in turn support SURF (Speeded Up Robust Features) extraction. The proposed system exhibits a low error rate with zero false recognitions. If a system has a non-zero false acceptance rate, its precision departs from that of an ideal system; a system should have zero false acceptances and reliably reject impostor attempts while still accepting genuine users. Precision depends on high-quality feature extraction, which the proposed system obtains through the image-enhancement techniques described above.
Key-Words / Index Term: Knuckle Print, Sobel Edge Detection, SURF, Gabor Filter, Biometric and Binary Localization.
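A minimal sketch of the enhancement-plus-feature pipeline the abstract describes, written with OpenCV. The file name and all parameter values are illustrative assumptions, not taken from the paper, and ORB is used as a freely available stand-in where the paper uses SURF (SURF requires an opencv-contrib build with the non-free modules enabled).

```python
# Sketch: Gabor enhancement -> Sobel edges -> emboss -> keypoint descriptors.
import cv2
import numpy as np

img = cv2.imread("knuckle.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file

# Gabor enhancement: respond to the dominant knuckle texture orientation.
gabor = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                           lambd=10.0, gamma=0.5)
enhanced = cv2.filter2D(img, cv2.CV_32F, gabor)
enhanced = cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Sobel edge detection on the enhanced texture.
gx = cv2.Sobel(enhanced, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(enhanced, cv2.CV_32F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# Emboss kernel to accentuate ridge relief before keypoint description.
emboss_kernel = np.array([[-2, -1, 0],
                          [-1,  1, 1],
                          [ 0,  1, 2]], dtype=np.float32)
embossed = cv2.filter2D(edges, -1, emboss_kernel)

# Keypoint descriptors stand in for the "code map" template stored for matching.
detector = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = detector.detectAndCompute(embossed, None)
print(len(keypoints), "keypoints extracted for the template")
```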
References
- K. Usha and M. Ezhilarasan, “Fusion of geometric and texture features for finger knuckle surface recognition,” Pattern Recognition Letters, vol. 55, no. 1, pp. 683–697, Mar. 2016.
- N. Deogaonkar, H. Kahar, B. Parab, S. Rajpure, and D. Bhosle, “Biometric authentication using finger knuckle print,” IOSR Journal of VLSI and Signal Processing (IOSR-JVSP), vol. 6, no. 1, pp. 55–59, Jan.–Feb. 2016.
- A. Amraoui, Y. Fakhri, and M. Ait Kerroum, “Finger knuckle print recognition system using compound local binary pattern,” in Proc. 3rd Int. Conf. Electrical and Information Technologies (ICEIT), IEEE, 2017.
- J. Kim, K. Oh, A. B.-J. Teoh, and K.-A. Toh, “Finger-knuckle-print for identity verification based on difference images,” in Proc. IEEE 11th Conf. Industrial Electronics and Applications (ICIEA), 2016, pp. 1031–1036.
- V. Arulalan and K. S. Joseph, “Score level fusion of iris and finger knuckle print,” in Proc. 10th Int. Conf. Intelligent Systems and Control (ISCO), IEEE, 2016, pp. 1–6.
- F. K. Nezhadian and S. Rashidi, “Inner-knuckle-print for human authentication by using ring and middle fingers,” in Proc. Int. Conf. Signal Processing and Intelligent Systems (ICSPIS), Tehran, Iran, Dec. 2016, pp. 1–5.
- E. O. Rodrigues, T. M. Porcino, A. Conci, and A. C. Silvah, “A simple approach for biometrics: Finger-knuckle prints recognition based on a Sobel filter and similarity measures,” in Proc. Int. Conf. Systems, Signals and Image Processing (IWSSIP), Bratislava, 2016, pp. 1–4.
- W. El-Tarhouni, L. Boubchir, and A. Bouridane, “Finger-knuckle-print recognition using dynamic thresholds completed local binary pattern descriptor,” in Proc. 39th Int. Conf. Telecommunications and Signal Processing (TSP), IEEE, 2016, pp. 1–5.
- I. S. Oveisi and M. Modarresi, “A feature level multimodal approach for palmprint and knuckleprint recognition using AdaBoost classifier,” in Proc. Int. Conf. Computing and Communication (IEMCON), Vancouver, BC, 2015, pp. 1–7.
- Steve Eddins, “Image binarization: new R2016a functions,” MathWorks Blogs, May 16, 2016. [Online]. Available: https://blogs.mathworks.com/steve/2016/05/16/image-binarization-new-r2016a-functions/
- D. Zhang and M. S. Kamel, “An analysis of iriscode,” IEEE Trans. Image Processing, vol. 19, no. 2, pp. 522–532, Feb. 2010.
- S. Agarwal and P. Gupta, “Identification of human through palmprint: A review,” Int. J. Adv. Res. Comput. Eng. Technol. (IJARCET), vol. 1, no. 10, pp. 1–19, 2012.
- X.-Y. Jing and D. Zhang, “A face and palmprint recognition approach based on discriminant DCT feature extraction,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 34, no. 6, pp. 2405–2415, Dec. 2004.
- D. Zhang, Z. Guo, G. Lu, L. Zhang, and W. Zuo, “An online system of multispectral palmprint verification,” IEEE Trans. Instrum. Meas., vol. 59, no. 2, pp. 480–490, Feb. 2010.
- A. Nigam and P. Gupta, “Finger-knuckle-print ROI extraction using curvature Gabor filter for human authentication,” in Proc. Int. Conf. Pattern Recognition Applications and Methods (ICPRAM), 2016, pp. 364–371. doi:10.5220/0005724103640371.
- W. K. Kong, D. Zhang, and W. Li, “Palmprint feature extraction using 2-D Gabor filters,” Pattern Recognition, vol. 36, no. 10, pp. 2339–2347, 2003.
- D. I. Devi and B. T. G. Sampantham, “An efficient security system based on Gabor feature detector,” in Proc. Int. Conf. Control, Automation, Communication and Energy Conservation (INCACEC), IEEE, 2009, pp. 1–6.
- W. Li, D. Zhang, and Z. Xu, “Image alignment based on invariant features for palmprint identification,” Signal Processing: Image Communication, vol. 18, no. 5, pp. 373–379, 2003.
- W. Jia, R.-X. Hu, J. Gui, Y. Zhao, and X.-M. Ren, “Palmprint recognition across different devices,” Sensors, vol. 12, no. 6, pp. 7938–7964, Jun. 2012.
- D. Zhang, V. Kanhangad, N. Luo, and A. Kumar, “Robust palmprint verification using 2D and 3D features,” Pattern Recognition, vol. 43, no. 1, pp. 358–368, Jan. 2010.
- K. Krishneswari and S. Arumugam, “A review on palm print verification system,” Int. J. Comput. Inf. Syst. Ind. Manage. Appl. (IJCISIM), vol. 2, pp. 113–120, 2010.
- Z. Guo, W. Zuo, L. Zhang, and D. Zhang, “Palmprint verification using consistent orientation coding,” in Proc. IEEE Int. Conf. Image Processing (ICIP), 2009, pp. 1985–1988.
- W. Li, B. Zhang, L. Zhang, and J. Yan, “Principal line-based alignment refinement for palmprint recognition,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 42, no. 6, pp. 1491–1499, Nov. 2012.
- M. Mu, Q. Ruan, and Y. Shen, “Palmprint recognition based on discriminative local binary patterns statistic feature,” in Proc. Int. Conf. Signal Acquisition and Processing (ICSAP), 2010, pp. 193–197.
- S. S. Khot, V. A. Mane, and K. P. Paradeshi, “Real time palm print identification technique – effective biometric identification technique,” Int. J. Societal Applications of Computer Science, vol. 1, no. 1, Nov. 2012.
Citation
Sonali Patel, Arun Jhapate, "Biometric Finger Knuckleprint based Authentication System using Sobel Edge Detection & Emboss," International Journal of Scientific Research in Technology & Management, Vol.2, Issue.3, pp.1-6, 2022.
Tongue Peerless Pattern Recognition for Procreating Biometric Regime using Prewitt & Emboss Extraction
Abstract
In today's digital world, protection is needed against identity fraud, which poses a serious threat to society. Humans can be identified on the basis of physiological parameters, which form the core of any biometric authentication system. Established biometric parameters include the fingerprint, iris, knuckle, and palm, and the tongue has now been added to them. The tongue carries labyrinthine patterns that are unique and cannot easily be forged; its features are genetically independent, and no two tongues share the same characteristics. Very little research has been done in this field, and it is still at a preliminary stage. The tongue is a new, auxiliary biometric unit whose impression (for example, an alginate cast) can be useful for forensic identification. It has been medically established that the tongue has unique features, which may also vary with gender. Tongue biometrics can be applied in the banking sector, forensic examination, and many other areas. Its morphological characteristics can be observed from digital photography. The proposed system implements tongue biometrics using Prewitt edge detection combined with emboss kernel filtering: the Prewitt operator exposes the predominant features of the tongue surface, and embossing highlights the texture effectively for a better recognition rate.
Key-Words / Index Term: Tongue Biometric, Labyrinthine Patterns, Prewitt Edge Detection, Emboss, Kernel Method, Physiological Parameters.
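A minimal sketch of the two operations named in the abstract, Prewitt edge detection followed by emboss filtering. OpenCV has no built-in Prewitt operator, so the standard Prewitt kernels are applied with filter2D; the file names and kernel values are illustrative assumptions.

```python
# Sketch: Prewitt gradients -> magnitude -> emboss filtering of a tongue image.
import cv2
import numpy as np

tongue = cv2.imread("tongue.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file

# Prewitt kernels for horizontal and vertical gradients.
prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=np.float32)
prewitt_y = prewitt_x.T

gx = cv2.filter2D(tongue, cv2.CV_32F, prewitt_x)
gy = cv2.filter2D(tongue, cv2.CV_32F, prewitt_y)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# Emboss kernel highlights the labyrinthine texture for matching.
emboss_kernel = np.array([[-2, -1, 0],
                          [-1,  1, 1],
                          [ 0,  1, 2]], dtype=np.float32)
embossed = cv2.filter2D(edges, -1, emboss_kernel)

cv2.imwrite("tongue_features.png", embossed)
```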
References
- Jeddy N., Radhika T., and Nithya S., “Tongue prints in biometric authentication: A pilot study,” J Oral Maxillofac Pathol, vol. 21, pp. 176–179, 2017.
- Z. Liu, H. Wang, W. Jiang, and H. Zhuang, “Tongue verification with manifold learning,” in Proc. 7th Int. Conf. Natural Computation, 2011.
- R. Naaz, S. Yadav, and M. Diwakar, “Tongue image extraction technique from face and its application in public use system (Banking),” in Proc. Int. Conf. Communication Systems and Network Technologies, 2012.
- D. Zhang, Z. Liu, J. Yan, and P. Shi, “Tongue-print: A novel biometrics pattern,” in Advances in Biometrics (ICB 2007), Lecture Notes in Computer Science, vol. 4642. Springer, Berlin, Heidelberg, 2007.
- M. V. C. Caya, J. P. H. Durias, N. B. Linsangan, and W.-Y. Chung, “Recognition of tongue print biometrics using binary robust independent elementary features,” in Proc. IEEE 9th Int. Conf. Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM), 2017.
- M. Godbole, B. Narang, S. Palaskar, S. Patil, and A. R. Bartake, “Tongue scanning as a biometric tool: A review article,” Int. J. Health Sci. Res., vol. 10, no. 4, Apr. 2020.
- V. Venkatesh, S. Kamath, N. Hasbullah, N. Mutalib, M. Nazeri, A. Putera, M. Tharmaseelan, J. Paula, and S. Yi, “A preliminary study of tongue prints for biometric authentication,” Shiraz E-Medical Journal, In Press, 2019. doi: 10.5812/semj.96173.
- Y. Xin, Y. Cao, Z. Liu, Y. Chen, L. Cui, Y. Zhu, … M. Wang, “Automatic tongue verification based on appearance manifold learning in image sequences for the Internet of Medical Things platform,” IEEE Access, pp. 1–1, 2018.
- S. Suryadevara, R. Naaz, S. Shweta, S. Kapoor, and A. Sharma, “Visual cryptography improvises the security of tongue as a biometric in banking system,” in Proc. 2nd Int. Conf. Computer and Communication Technology (ICCCT), 2011.
Citation
Arun Pratap Singh, Manish Manoria, Sunil Joshi, Sanjay Kumar Sharma, Amit Saxena, Utkarsh Dubey, "Tongue Peerless Pattern Recognition for Procreating Biometric Regime using Prewitt & Emboss Extraction," International Journal of Scientific Research in Technology & Management, Vol.2, Issue.3, pp.7-11, 2022.
A Review on Brain Tumor Classification using Distinct Approaches
Abstract
This review paper offers a thorough analysis of methods for detecting brain tumors, emphasizing the vital role that an early and precise diagnosis plays in improving patient outcomes. Modern techniques for detecting brain tumors are crucial given their rising occurrence worldwide. The study examines conventional imaging methods, highlighting the benefits and drawbacks of computed tomography (CT) and magnetic resonance imaging (MRI). It also explores the transformative effects of machine learning and deep learning techniques, especially convolutional neural networks (CNNs), on diagnostic precision. Hybrid models that combine sophisticated algorithms with conventional imaging show encouraging performance in segmentation and classification tasks. The paper also addresses the difficulties caused by limited data availability and variability in imaging techniques, as well as the importance of histological testing in confirming tumor types and grades. A critical analysis of the measures used to evaluate detection performance offers insight into the effectiveness of different approaches. Finally, the study discusses emerging developments and avenues for future research, such as multimodal imaging and personalized medicine, which have the potential to enhance detection capabilities even further. By synthesizing the most recent findings, this review aims to serve as a useful resource for researchers and clinicians in the continuing search for more effective brain tumor detection techniques.
Key-Words / Index Term: CNN, Support Vector Machine, Brain Tumor, Segmentation, Cell Classification, Malignant, Benign, MRI, Brain Cells.
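A minimal CNN sketch of the kind of classifier surveyed in this review, written with tensorflow.keras. The input size, class count, and layer widths are illustrative assumptions and do not reproduce any specific model from the cited papers.

```python
# Sketch: small convolutional classifier for single-channel MRI slices.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_tumor_cnn(input_shape=(128, 128, 1), num_classes=2):
    """Toy CNN for illustration only; sizes are assumed, not from the review."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),  # e.g. benign vs. malignant
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_tumor_cnn()
model.summary()
```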
References
- Ostrom, Q. T., Gittleman, H., Liao, P., et al. (2020). “CBTRUS Statistical Report: Primary Brain and Other Central Nervous System Tumors diagnosed in the United States in 2013-2017.” Neuro-Oncology, 22(suppl_1), iv1–iv96.
- Smith, A. B., et al. (2019). “Advancements in MRI for Brain Tumor Detection.” Journal of Neuroimaging, 29(2), 123–135.
- Brown, T. C., et al. (2018). “The Role of Histopathology in Brain Tumor Diagnosis.” Pathology Insights, 12(1), 45–58.
- Zhang, Y., et al. (2021). “Machine Learning in Brain Tumor Detection.” Artificial Intelligence in Medicine, 34(4), 256–269.
- Gupta, P., et al. (2022). “Deep Learning for Brain Tumor Detection: A Review.” IEEE Transactions on Medical Imaging, 41(1), 89–103.
- Al-Badarneh, A., Najadat, H., & Al-Raziqi, A. (2012). “A classifier to detect tumor disease in MRI brain images.” ASONAM, 784–787. doi: 10.1109/ASONAM.2012.142
- Louis, D. N., Perry, A., Reifenberger, G., et al. (2016). “The World Health Organization classification of tumors of the central nervous system: a summary.” Acta Neuropathologica, 131(6), 803–820.
- Hoffmann, C., et al. (2021). “Primary and Secondary Brain Tumors: Clinical Characteristics and Treatment.” Journal of Neuro-Oncology, 154(3), 365–373.
- Wang, Z., et al. (2021). “CT Imaging for Brain Tumors: A Review.” Clinical Radiology, 76(3), 176–182.
- Gurbina, M., Lascu, M., & Lascu, D. (2019). “Tumor detection and classification of MRI brain images using different wavelet transforms and support vector machines.” Proc. 42nd Int. Conf. Telecommunications and Signal Processing (TSP), 505–508. doi: 10.1109/TSP.2019.8769040
- Jemimma, T. A., & Vetharaj, Y. J. (2018). “Watershed algorithm based DAPP features for brain tumor segmentation and classification.” Proc. Int. Conf. Smart Systems and Inventive Technology (ICSSIT), 155–158. doi: 10.1109/ICSSIT.2018.8748436
- Lavanyadevi, R., Machakowsalya, M., Nivethitha, J., & Niranjil, A. K. (2017). “Brain tumor classification and segmentation in MRI images using PNN.” Proc. IEEE ICEICE, 1–6. doi: 10.1109/ICEICE.2017.8191888
- Zaw, H. T., Maneerat, N., & Win, K. Y. (2019). “Brain tumor detection based on Naïve Bayes classification.” Proc. IEEE ICEAST, 1–4. doi: 10.1109/ICEAST.2019.8802562
- Ezhilarasi, R., & Varalakshmi, P. (2018). “Tumor detection in the brain using Faster R-CNN.” Proc. 2nd Int. Conf. I-SMAC, 388–392.
- Rao, L. J., Challa, R., Sudarsa, D., Naresh, C., & Basha, C. Z. (2020). “Enhanced automatic classification of brain tumours with FCM and CNN.” Proc. 3rd ICSSIT, 1233–1237. doi: 10.1109/ICSSIT48917.2020.9214199
- Choi, C., et al. (2020). “Advanced MRI Techniques for Brain Tumor Characterization.” Neuro-Oncology, 22(5), 651–661.
- Gupta, M., Sharma, S. K., & Sampada, G. C. (2023). “Classification of brain tumor images using CNN.” Computational Intelligence and Neuroscience, Article ID 2002855. doi: 10.1155/2023/2002855
- Hussain, M., et al. (2021). “Comparative Analysis of CNN Architectures for Brain Tumor Detection.” Journal of Medical Systems, 45(7), 1–12.
- Isensee, F., et al. (2017). “Automatic Brain Tumor Segmentation with a 3D U-Net Convolutional Neural Network.” MICCAI, LNCS 10435, 159–166.
- Patel, A., et al. (2023). “Hybrid Imaging Techniques for Enhanced Brain Tumor Detection.” Journal of Medical Imaging, 10(1), 25–36.
- Tiwari, R. K., et al. (2021). “Challenges in Deep Learning for Brain Tumor Detection.” Frontiers in Oncology, 11, 585492.
- Yamashita, R., et al. (2018). “Convolutional Neural Networks: An Overview and Applications in Medical Imaging.” Computerized Medical Imaging and Graphics, 66, 1–12.
- Cortes, C., & Vapnik, V. (1995). “Support-vector networks.” Machine Learning, 20(3), 273–297.
- Gupta, P., et al. (2020). “Wavelet-based feature extraction and SVM for brain tumor detection.” Biomedical Signal Processing and Control, 59, 101897.
- Sadeghi, F., et al. (2021). “Multi-center evaluation of SVM for brain tumor classification.” Artificial Intelligence in Medicine, 113, 101983.
- Wang, H., et al. (2022). “Hybrid SVM with genetic algorithm for glioma subtype classification.” Journal of Biomedical Informatics, 126, 103984.
- Khosravi, P., et al. (2020). “Parameter optimization of SVM for brain tumor classification.” Computer Methods and Programs in Biomedicine, 194, 105626.
- Vishwanathan, S. V. N., et al. (2010). “Support vector machines with multiple classes.” Machine Learning, 80(1), 73–100.
- Akin, O., et al. (2019). “MRI-based classification of brain tumors using support vector machines.” Expert Systems with Applications, 122, 67–76.
Citation
Arun Pratap Singh, Sanjay Kumar Sharma, "A Review on Brain Tumor Classification using Distinct Approaches," International Journal of Scientific Research in Technology & Management, Vol.2, Issue.3, pp.12-17, 2022.
Underwater Image Restoration using Machine Learning Algorithm
Abstract
This work presents a comprehensive underwater image processing system designed for deep-sea research, with the goal of mitigating the negative impacts of dynamic interference and water conditions. The method reconstructs precise underwater maps and improves image quality through a multi-stage process. The dark channel prior and an upgraded grey-world method are first used to provide contrast enhancement and color correction while accounting for underwater distortion. Dynamic interference, such as moving objects or disturbances, is then located and eliminated to guarantee the accuracy of the reconstructed image. To address the blank areas left behind after interference removal, an improved total variation model balances resolution preservation with completeness, ensuring that no important information is lost. Finally, super-resolution is achieved with an enhanced back-propagation network, which sharpens image details.
Key-Words / Index Term: Color Balance, Histogram Stretching, Fusion Algorithm, Underwater Images, Contrast Optimization.
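A minimal sketch of the first stage described in the abstract: grey-world color correction followed by a basic dark-channel-prior restoration. The later stages (transmission refinement, dynamic-interference removal, total variation inpainting, super-resolution) are omitted, and all parameter values and file names are illustrative assumptions.

```python
# Sketch: grey-world colour correction + basic dark-channel-prior recovery.
import cv2
import numpy as np

def gray_world(img):
    """Scale each channel so its mean matches the global mean intensity."""
    img = img.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)
    img *= means.mean() / (means + 1e-6)
    return np.clip(img, 0, 255).astype(np.uint8)

def dark_channel_restore(img, patch=15, omega=0.9, t0=0.1):
    """Basic dark-channel-prior recovery without transmission refinement."""
    img_f = img.astype(np.float32) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(img_f.min(axis=2), kernel)           # dark channel
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    n_top = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n_top:], dark.shape)
    A = img_f[idx].mean(axis=0)
    # Transmission estimate and scene radiance recovery.
    t = 1.0 - omega * cv2.erode((img_f / A).min(axis=2), kernel)
    t = np.maximum(t, t0)[..., None]
    J = (img_f - A) / t + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)

raw = cv2.imread("underwater.png")                        # hypothetical input file
restored = dark_channel_restore(gray_world(raw))
cv2.imwrite("underwater_restored.png", restored)
```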
References
- Ramkumar, G., A. G., S. K. M., Ayyadurai, M., & S. C. (2021). “An Effectual Underwater Image Enhancement using Deep Learning Algorithm.” Proc. 5th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India.
- Fu, B., Wang, L., Wang, R., Fu, S., Liu, F., & Liu, X. (2020). “Underwater Image Restoration and Enhancement via Residual Two-Fold Attention Networks.” International Journal of Computational Intelligence Systems, 14(1).
- Cai, C., Zhang, Y., & Liu, T. (2019). “Underwater Image Processing System for Image Enhancement and Restoration.” Proc. IEEE 11th International Conference on Communication Software and Networks (ICCSN), Chongqing, China.
- Li, C., Guo, J., Chen, S., Tang, Y., Pang, Y., & Wang, J. (2016). “Underwater image restoration based on minimum information loss principle and optical properties of underwater imaging.” Proc. IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA.
- Luo, W., Duan, S., & Zheng, J. (2021). “Underwater Image Restoration and Enhancement Based on a Fusion Algorithm With Color Balance, Contrast Optimization, and Histogram Stretching.” IEEE Access, 9.
- Sequeira, G., Mekkalki, V., Prabhu, J., Borkar, S., & Desai, M. (2021). “Hybrid Approach for Underwater Image Restoration and Enhancement.” Proc. International Conference on Emerging Smart Computing and Informatics (ESCI), Pune, India.
- Athira, O. K., & Babu, J. (2020). “Underwater Image Restoration using Scene Depth Estimation Technique.” Proc. 3rd International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India.
Citation
Sachin Tiwari, Shivani Malviya, Devansh Sen, Amit Saxena, Arun Pratap Singh, "Underwater Image Restoration using Machine Learning Algorithm," International Journal of Scientific Research in Technology & Management, Vol.2, Issue.3, pp.18-22, 2022.
Zebra Crossing Rule Violation Recognition Using Fisher Vector Representation
Abstract
In the modern era of road safety and traffic management, several signals and markings are assigned to pedestrians and vehicles, and the zebra crossing is one of them. A zebra crossing provides a place where pedestrians can cross the road under the traffic-signal regulations, and it appears as alternating black and white stripes resembling a zebra's markings. Marking it is essential for pedestrian safety, because many casualties occur when pedestrians cross amid traffic. Zebra crossings are found around traffic signals, where vehicles must not cross the marking until the signal turns green. People nevertheless violate this rule, which makes them culpable and causes inconvenience to pedestrians. Such violations can be recognized in real time so that action can be taken to penalize violators. Recognition is performed with a Fisher vector representation and a statistical classifier that classifies the encoded vectors according to statistical data obtained from the input image. The representation can be used with both regression and classification, and the Fisher kernel is able to decide whether the zebra-crossing rule has been violated under a given set of statistical conditions.
Key-Words / Index Term: Zebra Crossing, Dense Classifier, Fisher Vector Representation, Statistical Data, Pedestrian Classification.
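A simplified sketch of a Fisher vector encoding over local descriptors, followed by a linear classifier, to illustrate the representation the abstract relies on. Only the gradient with respect to the GMM means is encoded; the descriptor extraction step, the toy training data, and all parameter values are illustrative assumptions.

```python
# Sketch: GMM-based Fisher vector encoding + linear SVM classification.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def fisher_vector(descriptors, gmm):
    """Mean-gradient Fisher vector with power and L2 normalisation."""
    N = descriptors.shape[0]
    gamma = gmm.predict_proba(descriptors)             # (N, K) posteriors
    diff = (descriptors[:, None, :] - gmm.means_) / np.sqrt(gmm.covariances_)
    fv = (gamma[..., None] * diff).sum(axis=0)         # (K, D) accumulators
    fv /= (N * np.sqrt(gmm.weights_)[:, None])
    fv = fv.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))             # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)

# Toy data standing in for dense local descriptors of road-scene frames.
rng = np.random.default_rng(0)
frames = [rng.normal(size=(200, 16)) for _ in range(40)]   # 40 frames
labels = rng.integers(0, 2, size=40)                        # 1 = violation (toy label)

gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(np.vstack(frames))

X = np.array([fisher_vector(f, gmm) for f in frames])
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```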
References
- The Telegraph. "Zebra Tears to educate motorists who ignore rule." Available: https://www.telegraphindia.com/states/bihar/zebra-tears-to-educatemotorists-who-ignore-rule/cid/1381568
- Ibadov, S., Ibadov, R., Kalmukov, B., & Krutov, V. (2017). “Algorithm for detecting violations of traffic rules based on computer vision approaches.” MATEC Web of Conferences, 132, 05005. DOI: 10.1051/matecconf/201713205005.
- Herumurti, D., Uchimura, K., Koutaki, G., & Uemura, T. (2013). “Urban Road Network Extraction Based on Zebra Crossing Detection from a Very High Resolution RGB Aerial Image and DSM Data.” Proc. International Conference on Signal-Image Technology & Internet-Based Systems, Kyoto, pp. 79-84.
- Álvarez, S., Llorca, D. F., & Sotelo, M. A. (2013). “Camera auto-calibration using zooming and zebra-crossing for traffic monitoring applications.” Proc. 16th International IEEE Conference on Intelligent Transportation Systems (ITSC), The Hague, pp. 608-613.
- Ahmetovic, D., Bernareggi, C., Gerino, A., & Mascetti, S. (2014). “ZebraRecognizer: Efficient and Precise Localization of Pedestrian Crossings.” Proc. 22nd International Conference on Pattern Recognition, Stockholm, pp. 2566-2571.
- Khaliluzzaman, M., & Deb, K. (2016). “Zebra-crossing detection based on geometric feature and vertical vanishing point.” Proc. 3rd International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), Dhaka, pp. 1-6.
- Narkhede, A., Nikam, V., Soni, A., & Sathe, A. (2017). “Automatic Traffic Rule Violation Detection and Number Plate Recognition.” IJSTE - International Journal of Science Technology & Engineering, 3(09), March.
- Rahman, A. M. M., Hossain, M. R., Mehdi, M. Q., Nirob, E. A., & Uddin, J. (2018). “An Automated Zebra Crossing using Arduino-UNO.” Proc. International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2), Rajshahi, pp. 1-4.
Citation
Priyankka Chaurasia, Arun Jhapate, "Zebra Crossing Rule Violation Recognition Using Fisher Vector Representation," International Journal of Scientific Research in Technology & Management, Vol.2, Issue.3, pp.23-27, 2022.
