For outstation students, we offer online project classes, both technical and coding, using net-meeting software.
For details, call: 9886692401 / 9845166723
DHS Informatics provides academic projects based on IEEE Python image processing papers, with implementations of the best and latest IEEE publications. Listed below are the best 2022–2023 IEEE Python Image Processing Projects for CSE, ECE, EEE, and Mechanical engineering students. To download the abstracts of Python domain projects, click here.
For further details, call our head office at +91 98866 92401 / +91 98451 66723; we can send synopses and IEEE papers based on each student's interest. For more details, please visit our head office and get registered.
We believe in quality service and are committed to our students' full satisfaction.
We have been in this service for more than 20 years, and our customers are delighted with our work.
Embedded Final year CSE projects in Bangalore
- Embedded Robotics Projects for M.Tech Final Year Students
- Embedded IEEE Internet of Things Projects for B.E Students
- Embedded Raspberry PI Projects for B.E Final Year Students
- Embedded Automotive Projects for Final Year B.E Students
- Embedded Biomedical Projects for B.E Final Year Students
- Embedded Biometric Projects for B.E Final Year Students
- Embedded Security Projects for B.E Final Year Students
Abstract:
In the clinical setting, during the digital examination of histopathological slides, the pathologist annotates the slides by marking a rough boundary around the suspected tumor region. The marking, or annotation, is generally represented as a polygonal boundary that covers the extent of the tumor in the slide. These polygonal markings are difficult to imitate with CAD techniques, since the tumor regions are heterogeneous and segmenting them would require exhaustive pixel-wise ground-truth annotation. Therefore, for CAD analysis, the ground truths are generally annotated by pathologists explicitly for research purposes. However, this kind of annotation, which is generally required for semantic or instance segmentation, is time-consuming and tedious. In this work, therefore, we try to imitate pathologists' annotations by segmenting tumor extents with polygonal boundaries. For polygon-like annotation or segmentation, we use Active Contours, whose vertices or snake points move toward the boundary of the object of interest to find the region of minimum energy. To penalize the Active Contour, we use a modified U-Net architecture to learn penalization values. The proposed hybrid deep learning model fuses a modern deep learning segmentation algorithm with the traditional Active Contours segmentation technique. The model is tested against both state-of-the-art semantic segmentation models and hybrid models for performance evaluation against contemporary work. The results show that pathologist-like annotation can be achieved by developing hybrid models that integrate domain knowledge through classical segmentation methods like Active Contours and global knowledge through semantic segmentation deep learning models.
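The classical snake step this abstract builds on can be reproduced with scikit-image. The sketch below is a minimal illustration of Active Contour vertices converging toward an object boundary on a grayscale patch; it does not include the paper's learned U-Net penalization term, and the initialization circle and alpha/beta/gamma values are illustrative assumptions.

```python
# Minimal sketch of the classical Active Contour (snake) step via scikit-image.
# The paper's learned U-Net penalization is NOT reproduced here; parameters
# below are illustrative, not the authors' values.
import numpy as np
from skimage import data, filters
from skimage.segmentation import active_contour

image = data.astronaut()[..., 0]  # any grayscale image stands in for a slide patch

# Initialize the snake as a circle of vertices around the region of interest.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 100 * np.sin(s), 220 + 100 * np.cos(s)])  # (row, col)

# Vertices move toward an energy minimum on the smoothed image, yielding a
# polygon-like boundary similar to a pathologist's annotation.
snake = active_contour(filters.gaussian(image, sigma=3),
                       init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)  # (200, 2) polygon vertices
```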
Abstract:
With the increasing number of health issues reported due to obesity and overeating, people have become cautious about their dietary intake, to protect themselves from diseases such as hypertension, diabetes, and other heart-related problems caused by obesity. As per data shared by the WHO, at least 2.8 million people die each year as a result of being overweight or obese. An important part of any healthy diet plan is its calorie intake. Hence, we propose a deep learning-based technique to calculate the calories of the food items present in an image captured by the user. We use a layered approach to predict the calories in a food item, comprising image acquisition, food-item classification, surface-area detection, and calorie prediction.
Abstract:
Traffic sign recognition (TSR) is a significant research branch in the field of unmanned driving; it is very important for driverless vehicles and is often used to read permanent or temporary road signs on the roadside. Traffic sign detection (TSD) and traffic sign classification (TSC) together constitute a complete recognition system. This paper mainly studies traffic sign recognition. Traffic sign recognition is mostly deployed on portable devices, so the size and detection speed of the model are important factors to consider; the model maintains detection accuracy while ensuring speed. The accuracy of the model designed in this paper on the German Traffic Sign Recognition Benchmark (GTSRB) is 99.30%, the parameter count is only 1.3 M, and the trained network model is 4.0 M. The final experimental results show that the network is effective for the classification of traffic signs.
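As a rough illustration of how compact such a classifier can be, here is a hedged Keras sketch of a small CNN sized for the 43-class GTSRB input; the layer widths are assumptions for illustration, not the paper's published architecture.

```python
# Illustrative compact traffic-sign classifier in the spirit of the paper's
# small (~1.3M parameter) model; layer sizes are assumptions, not the authors'.
from tensorflow.keras import layers, models

NUM_CLASSES = 43  # GTSRB has 43 sign classes

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),  # keeps the head tiny for portable devices
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # confirms the parameter count stays small
```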
Abstract:
With the ongoing popularization of online services, digital document images are used in various applications. Meanwhile, some deep learning-based text editing algorithms have emerged that alter the textual information of an image in an end-to-end fashion. In this work, we present a low-cost document forgery algorithm that uses existing deep learning-based technologies to edit practical document images. To achieve this goal, the limitations of existing text editing algorithms regarding complicated characters and complex backgrounds are addressed by a set of network design strategies. First, unnecessary confusion in the supervision data is avoided by disentangling the textual and background information in the source images. Second, to capture the structure of complicated components, the text skeleton is provided as auxiliary information, and continuity in texture is considered explicitly in the loss function. Third, the forgery traces induced by the text editing operation are mitigated by post-processing operations that account for distortions from the print-and-scan channel. Quantitative comparisons of the proposed method and the existing approach show the advantages of our design: the reconstruction error measured in MSE is reduced by about 2/3, and reconstruction quality measured in PSNR and SSIM is improved by 4 dB and 0.21, respectively. Qualitative experiments confirm that the reconstruction results of the proposed method are visually better than the existing approach for both complicated characters and complex textures. More importantly, we demonstrate the performance of the proposed document forgery algorithm under a practical scenario where an attacker is able to alter the textual information in an identity document using only one sample in the target domain. The forged and recaptured samples created by the proposed text editing attack and recapturing operation successfully fooled some existing document authentication systems.
Abstract:
Emotion recognition from speech signals is an important but challenging component of Human-Computer Interaction (HCI). In the literature on speech emotion recognition (SER), many techniques have been utilized to extract emotions from signals, including well-established speech analysis and classification techniques. Deep Learning techniques have been recently proposed as an alternative to traditional techniques in SER. This paper presents an overview of Deep Learning techniques and discusses some recent literature where these methods are utilized for speech-based emotion recognition. The review covers databases used, emotions extracted, contributions made toward speech emotion recognition and limitations related to it.
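Many of the surveyed pipelines begin with spectral features before a deep model consumes them. As a minimal, hedged illustration of that common front end (not taken from any single reviewed paper), the sketch below extracts MFCC statistics with librosa; the file name is a placeholder.

```python
# Hedged sketch of a common SER feature-extraction step: clip-level MFCC
# statistics. "utterance.wav" is a placeholder path.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)   # (40, n_frames)
clip_vector = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(clip_vector.shape)  # (80,) feature vector for a downstream classifier
```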
Abstract:
The explosive growth of massive video data brings great challenges to fast video deduplication. There has been encouraging progress in deduplication techniques in the past few years, especially with the help of binary hashing methods. However, little work to date has studied a generic hash-based framework and an efficient similarity-ranking strategy for video deduplication. This paper proposes a flexible and fast video deduplication framework based on hash codes, which supports hash table indexing using any existing hashing algorithm and ranks the candidate videos by exploring the similarities among key frames over multiple tables. Our experiments on a popular large-scale dataset demonstrate that the proposed framework achieves satisfying performance on the task of video deduplication.
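As a hedged sketch of the framework's two ingredients, the code below hashes key frames with a simple difference hash (standing in for "any existing hashing algorithm") and ranks candidate videos by Hamming similarity over their key-frame hashes; this illustrates the idea, not the authors' implementation.

```python
# Illustrative key-frame hashing and similarity ranking for deduplication.
import numpy as np
from PIL import Image

def dhash(image: Image.Image, size: int = 8) -> np.ndarray:
    """64-bit difference hash: compares adjacent pixels of a tiny grayscale frame."""
    small = np.asarray(image.convert("L").resize((size + 1, size)), dtype=np.int16)
    return (small[:, 1:] > small[:, :-1]).flatten()

def hamming_similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    return 1.0 - np.count_nonzero(h1 != h2) / h1.size

def rank_candidates(query_hashes, candidates):
    """candidates: {video_id: [hash, ...]}. Ranks videos by mean best-match
    similarity of the query's key-frame hashes against each candidate's."""
    scores = {
        vid: np.mean([max(hamming_similarity(q, h) for h in hashes)
                      for q in query_hashes])
        for vid, hashes in candidates.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```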
Abstract:
With the increasing demand for information security and security regulations all over the world, biometric recognition technology has been widely used in our everyday life. In this regard, multimodal biometrics technology has gained interest and become popular due to its ability to overcome a number of significant limitations of unimodal biometric systems. In this paper, a new multimodal biometric human identification system is proposed, based on a deep learning algorithm that recognizes humans using the biometric modalities of iris, face, and finger vein. The structure of the system is based on convolutional neural networks (CNNs), which extract features and classify images with a softmax classifier. To develop the system, three CNN models were combined: one for the iris, one for the face, and one for the finger vein. To build each CNN model, the well-known pre-trained VGG-16 model was used, the Adam optimization method was applied, and categorical cross-entropy was used as the loss function. Techniques to avoid overfitting were applied, such as image augmentation and dropout. For fusing the CNN models, different fusion approaches were employed to explore the influence of the fusion approach on recognition performance; both feature-level and score-level fusion were applied. The performance of the proposed system was empirically evaluated through several experiments on the SDUMLA-HMT multimodal biometrics dataset. The results demonstrate that using three biometric traits in a biometric identification system yields better results than using two traits or one. The results also show that our approach comfortably outperforms other state-of-the-art methods, achieving an accuracy of 99.39% with feature-level fusion and an accuracy of 100% with different methods of score-level fusion.
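Score-level fusion, one of the two fusion strategies explored, reduces to combining per-modality score vectors before the final decision. The sketch below shows weighted-sum fusion with assumed weights; it is illustrative, not the paper's exact scheme.

```python
# Hedged sketch of score-level fusion across the three modalities.
import numpy as np

def score_level_fusion(iris, face, vein, w=(0.4, 0.3, 0.3)):
    """Each input: (n_identities,) softmax scores from one modality's CNN.
    Weights w are illustrative assumptions, typically tuned on validation data."""
    fused = w[0] * iris + w[1] * face + w[2] * vein
    return int(np.argmax(fused))  # identity decision after fusion

# Toy scores for 3 enrolled identities:
print(score_level_fusion(np.array([0.7, 0.2, 0.1]),
                         np.array([0.3, 0.5, 0.2]),
                         np.array([0.6, 0.3, 0.1])))  # -> 0
```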
Abstract:
The application of deep learning methods to diagnose diseases has become a new research topic in the medical field. Skin disease is one of the most common diseases, and its visual presentation is more prominent than that of other disease types. Accordingly, the use of deep learning methods for skin disease image recognition is of great significance and has attracted the attention of researchers. In this study, we review 45 research efforts on the identification of skin disease using deep learning technology since 2016. We analyze these studies from the aspects of disease type, dataset, data processing technology, data augmentation technology, model for skin disease image recognition, deep learning framework, evaluation indicators, and model performance. Moreover, we summarize traditional and machine learning-based skin disease diagnosis and treatment methods. We also analyze the current progress in this field and predict four directions that may become future research topics. Our results show that deep learning-based skin disease image recognition methods outperform dermatologists and other computer-aided treatment methods in skin disease diagnosis, and that fused multi-model deep learning methods achieve the best recognition performance.
Abstract:
The novel coronavirus disease (COVID-19) pandemic has caused a major outbreak in more than 200 countries around the world, leading to a severe impact on the health and lives of many people globally. By October 2020, more than 44 million people had been infected, and more than 1,000,000 deaths had been reported. Computed Tomography (CT) images can be used as an alternative to the time-consuming RT-PCR test to detect COVID-19. In this work, we propose a segmentation framework to detect chest regions in CT images that are infected by COVID-19. An architecture similar to a U-Net model was employed to detect ground-glass regions at the voxel level. As the infected regions tend to form connected components (rather than randomly distributed voxels), a suitable regularization term based on 2D anisotropic total variation was developed and added to the loss function. The proposed model is therefore called "TV-Unet". Experimental results were obtained on a relatively large-scale CT segmentation dataset of around 900 images; incorporating the new regularization term leads to a 2% gain in overall segmentation performance compared to a U-Net trained from scratch. Our experimental analysis, ranging from visual evaluation of the predicted segmentation results to quantitative assessment of segmentation performance (precision, recall, Dice score, and mIoU), demonstrates great ability to identify COVID-19-associated regions of the lungs.
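The added regularizer can be written in a few lines. The PyTorch sketch below shows a segmentation loss with a 2D anisotropic total-variation term, with the weighting factor `lam` as an assumed hyperparameter rather than the paper's value.

```python
# Hedged sketch of a segmentation loss with 2D anisotropic TV regularization,
# in the spirit of "TV-Unet"; lam is an assumed hyperparameter.
import torch
import torch.nn.functional as F

def tv_unet_loss(pred, target, lam=1e-4):
    """pred:   (N, 1, H, W) raw logits from the segmentation network
    target: (N, 1, H, W) ground-truth mask in {0, 1}"""
    bce = F.binary_cross_entropy_with_logits(pred, target)
    p = torch.sigmoid(pred)
    # Anisotropic TV: absolute differences along columns and rows, encouraging
    # connected components rather than scattered voxels.
    tv = (p[..., :, 1:] - p[..., :, :-1]).abs().mean() \
       + (p[..., 1:, :] - p[..., :-1, :]).abs().mean()
    return bce + lam * tv

loss = tv_unet_loss(torch.randn(2, 1, 64, 64),
                    torch.randint(0, 2, (2, 1, 64, 64)).float())
print(loss.item())
```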
Abstract:
Existing computer vision systems on autonomous vehicles still depend on computationally intensive GPUs. Not only are they power-hungry, they also take up considerable space in a self-driving car. This paper attempts to design an efficient, low-power alternative by running a deep learning system on an embedded device, in this case a Raspberry Pi 3 B+. It is more environmentally friendly and occupies far less space than the existing system. Specifically, this paper implements a semantic segmentation task on an embedded system. The U-Net model for semantic segmentation is optimized to require as little computation as possible by tuning down the required hyperparameters. The segmentation task uses the Cityscapes dataset with a total of 5 labels: human, road, vehicles, background, and a neutral class that is neglected. The inference time needed to segment an image is 0.5 s, and the processing rate on a video test is 1.8 FPS.
Abstract:
Medical image segmentation is regarded as an important component of a computer-aided diagnosis (CAD) system, as it directly affects overall system performance. In this paper, we propose a new fully convolutional encoder-decoder model for lung segmentation named TransResUNet. We developed this architecture by improving the state-of-the-art U-Net model. As improvements over the classical U-Net, we introduced a pre-trained encoder, a special skip connection, and a post-processing module in the proposed architecture. The proposed TransResUNet achieves its performance with about 24% fewer parameters than the baseline U-Net. The implementation (based on Keras) of our proposed model is publicly available at https://sakibreza.github.io/TransResUNet with additional resources.
Abstract:
Semantic image segmentation is the process of labeling each pixel of an image with its corresponding class. An encoder-decoder-based approach, like U-Net and its variants, is a popular strategy for solving medical image segmentation tasks. To improve the performance of U-Net on various segmentation tasks, we propose a novel architecture called DoubleU-Net, which is a combination of two U-Net architectures stacked on top of each other. The first U-Net uses a pre-trained VGG-19 as the encoder, which has already learned features from ImageNet and can be transferred to another task easily. To capture more semantic information efficiently, we added another U-Net at the bottom. We also adopt Atrous Spatial Pyramid Pooling (ASPP) to capture contextual information within the network. We have evaluated DoubleU-Net using four medical segmentation datasets, covering various imaging modalities such as colonoscopy, dermoscopy, and microscopy. Experiments on the 2015 MICCAI sub-challenge on automatic polyp detection dataset, the CVC-ClinicDB, the 2018 Data Science Bowl challenge, and the Lesion Boundary Segmentation datasets demonstrate that DoubleU-Net outperforms U-Net and the baseline models. Moreover, DoubleU-Net produces more accurate segmentation masks, especially on the CVC-ClinicDB and the 2015 MICCAI sub-challenge on automatic polyp detection dataset, which contain challenging images such as small and flat polyps. These results show the improvement over the existing U-Net model. The encouraging results, produced on various medical image segmentation datasets, show that DoubleU-Net can be used as a strong baseline for both medical image segmentation and cross-dataset evaluation testing to measure the generalizability of Deep Learning (DL) models.
Abstract:
Currently, there is an urgent need for efficient tools to assess the diagnosis of COVID-19 patients. In this paper, we present feasible solutions for detecting and labeling infected tissues in CT lung images of such patients. Two structurally different deep learning techniques, SegNet and U-NET, are investigated for semantically segmenting infected tissue regions in CT lung images. SegNet is characterized as a scene segmentation network, and U-NET as a medical segmentation tool. Both networks were exploited as binary segmenters to discriminate between infected and healthy lung tissue, and as multi-class segmenters to learn the infection type in the lung. Each network is trained using seventy-two images, validated on ten images, and tested on the remaining eighteen images. Several statistical scores are calculated for the results and tabulated accordingly. The results show the superior ability of SegNet in classifying infected/non-infected tissues compared to the other methods (with 0.95 mean accuracy), while U-NET shows better results as a multi-class segmenter (with 0.91 mean accuracy). Semantically segmenting CT scans of COVID-19 patients is a crucial goal because it would not only assist in disease diagnosis but also help in quantifying the severity of the illness and hence prioritizing treatment accordingly. We propose computer-based techniques that prove to be reliable detectors of infected tissue in lung CT scans. The availability of such methods in today's pandemic would help automate, prioritize, speed up, and broaden the treatment of COVID-19 patients globally.
Abstract:
Over the last few decades, the number of road casualties has grown continuously across the globe. Intelligent transportation systems are now being developed to enable safe and relaxed driving, and scene understanding of the surrounding environment is an integral part of them. While several approaches have been developed for semantic scene segmentation based on deep learning and Convolutional Neural Networks (CNNs), these approaches assume well-structured road infrastructure and driving environments. We focus our work on the recent India Driving Lite Dataset (IDD-Lite), which contains data from an unstructured driving environment and was hosted as an online challenge at NCVPRIPG 2019. We propose a novel architecture named Eff-UNet, which combines the effectiveness of a compound-scaled EfficientNet encoder for feature extraction with a UNet decoder for reconstructing the fine-grained segmentation map. High-level feature information as well as low-level spatial information, useful for precise segmentation, are combined. The proposed architecture achieved 0.7376 and 0.6276 mean Intersection over Union (mIoU) on the validation and test datasets respectively, and won first prize in the IDD-Lite segmentation challenge, outperforming other approaches in the literature.
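A comparable encoder-decoder (not the authors' exact code) can be assembled with the segmentation_models_pytorch package, which supports EfficientNet encoders inside a U-Net decoder; the class count below is an assumption for illustration.

```python
# Hedged sketch of an EfficientNet-encoder U-Net, in the spirit of Eff-UNet,
# built from the segmentation_models_pytorch package.
import segmentation_models_pytorch as smp
import torch

model = smp.Unet(
    encoder_name="efficientnet-b0",   # compound-scaled encoder; the paper scales this up
    encoder_weights="imagenet",       # transfer-learned feature extraction
    in_channels=3,
    classes=8,                        # label count is an assumption, not IDD-Lite's spec
).eval()

x = torch.randn(1, 3, 256, 256)       # input sides must be divisible by 32
with torch.no_grad():
    out = model(x)
print(out.shape)                      # (1, 8, 256, 256) per-pixel class scores
```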
Abstract:
With the rapid progress of recent years, techniques that generate and manipulate multimedia content can now achieve a very high level of realism. The boundary between real and synthetic media has become very thin. On the one hand, this opens the door to a series of exciting applications in different fields such as creative arts, advertising, film production, and video games. On the other hand, it poses enormous security threats. Software packages freely available on the web allow any individual, without special skills, to create very realistic fake images. These can be used to manipulate public opinion during elections, commit fraud, or discredit or blackmail people. Therefore, there is an urgent need for automated tools capable of detecting false multimedia content and avoiding the spread of dangerous false information.
Abstract:
Autism is a developmental disorder in children that worsens as they age. An autistic child has problems with interaction and communication, as well as restricted behavior. If autistic children are diagnosed early, they can have a quality life through thorough care and therapy. However, in many developed countries, children with autism are diagnosed too late. Moreover, a trained medical expert is required to identify autism, as there are no direct medical tests, and medical practitioners need considerable time to detect it because the children have to be monitored intensively. In this research, artificial intelligence algorithms are utilized to detect autism in children from images in ways not viable for ordinary observers. We employ five different algorithms, namely Multilayer Perceptron (MLP), Random Forest (RF), Gradient Boosting Machine (GBM), AdaBoost (AB), and Convolutional Neural Network (CNN), for classifying Autism Spectrum Disorder (ASD) in children. We then propose a CNN-based prediction model that can be used for detecting ASD, especially in children.
Abstract:
Remote sensing image scene classification, which aims to label remote sensing images with a set of semantic categories based on their contents, has broad applications in a range of fields. Propelled by the powerful feature-learning capabilities of deep neural networks, remote sensing image scene classification driven by deep learning has drawn remarkable attention and achieved significant breakthroughs. However, to the best of our knowledge, a comprehensive review of recent achievements regarding deep learning for scene classification of remote sensing images is still lacking. Specifically, we discuss the main challenges of remote sensing image scene classification and Convolutional Neural Network-based classification methods. In addition, we introduce the image preprocessing techniques used for remote sensing image scene classification and summarize their performance.
Abstract:
Communication is the fundamental channel for sharing thoughts, yet the number of deaf, mute, and visually impaired people has increased in recent years, and those who cannot speak struggle to communicate with others. A particularly hard scenario arises when deaf or mute people must communicate with the visually impaired. Gesture recognition is the mathematical interpretation of human motion by a computing device. Sign language provides the best communication platform for the hearing- and speech-impaired to interact with others: they use hand gestures, while blind people can perceive only voice. Converting hand gestures to voice output is therefore the solution. To move a step closer to these goal applications, we use a CNN with deep learning and TensorFlow. The idea consists of designing and implementing a system, using artificial intelligence, image processing, and data mining concepts, that takes hand gestures as input and produces recognizable outputs as text and voice with good accuracy.
Abstract:
At present, existing abnormal-event detection models based on deep learning mainly focus on data represented in feature form, paying little attention to the internal structural characteristics of the feature vectors. In addition, a single classifier struggles to ensure classification accuracy. To address these issues, in this paper we propose a hybrid abnormal-event detection method based on feature-expectation classification of video frames. Our main contribution is to calibrate the classification of a single classifier by constructing feature expectations. First, we employ convolutional neural network and long short-term memory models to extract the spatiotemporal features of video frames, and then construct the feature expectation for each key frame of every video, which captures the internal sequential and topological relational characteristics of the structured features. Second, we project the expectation onto a sparse vector and combine it with a support vector classifier to calibrate the results of a linear support vector classifier. Finally, experiments on a common video dataset, in comparison with existing works, demonstrate that the performance of the proposed method is better than several state-of-the-art approaches.
Abstract
In recent years, due to the booming development of online social networks, fake news for various commercial and political purposes has been appearing in large numbers and spreading widely in the online world. With deceptive words, online social network users can be easily misled by such fake news, which has already had tremendous effects on offline society. An important goal in improving the trustworthiness of information in online social networks is to identify fake news in a timely manner. This paper investigates the principles, methodologies, and algorithms for detecting fake news articles, creators, and subjects in online social networks, and evaluates the corresponding performance. It addresses the challenges introduced by the unknown characteristics of fake news and the diverse connections among news articles, creators, and subjects. This paper introduces a novel gated graph neural network, namely FAKEDETECTOR. Based on a set of explicit and latent features extracted from the textual information, FAKEDETECTOR builds a deep diffusive network model to learn the representations of news articles, creators, and subjects simultaneously.
Abstract: In developing and poor countries, it is not easy to eliminate the tuberculosis (TB) outbreak, owing to persistent social inequalities in health, the small number of local healthcare professionals such as doctors, and the weak healthcare infrastructure found in poor-resource settings. Modern computer-aided strategies have improved the recognition of TB in screening. In this paper, we offer a novel approach using a Convolutional Neural Network (CNN) to deal with unbalanced, under-categorized X-ray image datasets; using this approach, we boost the efficiency and accuracy of classifying multiple TB manifestations by a large margin. We explore the effectiveness and efficiency of shuffle sampling with cross-validation in training the network and find a remarkable effect on medical image classification. This approach and its results demonstrate a promising path toward more accurate and faster TB recognition in healthcare facilities.
Abstract: Global health has been seriously threatened by the rapid spread of the coronavirus disease. In some cases, high-risk patients require early detection. Considering the lower sensitivity of RT-PCR as a screening tool, medical imaging techniques like computed tomography (CT) provide great advantages in comparison, and CT or X-ray image diagnosis plays an important role in reducing fatality. To lessen the burden on radiologists in this global health crisis, the use of computer-aided diagnosis is crucial. For this reason, automated image segmentation is also of great benefit for clinical decision assistance, quantitative research, and health monitoring.
Abstract: Cardiovascular diseases (CVD) such as stroke and heart attack pose a high risk to human life among the heart disease categories. Predicting cardiovascular diseases from hospitalized patients' Intensive Care Unit (ICU) data is a challenging, high-stakes task. Recent technologies play a vital role in the medical industry in predicting various health conditions; Machine Learning (ML) is one of them, predicting diseases using quality datasets and data-analysis models. In this paper, we propose a model that finds similarity using Hierarchical Random Forest Formation with a Nonlinear Regression Model (HRFFNRM). This model produces 90.3% accurate predictions of cardiovascular disease.
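HRFFNRM itself is a custom hierarchical construction. As a hedged baseline sketch of the random-forest side of the idea, the snippet below trains a plain scikit-learn random forest on synthetic stand-in data, since the ICU dataset is not public here.

```python
# Baseline sketch only: a plain random forest standing in for the paper's
# hierarchical HRFFNRM model; the data is a synthetic placeholder.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
rf = RandomForestClassifier(n_estimators=200, random_state=42)
print("5-fold CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())
```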
Abstract: Primary-sector yield is the portion of GDP on which the Indian economy depends heavily, with agriculture the major contributor. This is one reason that securing this income is of the utmost importance, since diseases can affect the yield. Automating the detection of plant diseases can help identify diseases in their early stages and reduce their effect on the yield. In this paper, five widely known pre-trained Convolutional Neural Network (CNN) architectures, viz. GoogLeNet, ResNet, ShuffleNet, ResNeXt, and Wide ResNet, are compared, along with their ensemble under different weights, for the detection of three paddy leaf diseases, viz. Blast, Bacterial Leaf Blight (BLB), and Brownspot. The models were compared on performance parameters such as the loss and accuracy of the trained models.
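The weighted soft-voting ensemble can be sketched as follows, using three of the named backbones with assumed weights and an untrained 3-class head; this is an illustrative construction, not the paper's trained models.

```python
# Hedged sketch of a weighted CNN ensemble for the three paddy diseases;
# weights and heads are illustrative, the backbones are untrained here.
import torch
import torch.nn.functional as F
import torchvision.models as tvm

NUM_CLASSES = 3  # Blast, Bacterial Leaf Blight (BLB), Brownspot

backbones = {
    "resnet": tvm.resnet18(weights=None),
    "resnext": tvm.resnext50_32x4d(weights=None),
    "wide_resnet": tvm.wide_resnet50_2(weights=None),
}
for m in backbones.values():
    m.fc = torch.nn.Linear(m.fc.in_features, NUM_CLASSES)  # 3-class head
    m.eval()

weights = {"resnet": 0.3, "resnext": 0.4, "wide_resnet": 0.3}  # tuned on validation data

def predict(x):
    """Weighted soft voting: sum the softmax outputs under per-model weights."""
    with torch.no_grad():
        probs = sum(weights[k] * F.softmax(m(x), dim=1) for k, m in backbones.items())
    return probs.argmax(dim=1)

print(predict(torch.randn(2, 3, 224, 224)))  # one class index per image
```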
Abstract: The COVID-19 outbreak has put the whole world in an unprecedentedly difficult situation, bringing life around the world to a frightening halt and claiming thousands of lives. Due to COVID-19's spread across 212 countries and territories, with the numbers of infected cases and deaths mounting to 5,212,172 and 334,915 (as of May 22, 2020), it remains a real threat to the public health system. This paper renders a response to combat the virus through Artificial Intelligence (AI). Several Deep Learning (DL) methods have been illustrated to reach this goal, including Generative Adversarial Networks (GANs), Extreme Learning Machine (ELM), and Long Short-Term Memory (LSTM). It delineates an integrated bioinformatics approach in which different kinds of information from a continuum of structured and unstructured data sources are put together to form user-friendly platforms for physicians and researchers. The main advantage of these AI-based platforms is to accelerate the process of diagnosis and treatment of COVID-19. The most recent related publications and medical reports were investigated with the purpose of choosing inputs and targets of the network that could facilitate a reliable Artificial Neural Network-based tool for challenges associated with COVID-19. Furthermore, there are specific inputs for each platform, including various forms of data such as clinical data and medical imaging, which can improve the performance of the introduced approaches toward the best responses in practical applications.
Abstract: Object detection, which aims to automatically mark the coordinates of objects of interest in pictures or videos, is an extension of image classification. In recent years, it has been widely used in intelligent traffic management, intelligent monitoring systems, military object detection, and surgical instrument positioning in medical navigation surgery. COVID-19, a novel coronavirus that broke out at the end of 2019, poses a serious threat to public health, and many countries require everyone to wear a mask in public to prevent the spread of the virus. To effectively prevent the spread of the coronavirus, we present an object detection method based on the Single-Shot Detector (SSD) that focuses on accurate, real-time face-mask detection in supermarkets. We make contributions in three aspects: 1) presenting a lightweight backbone network for feature extraction, based on SSD and spatially separable convolution, to improve detection speed and meet the requirements of real-time detection; 2) proposing a Feature Enhancement Module (FEM) to strengthen the deep features learned by CNN models, enhancing the feature representation of small objects; and 3) constructing COVID-19-Mask, a large-scale dataset for detecting whether shoppers are wearing masks, built by collecting images in two supermarkets. The experimental results illustrate the high detection precision and real-time performance of the proposed algorithm.
Abstract: Depression is a major depressive disorder: a psychological problem that hampers a person's daily life and decreases productivity, and it is becoming a severe problem in our world. People of any age can be depressed. Despite a flourishing tech industry in Bangladesh, cases of depression among tech employees are frequently seen, which eventually leads to unwanted situations in employees' professional careers and personal lives. Depressed employees can neither concentrate on work nor be productive, and such incidents are becoming common in the tech industry, making it difficult to achieve desired goals. People can be depressed for various reasons; risk factors that contribute to depression include family problems, work pressure, lack of physical movement, drug abuse, etc. This research aims to predict the depression risk of a tech employee and determine the root cause of depression, so that it becomes easier to treat depression at an early stage. To predict depression risk, the authors use an Adaboosted decision tree. With this adaptive boosting technique applied to the standard decision-tree approach, in which the errors of previous models are corrected, the authors achieve better accuracy than the standard decision tree. The features of the dataset used to train and test the machine learning model were determined by a psychiatrist through rigorous analysis, selected from risk factors that contribute to depression. The data used for this research were collected under the direct supervision of a psychiatrist and a technology expert. The authors' primary purpose is to predict depression at a preventive stage, so that necessary measures can be taken to treat depression among tech employees and avoid unwanted situations.
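A minimal sketch of an Adaboosted decision tree with scikit-learn (the `estimator` keyword assumes version 1.2+); the synthetic data stands in for the psychiatrist-curated survey features, which are not public.

```python
# Minimal sketch of boosting a decision tree, in the spirit of the paper;
# synthetic placeholder data, not the authors' survey dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Boosting reweights the samples earlier trees got wrong, "correcting the
# errors of previous models" as the abstract puts it.
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=2),  # scikit-learn >= 1.2 keyword
    n_estimators=100,
    random_state=0,
)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```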
Abstract: A real-world animal biometric system that detects and describes animal life in image and video data is an emerging subject in machine vision. These systems develop computer vision approaches for the classification of animals. We present a novel method for animal face classification based on features from a popular convolutional neural network (CNN), using the CNN to automatically extract features, learn, and classify. The proposed method may also be used in other areas of image classification and object recognition. The experimental results show that automatic feature extraction with a CNN is better than other simple feature extraction techniques (both local and appearance-based features), and the authors show that the proposed technique has a positive effect on classification accuracy.
Abstract: The leaves of a plant provide the most important information for identifying the type of plant and the type of disease infecting the leaf. Plants play an important role in the biological field. In this project, we describe the development of an Android application that gives users or farmers the capability to identify plant leaf diseases from photographs of plant leaves taken within the application. Detecting diseases on a plant leaf at an early stage makes it possible to overcome and treat them appropriately, by informing the farmer which preventive action should be taken.
Abstract: In medical diagnostic applications, early defect detection is a crucial task, as it provides critical insight into diagnosis. Medical imaging is an actively developing field in engineering, and Magnetic Resonance Imaging (MRI) is one of the reliable imaging techniques on which medical diagnosis is based. Manual inspection of these images is a tedious job, as the amount of data and the minute details are hard for humans to recognize, so automating these techniques is crucial. In this paper, we propose a method to make tumor detection easier. MRI-based brain tumor detection is a complicated problem: due to its complexity and variance, achieving good accuracy is a challenge, and using the AdaBoost machine learning algorithm we improve on this accuracy issue. The proposed system consists of three parts: preprocessing, feature extraction, and classification. Preprocessing removes noise from the raw data; for feature extraction we use the Gray-Level Co-occurrence Matrix (GLCM); and for classification we use a boosting technique (AdaBoost).
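The GLCM feature-extraction step can be sketched with scikit-image as below; the distances, angles, and chosen texture properties are illustrative assumptions, and the random array stands in for a denoised MRI slice.

```python
# Hedged sketch of GLCM texture features feeding a boosted classifier.
# Note: graycomatrix/graycoprops were spelled greycomatrix/greycoprops in
# scikit-image releases before 0.19.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(slice_u8: np.ndarray) -> np.ndarray:
    """slice_u8: 2D uint8 MRI slice (after denoising / preprocessing)."""
    glcm = graycomatrix(slice_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Placeholder slice; real input would come from the preprocessing stage.
features = glcm_features(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
print(features.shape)  # 8 texture descriptors per slice, ready for AdaBoost
```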
Abstract: Deep learning has brought a series of breakthroughs in image processing. Specifically, there are significant improvements in the application of food image classification using deep learning techniques. However, very little work has studied the classification of food ingredients. Therefore, this paper proposes a new framework, called DeepFood, which not only extracts rich and effective features from a dataset of food ingredient images using deep learning but also improves the average accuracy of multi-class classification by applying advanced machine learning techniques. First, a set of transfer learning algorithms based on Convolutional Neural Networks (CNNs) is leveraged for deep feature extraction. Then, a multi-class classification algorithm is exploited based on the performance of the classifiers on each deep feature set. The DeepFood framework is evaluated on a multi-class dataset that includes 41 classes of food ingredients and 100 images per class. Experimental results illustrate the effectiveness of the DeepFood framework for multi-class classification of food ingredients. The model that integrates ResNet deep feature sets, Information Gain (IG) feature selection, and the SMO classifier has shown its superiority for food-ingredient recognition compared with several existing works in this area.
Abstract: We consider the use of deep Convolutional Neural Networks (CNNs) with transfer learning for the image classification and detection problems posed within the context of X-ray baggage security imagery. The CNN approach requires large amounts of data to facilitate a complex end-to-end feature extraction and classification process. Within the context of X-ray security screening, the limited availability of object-of-interest data examples can thus pose a problem. To overcome this issue, we employ a transfer learning paradigm such that a pre-trained CNN, primarily trained for generalized image classification tasks where sufficient training data exists, can be optimized explicitly as a later secondary process toward this application domain. To provide a consistent feature-space comparison between this approach and traditional feature-space representations, we also train a Support Vector Machine (SVM) classifier on CNN features. We empirically show that fine-tuned CNN features yield superior performance to conventional hand-crafted features on object classification tasks within this context. Overall we achieve 0.994 accuracy based on AlexNet features trained with an SVM classifier. In addition to classification, we also explore the applicability of multiple CNN-driven detection paradigms, such as sliding-window-based CNN (SW-CNN), Faster R-CNN (F-RCNN), Region-based Fully Convolutional Networks (R-FCN), and YOLOv2. We train numerous networks tackling both single and multiple detections over SW-CNN/F-RCNN/R-FCN/YOLOv2 variants. YOLOv2, Faster R-CNN, and R-FCN provide superior results to the more traditional SW-CNN approaches. With YOLOv2, using input images of size 544×544, we achieve 0.885 mean average precision (mAP) for a six-class object detection problem. The same approach with an input of size 416×416 yields 0.974 mAP for the two-class firearm detection problem and requires approximately 100 ms per image. Overall we illustrate the comparative performance of these techniques and show that object localization strategies cope well with cluttered X-ray security imagery, where classification techniques fail.
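The "SVM on CNN features" baseline reduces to freezing a pre-trained network and training an SVM on its penultimate activations. The sketch below does this with torchvision's AlexNet as an illustration of the pattern, not the authors' training setup; the training tensors are placeholders.

```python
# Hedged sketch: frozen AlexNet features + SVM (torchvision >= 0.13 weights API).
import torch
import torchvision.models as tvm
from sklearn.svm import SVC

alexnet = tvm.alexnet(weights=tvm.AlexNet_Weights.DEFAULT).eval()

def cnn_features(batch: torch.Tensor):
    """batch: (N, 3, 224, 224), ImageNet-normalized. Returns (N, 4096) features."""
    with torch.no_grad():
        x = alexnet.avgpool(alexnet.features(batch)).flatten(1)
        for layer in list(alexnet.classifier)[:-1]:  # drop only the final class logits
            x = layer(x)
    return x.numpy()

# Illustrative usage with placeholder tensors/labels:
# svm = SVC(kernel="rbf").fit(cnn_features(train_imgs), train_labels)
# print(svm.score(cnn_features(test_imgs), test_labels))
```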
Abstract: Facial expression analysis and recognition have been researched since the 17th century. The foundational studies on facial expressions, which form the basis of today's research, can be traced back a few centuries: a detailed note on the various expressions and movements of the head muscles was given in 1649 by John Bulwer [1]. Another important milestone in the study of facial expressions and human emotions is the work done by the psychologist Paul Ekman [2] and his colleagues in the 1970s, which has had a large influence on the development of modern-day automatic facial expression recognizers and led to the comprehensive Facial Action Coding System (FACS), since then the de facto standard for facial expression recognition. Over the last decades, automatic facial expression analysis has become an active research area with potential applications in fields such as Human-Computer Interfaces (HCI), image retrieval, security, and human emotion analysis. Facial expressions are extremely important in any human interaction and, in addition to emotions, also reflect other mental activities, social interaction, and physiological signals. In this paper, we propose an Artificial Neural Network (ANN) with two hidden layers, based on multiple Radial Basis Function Networks (RBFNs), to recognize facial expressions. The ANN is trained on features extracted from images by applying multi-scale and multi-orientation Gabor filters. We consider both subject-independent and subject-dependent facial expression recognition, using the JAFFE and CK+ benchmarks to evaluate the proposed model.
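The Gabor feature-extraction front end can be sketched with scikit-image; the frequencies and orientations below are illustrative assumptions, and the resulting vector would feed an RBFN-based ANN like the one described above.

```python
# Hedged sketch of multi-scale, multi-orientation Gabor features for a face crop.
import numpy as np
from skimage.filters import gabor

def gabor_features(face: np.ndarray) -> np.ndarray:
    """face: 2D grayscale image; returns mean/std of each filter's real response."""
    feats = []
    for frequency in (0.1, 0.2, 0.3):                 # 3 scales (assumed values)
        for theta in np.arange(0, np.pi, np.pi / 4):  # 4 orientations
            real, _imag = gabor(face, frequency=frequency, theta=theta)
            feats += [real.mean(), real.std()]
    return np.asarray(feats)

feats = gabor_features(np.random.rand(64, 64))  # placeholder face crop
print(feats.shape)  # (24,) feature vector as ANN input
```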
Abstract: In 1899, Galton first captured ink-on-paper fingerprints of a single child from birth until the age of 4.5 years, manually compared the prints, and concluded that "the print of a child at the age of 2.5 years would serve to identify him ever after". Since then, ink-on-paper fingerprinting and manual comparison methods have been superseded by digital capture and automatic fingerprint comparison techniques, but only a few feasibility studies on child fingerprint recognition have been conducted. Here, we present the first systematic and rigorous longitudinal study that addresses the following questions: (i) Do fingerprints of young children possess the salient features required to uniquely recognize a child? (ii) If so, at what age can a child's fingerprints be captured with sufficient fidelity for recognition? (iii) Can a child's fingerprints be used to reliably recognize the child as he ages? For our study, we collected fingerprints of 309 children (0-5 years old) four different times over a one-year period. We show, for the first time, that fingerprints acquired from a child as young as 6 hours old exhibit distinguishing features necessary for recognition, and that state-of-the-art fingerprint technology achieves high recognition accuracy (98.9% true accept rate at 0.1% false accept rate) for children older than 6 months. Additionally, we use mixed-effects statistical models to study the persistence of child fingerprint recognition accuracy and show that the recognition accuracy is not significantly affected over the one-year time lapse in our data. Given rapidly growing requirements to recognize children for vaccination tracking, delivery of supplementary food, and national identification documents, our study demonstrates that fingerprint recognition of young children (6 months and older) is a viable solution based on available capture and recognition technology.
Abstract: Malaria is a very serious infectious disease caused by a peripheral blood parasite of the genus Plasmodium. Conventional microscopy, currently "the gold standard" for malaria diagnosis, has occasionally proved inefficient, since it is time-consuming and its results are difficult to reproduce. As malaria poses a serious global health problem, automation of the evaluation process is of high importance. In this work, an accurate, rapid, and affordable model of malaria diagnosis using stained thin blood smear images was developed. The method makes use of the intensity features of Plasmodium parasites and erythrocytes. Images of infected and non-infected erythrocytes were acquired and pre-processed, relevant features were extracted, and eventually a diagnosis was made based on the features extracted from the images. A set of intensity-based features is proposed, and the performance of these features on red blood cell samples from the created database was evaluated using an artificial neural network (ANN) classifier. The results show that these features can be successfully used for malaria detection.
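A hedged sketch of the pipeline's final stages: simple intensity statistics per segmented cell, classified with a small neural network (scikit-learn's MLPClassifier standing in for the paper's ANN). The feature choices are illustrative, not the paper's exact set.

```python
# Illustrative intensity features per erythrocyte crop, feeding a small ANN.
import numpy as np
from sklearn.neural_network import MLPClassifier

def intensity_features(cell: np.ndarray) -> np.ndarray:
    """cell: 2D grayscale crop (uint8) of a segmented erythrocyte."""
    hist, _ = np.histogram(cell, bins=16, range=(0, 255), density=True)
    return np.hstack([cell.mean(), cell.std(), np.median(cell), hist])

# Illustrative usage with placeholder crops and labels:
# X = np.stack([intensity_features(c) for c in cell_crops])
# ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)
```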
Abstract: This paper proposes a skin disease detection method based on image processing techniques. The method is mobile-based, and hence very accessible even in remote areas, and it is completely noninvasive to the patient's skin. The patient provides an image of the infected area of the skin as input to the prototype; image processing techniques are performed on this image, and the detected disease is displayed as output. The proposed system is highly beneficial in rural areas where access to dermatologists is limited.
Abstract: Video OCR is a technique that can greatly help to locate topics of interest in video via the automatic extraction and reading of captions and annotations, as text in video can provide key indexing information, and recognizing such text for search applications is critical. The major difficulty in character recognition for video is degraded and deformed characters, low-resolution characters, and very complex backgrounds; to tackle these problems, preprocessing of the text image plays a vital role. Most OCR engines work on binary images, so finding a good binarization procedure for the image is important for obtaining the desired result. An accurate binarization process minimizes the error rate of video OCR.
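As a minimal illustration of the binarization step the abstract singles out, the OpenCV sketch below applies Otsu thresholding to an upscaled caption region; the file names and the upscale factor are placeholders, not part of the paper.

```python
# Illustrative caption binarization before OCR, using OpenCV's Otsu threshold.
import cv2

frame = cv2.imread("frame.png")                      # placeholder: extracted video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray, None, fx=2, fy=2,            # upscale low-resolution captions
                  interpolation=cv2.INTER_CUBIC)
# Otsu picks the global threshold automatically; OCR engines expect binary input.
_thr, binary = cv2.threshold(gray, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("caption_binary.png", binary)
```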
Abstract: Determining the material category of a surface from an image is a demanding perception task that is drawing increasing attention. Following the recent remarkable results achieved in image classification and object detection using Convolutional Neural Networks (CNNs), we empirically study material classification of everyday objects employing these techniques. More specifically, we conduct a rigorous evaluation of how state-of-the-art CNN architectures compare on common ground over widely used material databases. Experimental results on three challenging material databases show that the best-performing CNN architectures achieve high mean average precision when classifying materials.
Abstract: Because capturing fruit fly images requires high-resolution devices and highly trained users, extracting features from these images for classification is complex. Therefore, a bilinear CNN model based on mid-level and high-level feature fusion (FB-CNN) is proposed for classifying fruit fly images. In the first step, the images of fruit flies are blurred with a Gaussian filter, and the features of the images are then extracted automatically using a CNN. Afterward, the mid- and high-level features are selected to represent the local and global features, respectively, and are then jointly represented. Finally, the FB-CNN model is constructed to complete the task of fruit fly image classification. Experimental data show that the FB-CNN model can effectively classify four kinds of fruit fly images, with accuracy, precision, recall, and F1 score on the testing dataset each at 95.00%.
Abstract: Gender prediction accuracy increases as CNN architectures evolve. This paper proposes voting schemes that utilize already developed CNN models to further improve gender prediction accuracy. Majority voting usually requires an odd number of models, while the proposed softmax-based voting can utilize any number of models to improve accuracy. Experiments show that voting over CNN models further improves gender prediction accuracy, and that softmax-based voters always show better gender prediction accuracy than majority voters composed of the same CNN models.
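The core of softmax-based voting is summing the per-class probabilities across models before taking the argmax, which works for any model count; a minimal NumPy sketch:

```python
# Minimal sketch of softmax-based voting over any number of CNN outputs.
import numpy as np

def softmax_vote(probabilities: np.ndarray) -> np.ndarray:
    """probabilities: (n_models, n_samples, n_classes) softmax outputs.
    Summing before the argmax lets even two models vote, unlike majority
    voting, which prefers an odd model count."""
    return probabilities.sum(axis=0).argmax(axis=1)

# Two models disagree on the argmax; the summed confidence decides.
probs = np.array([
    [[0.55, 0.45]],   # model A leans class 0
    [[0.20, 0.80]],   # model B is confident in class 1
])
print(softmax_vote(probs))  # -> [1]
```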
Abstract: In real-world applications such as emotion recognition from recorded brain activity, data are captured from electrodes over time, and these signals constitute a multidimensional time series. In this paper, the Echo State Network (ESN), a recurrent neural network with great success in time series prediction and classification, is optimized with different neural plasticity rules for classifying emotions from electroencephalogram (EEG) time series. The neural plasticity rules are a kind of unsupervised learning adapted for the reservoir, i.e. the hidden layer of the ESN. More specifically, Oja's rule, the BCM rule, and the Gaussian intrinsic plasticity rule were investigated in the context of EEG-based emotion recognition. The study also includes a comparison of offline and online training of the ESN. When testing on the well-known affective benchmark DEAP dataset, which contains EEG signals from 32 subjects, we find that pre-training the ESN with Gaussian intrinsic plasticity enhances the classification accuracy and outperforms an ESN pre-trained with synaptic plasticity. Four classification problems were conducted in which the system complexity increases and the discrimination becomes more challenging, i.e. inter-subject emotion discrimination. Our proposed method achieves higher performance than state-of-the-art methods.
Abstract: Reversible data hiding (RDH) in color images is an important topic in data hiding. This paper presents an efficient RDH algorithm for color images via double-layer embedding. The key contribution is the proposed double-layer embedding technique based on histogram shifting (HS). The technique exploits image interpolation to generate prediction-error matrices for HS in the first-layer embedding, and uses local pixel similarity to calculate difference matrices for HS in the second-layer embedding. It inherits reversibility from HS and achieves high embedding capacity due to the use of double layers in data embedding. In addition, inter-channel correlation is incorporated into both embedding layers to generate histograms with high peaks, so as to improve embedding capacity. Experiments with open standard datasets validate the performance of the proposed RDH algorithm. Comparison results show that it outperforms some state-of-the-art RDH algorithms in terms of embedding capacity and visual quality.
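As a hedged illustration of the histogram-shifting primitive that each embedding layer builds on, the sketch below performs single-layer HS directly on a grayscale image; it assumes an empty (or near-empty) bin exists above the peak and omits the paper's prediction-error and inter-channel machinery. Extraction reverses the process using the stored (peak, zero) pair.

```python
# Illustrative single-layer histogram-shifting embedding, not the paper's
# double-layer algorithm; a real scheme keeps a location map if the chosen
# bin above the peak is not truly empty.
import numpy as np

def hs_embed(img: np.ndarray, bits):
    """img: 2D uint8 array; bits: iterable of 0/1. Returns stego image + keys."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                          # capacity = hist[peak] bits
    zero = int(hist[peak + 1:].argmin() + peak + 1)    # emptiest bin above the peak
    out = img.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1              # shift to free the bin at peak+1
    flat = out.ravel()
    it = iter(bits)
    for i in np.flatnonzero(flat == peak):             # one bit per peak-valued pixel
        try:
            flat[i] += next(it)                        # 1 -> peak+1, 0 -> stays at peak
        except StopIteration:
            break
    return flat.reshape(img.shape).astype(np.uint8), (peak, zero)

stego, keys = hs_embed(np.random.randint(0, 255, (64, 64), dtype=np.uint8), [1, 0, 1])
print(keys)  # (peak, zero): the side information needed for extraction
```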
IEEE Python Image Processing Projects
Image processing is a method of converting an image into digital form and performing operations on it, in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. An image processing system usually treats images as two-dimensional signals and applies established signal-processing methods to them. For IEEE Python Image Processing Projects, click here.
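As a minimal illustration of this definition, the sketch below loads an image as a two-dimensional signal, applies standard operations to produce an enhanced image, and extracts a simple characteristic; the file names are placeholders.

```python
# Minimal image-in, image-or-characteristics-out example with Pillow.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("input.jpg").convert("L")                 # image as a 2-D signal
enhanced = img.filter(ImageFilter.GaussianBlur(radius=2))  # enhanced image output
edges = img.filter(ImageFilter.FIND_EDGES)                 # extracted characteristics
print("mean intensity:", np.asarray(img).mean())           # a simple image statistic
enhanced.save("enhanced.jpg")
edges.save("edges.jpg")
```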
Java Final year CSE projects in Bangalore
- Java Information Forensic / Block Chain B.E Projects
- Java Cloud Computing B.E Projects
- Java Big Data with Hadoop B.E Projects
- Java Networking & Network Security B.E Projects
- Java Data Mining / Web Mining / Cyber Security B.E Projects
- Java DataScience / Machine Learning B.E Projects
- Java Artificial Intelligence B.E Projects
- Java Wireless Sensor Network B.E Projects
- Java Distributed & Parallel Networking B.E Projects
- Java Mobile Computing B.E Projects
Android Final year CSE projects in Bangalore
- Android GPS, GSM, Bluetooth & GPRS B.E Projects
- Android Embedded System Application Projects for B.E
- Android Database Applications Projects for B.E Students
- Android Cloud Computing Projects for Final Year B.E Students
- Android Surveillance Applications B.E Projects
- Android Medical Applications Projects for B.E
MatLab Final year CSE projects in Bangalore
- MatLab Image Processing Projects for B.E Students
- MatLab Wireless Communication B.E Projects
- MatLab Communication Systems B.E Projects
- MatLab Power Electronics Projects for B.E Students
- MatLab Signal Processing Projects for B.E
- MatLab Geoscience & Remote Sensing B.E Projects
- MatLab Biomedical Projects for B.E Students