Multifocal Image Fusion Technique Using a Convolutional Neural Network
Abstract
Image fusion is defined as the process of collecting the important information from multiple source images and combining it into fewer images, usually a single one. This single image is more informative and accurate than any individual source image and contains all of the required information. Image fusion is one of the most important techniques in medical image processing; it involves developing software that integrates multiple sets of data acquired from the same site, and it is one of the newer approaches adopted for solving medical imaging problems and producing high-quality images that carry more information for interpretation, classification, segmentation, and compression.
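The basic idea can be illustrated with a toy pixel-wise fusion rule (an illustrative sketch only, not the method developed in this thesis): for each pixel, keep the value from whichever source image is locally sharper, with local variance used here as a crude focus measure. All function names and parameter values below are assumptions made for the illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=7):
    """Local variance as a crude sharpness/focus measure."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return mean_sq - mean ** 2

def fuse_by_focus(img_a, img_b, size=7):
    """Pixel-wise fusion: take each pixel from the source image that is
    locally sharper (i.e., has higher local variance)."""
    mask = local_variance(img_a, size) >= local_variance(img_b, size)
    return np.where(mask, img_a, img_b)
```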
In this thesis, a solution to the problems faced by different image types, such as multifocal and medical images, is obtained through a simulation process using brain magnetic resonance imaging (MRI). Fusion is performed on the basis of previously established fusion techniques such as convolutional neural networks (CNNs); in the experiment, a CNN algorithm is developed with the introduction of the Euclidean distance algorithm as part of the processing, making the implementation faster and more efficient than a traditional CNN. The objective fusion metrics commonly used in multimodal medical image fusion are implemented for a quantitative evaluation: Peak Signal-to-Noise Ratio (PSNR) and Processing Time (PT). The proposed system consists of three main phases: a pre-processing phase, a feature extraction phase, and a classification phase. The pre-processing phase enhances the images using digital image processing techniques. The feature extraction phase obtains features from the medical images based on the Histogram of Oriented Gradients (HOG) technique, applied to each medical image after transformation with a mean filter, an adaptive filter, the Discrete Wavelet Transform (DWT), and k-means clustering with Singular Value Decomposition (K-SVD).
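As a minimal sketch of two of the building blocks named above, the following code computes the PSNR metric and extracts HOG features after a simple pre-processing chain (mean filter followed by a one-level DWT). The filter sizes, wavelet choice, and HOG parameters are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter
from skimage.feature import hog

def psnr(reference, fused, max_val=255.0):
    """Peak Signal-to-Noise Ratio (PSNR) between a reference image and a fused image."""
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def extract_features(image):
    """Illustrative pipeline: mean filter -> one-level 2-D DWT -> HOG on the LL sub-band."""
    smoothed = uniform_filter(image.astype(np.float64), size=3)  # mean filter
    approx, _ = pywt.dwt2(smoothed, "haar")                      # keep approximation band
    return hog(approx, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```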
In the classification phase, a neural network technique combined with the Euclidean distance algorithm is used for training and testing as a classifier of the medical images. The network is trained with a set of training samples, and the resulting weights are then used to test the system's recognition ability on new test images. The image fusion system (IFS) was tested on a standard dataset containing brain MRI images with manual abnormality segmentation masks for the Fluid-Attenuated Inversion Recovery (FLAIR) sequence. The images were obtained from The Cancer Imaging Archive (TCIA) and comprise 3,740 brain image samples from 110 patients; each patient contributes 20-70 brain tumor segmentations in MRI images belonging to the low-grade glioma cohort of The Cancer Genome Atlas (TCGA), with at least a FLAIR sequence and genomic cluster data available. The results of the conducted experiments showed that using convolutional neural networks (CNNs) with the Euclidean distance algorithm for training and testing as a classifier of medical images achieves an accuracy of approximately 98.18%. Compared with the findings of other published works, this rate is considered high.
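To illustrate how a Euclidean distance rule can act as the final decision step over feature vectors, the following is a minimum-distance (nearest-neighbour) classifier sketch under assumed inputs; it is not the exact network described above, and all names are hypothetical.

```python
import numpy as np

class EuclideanClassifier:
    """Minimum-Euclidean-distance (nearest-neighbour) classifier over feature vectors."""

    def fit(self, features, labels):
        # features: (n_samples, n_features) array of training feature vectors
        self.features = np.asarray(features, dtype=np.float64)
        self.labels = np.asarray(labels)
        return self

    def predict(self, query):
        # Euclidean distance from the query vector to every training vector
        dists = np.linalg.norm(self.features - np.asarray(query, dtype=np.float64), axis=1)
        return self.labels[np.argmin(dists)]

# Example usage (hypothetical data):
# clf = EuclideanClassifier().fit(train_features, train_labels)
# predicted_label = clf.predict(test_feature_vector)
```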