Bio-Medical Materials and Engineering - Volume 24, issue 6
The aim of Bio-Medical Materials and Engineering is to promote human welfare and health. This interdisciplinary international journal publishes original research papers, review articles and brief notes on materials and engineering for biological and medical systems.
Articles in this peer-reviewed journal cover a wide range of topics, including, but not limited to: engineering applied to improving the diagnosis, therapy, and prevention of disease and injury, and to better substitutes for damaged or disabled human organs; studies of biomaterial interactions with the human body, biocompatibility, and interfacial and interaction problems; biomechanical behavior under biological and/or medical conditions; mechanical and biological properties of membrane biomaterials; cellular and tissue engineering, and physiological, biophysical, and biochemical bioengineering aspects; implant failure and degradation of implants; biomimetic engineering and materials, including system analysis, as support for aged people and for rehabilitation; bioengineering and materials technology applied to decontamination and environmental problems; biosensors, bioreactors, and bioprocess instrumentation and control systems; applications to food engineering; standardization problems of biomaterials and related products; assessment of the reliability and safety of biomedical materials and man-machine systems; and product liability of biomaterials and related products.
Abstract: While abdominal adipose tissue has been identified as an important pathomarker for the cardiometabolic syndrome in adults, the relationships between cardiometabolic risk factors and abdominal adipose morphology or physical performance levels have not been examined in children with obesity. Therefore, the specific aim of this study was to investigate the relationships between risk factors (BMI and physical activity levels) and abdominal fat layers (subcutaneous, intra-abdominal, preperitoneal, and mesenteric fat thickness) in children with obesity. Thirty children with obesity (mean±SD = 10.0±4.5 yrs; 9 girls; BMI > 20) underwent tests of physical performance (curl-ups, sit and reach, push-ups, and a 400-m run), ultrasound measurement of abdominal fat thickness, blood pressure, and oxygen consumption. Pearson correlation analysis showed significant correlations, ranging from -0.523 to 0.898, between intra-abdominal adipose tissue thickness, cardiometabolic risk factors (BMI, blood pressure, heart rate), and the curl-up physical performance test. In conclusion, the present study provides compelling evidence that intra-abdominal adipose tissue morphology is associated with BMI, physical performance, and, most importantly, cardiometabolic risk factors (blood pressure and heart rate), which eventually contribute to the development of the cardiometabolic syndrome in adulthood.
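The study's Pearson analysis can be sketched with the textbook sample-correlation formula; the fat-thickness and blood-pressure numbers below are invented for illustration, not the study's data:

```python
import math

def pearson_r(x, y):
    # Sample Pearson correlation coefficient between two equal-length lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: intra-abdominal fat thickness (mm) vs. systolic BP (mmHg).
fat = [12.1, 15.3, 9.8, 18.2, 14.0]
bp = [112, 121, 105, 130, 118]
print(round(pearson_r(fat, bp), 3))
```

A value near 1 or -1 indicates the kind of strong linear association the abstract reports for intra-abdominal fat and blood pressure.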
Abstract: Ultrasound elastography has been widely applied in clinical diagnosis. To produce high-quality elastograms, displacement estimation is important for generating a fine displacement map from the original radio-frequency signals. Traditional displacement estimation methods, such as the cross-correlation method and phase-zero estimation, are based on local information from signal pairs. However, tissue movement during a realistic elasticity process is nonlocal, because the compression comes from the surface. Recently, regularized cost functions have been broadly used in ultrasound elastography. In this paper, we tested the use of analytic minimization of an adaptive regularized cost function, a combination of different regularized cost functions, to correct the displacement estimation calculated by the cross-correlation method directly or by lateral displacement guidance. We demonstrate that the proposed method exhibits obvious advantages in imaging quality, with higher elastographic signal-to-noise and contrast-to-noise ratios in simulation and phantom experiments, respectively.
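The cross-correlation displacement estimation that the method refines can be sketched as an exhaustive search over integer lags; the RF windows below are hypothetical:

```python
def best_lag(pre, post, max_lag):
    # Exhaustive search over integer lags: the displacement estimate is the
    # lag maximizing the cross-correlation of the two RF windows.
    def xcorr(lag):
        return sum(pre[i] * post[i + lag]
                   for i in range(len(pre)) if 0 <= i + lag < len(post))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# Hypothetical pre-/post-compression RF windows, shifted by 3 samples.
pre = [0, 1, 4, 1, 0, 0, 0, 0, 0, 0]
post = [0, 0, 0, 0, 1, 4, 1, 0, 0, 0]
print(best_lag(pre, post, 5))
```

Real implementations interpolate between integer lags for sub-sample precision; this sketch only shows the matching principle that the regularized cost function then corrects.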
Abstract: Backscatter and attenuation parameters are not easily measured in clinical applications due to tissue inhomogeneity in the region of interest (ROI). A least squares method (LSM) that fits the echo signal power spectra from an ROI to a 3-parameter tissue model was used to obtain attenuation coefficient images of fatty liver. Since the attenuation value of fat is higher than that of normal liver parenchyma, a reasonable threshold was chosen to evaluate the fatty proportion in fatty liver. Experimental results using clinical data of fatty liver illustrate that the least squares method can obtain accurate attenuation estimates. The attenuation values proved to have a positive correlation with the fatty proportion, which can be used to evaluate the severity of fatty liver.
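The abstract does not give the 3-parameter tissue model, but the least-squares step can be illustrated with the one-regressor closed form, e.g. fitting the decay of the log power spectrum against frequency, whose slope relates to the attenuation coefficient; all numbers are hypothetical:

```python
def lstsq_line(x, y):
    # Ordinary least squares fit y ~ a*x + b (closed form, one regressor).
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical: log power-spectrum values (dB) at frequencies (MHz);
# a steeper negative slope would indicate higher attenuation.
freq = [2.0, 3.0, 4.0, 5.0]
log_power = [-1.1, -2.0, -3.1, -3.9]
slope, intercept = lstsq_line(freq, log_power)
print(round(slope, 3), round(intercept, 3))
```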
Keywords: Attenuation imaging, least squares method, fatty liver, quantitative ultrasound
Abstract: Currently, placental maturity staging is mainly based on the subjective observation of the physician. To address this issue, a new method is proposed for automatic staging of placental maturity based on B-mode ultrasound images. Because variations in placental images are small, a dense descriptor is utilized in place of sparse descriptors to boost performance. The densely sampled DAISY descriptor is investigated for its demonstrated scale- and translation-invariant properties. Moreover, the extracted dense features are encoded with the vector of locally aggregated descriptors (VLAD) for a further performance boost. The experimental results demonstrate an accuracy of 0.874, a sensitivity of 0.996, and a specificity of 0.874 for placental maturity staging, and show that the dense features outperform sparse features.
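VLAD encoding can be sketched as follows: assign each dense descriptor to its nearest codebook center, accumulate the residuals per center, then L2-normalize the concatenation. This is a minimal sketch, not the paper's implementation:

```python
def vlad_encode(descriptors, codebook):
    # VLAD: for each codebook center, accumulate residuals (x - c) of the
    # descriptors assigned to it, then concatenate and L2-normalize.
    d = len(codebook[0])
    acc = [[0.0] * d for _ in codebook]
    for x in descriptors:
        # Nearest center by squared Euclidean distance.
        k = min(range(len(codebook)),
                key=lambda j: sum((xi - ci) ** 2
                                  for xi, ci in zip(x, codebook[j])))
        for i in range(d):
            acc[k][i] += x[i] - codebook[k][i]
    flat = [v for row in acc for v in row]
    norm = sum(v * v for v in flat) ** 0.5 or 1.0
    return [v / norm for v in flat]

# Toy example: two 2-D descriptors, two codebook centers.
print(vlad_encode([[1.0, 0.0], [9.0, 10.0]], [[0.0, 0.0], [10.0, 10.0]]))
```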
Abstract: The blood-brain barrier (BBB) can be opened locally, noninvasively, and reversibly by low-frequency focused ultrasound (FUS) in the presence of microbubbles. In this study, Evans blue (EB) dye extravasation across the BBB was enhanced by 1 MHz FUS at an acoustic pressure of 0.35 MPa in the presence of microbubbles at a clinically comparable dosage. The spatial distribution of EB extravasation was visualized using fluorescence imaging. In general, the center of the BBB disruption area showed a more enhanced fluorescence signal than the surrounding region; however, EB dye deposition was heterogeneous within the center region. The findings of this study indicate the potential use of fluorescence imaging to evaluate the delivery of large molecules across the BBB.
Abstract: One of the major problems in computer-aided pulmonary nodule detection in chest radiographs is a high false-positive (FP) rate. To overcome this problem, a new method based on the MTANN (Massive Training Artificial Neural Network) is proposed in this paper. An MTANN comprises a multi-layer neural network in which a linear function, rather than a sigmoid function, is used as the activation function in the output layer. In this work, a mixture of multiple MTANNs was employed rather than a single MTANN. First, 50 MTANNs for 50 different types of FPs were prepared. Then, several effective MTANNs with higher performance were selected to construct the MTANN mixture. Finally, the outputs of the multiple MTANNs were combined by a mixing neural network to reduce the various types of FPs. The performance of this MTANN mixture in FP reduction was validated on three different versions of commercial CAD software with a validation database of 52 chest radiographs. Experimental results demonstrate that the proposed MTANN approach is useful for reducing FPs in different CAD software for detecting pulmonary nodules in chest radiographs.
Keywords: False positive reduction, mixture of MTANNs, commercial CAD
Abstract: To address the lack of 3D spatial information in digital radiography of a patient's femur, a pose estimation method based on 2D–3D rigid registration is proposed in this study. The method uses two digital radiography images to realize preoperative 3D visualization of a fractured femur. Compared with purely digital radiography or computed tomography imaging diagnostic methods, the proposed method has the advantages of low cost, high precision, and minimal harmful radiation. First, stable matching point pairs in the frontal and lateral images of the patient's femur and the universal femur are obtained using the Scale Invariant Feature Transform method. Then, the 3D pose estimation registration parameters of the femur are calculated using the Iterative Closest Point (ICP) algorithm. Finally, the deviation between the six-degrees-of-freedom parameters calculated by the proposed method and the preset posture parameters is used to evaluate registration accuracy. After registration, the rotation error is less than 1.5° and the translation error is less than 1.2 mm, which indicates that the proposed method has high precision and robustness. The proposed method provides 3D image information for effective preoperative orthopedic diagnosis and surgical planning.
Abstract: Magnetic detection electrical impedance tomography (MDEIT) is an imaging modality that aims to reconstruct the cross-sectional conductivity distribution of a volume from the magnetic flux density surrounding the object. The MDEIT inverse problem is inherently ill-posed, necessitating the use of regularization. The most commonly used L2-norm regularizations generate the minimum-energy solution, which blurs the sharp variations of the reconstructed image. Consequently, this paper presents total variation (TV) regularization to preserve the discontinuities and piecewise constancy of the MDEIT reconstructed image. The primal-dual interior point method (PD-IPM) is employed to minimize the TV penalty. The proposed method is validated on simulated MDEIT data. Compared with L2-norm regularization, the results show that the TV-regularized algorithm produces sharper images and is more robust to noise. The TV-regularized algorithm preserves local smoothness and piecewise constancy, leading to improved localization in the reconstructed conductivity images in MDEIT.
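The contrast between L2 and TV regularization can be seen by comparing the two penalties on a sharp edge versus a smooth ramp with the same total rise: TV charges both the same, while L2 strongly prefers the ramp, which is why L2 reconstructions blur sharp conductivity changes. A minimal sketch:

```python
def grad_mags(img):
    # Forward-difference gradient magnitudes of a 2-D image (list of rows).
    h, w = len(img), len(img[0])
    g = []
    for i in range(h):
        for j in range(w):
            dx = img[i][j + 1] - img[i][j] if j + 1 < w else 0.0
            dy = img[i + 1][j] - img[i][j] if i + 1 < h else 0.0
            g.append((dx * dx + dy * dy) ** 0.5)
    return g

def tv(img):   # TV penalty: sum of gradient magnitudes (edge-preserving)
    return sum(grad_mags(img))

def l2(img):   # L2 (Tikhonov-style) penalty: sum of squared gradient magnitudes
    return sum(v * v for v in grad_mags(img))

step = [[0, 0, 1, 1]] * 2          # one sharp edge
ramp = [[0, 1 / 3, 2 / 3, 1]] * 2  # same total rise, spread out
print(tv(step), tv(ramp))   # identical: TV tolerates the discontinuity
print(l2(step), l2(ramp))   # L2 penalizes the sharp edge far more
```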
Keywords: Magnetic detection electrical impedance tomography, inverse problem, regularization, total variation, primal-dual interior point method
Abstract: A generalized relative quality (RQ) assessment scheme based on Bayesian inference theory is proposed here; it makes use of full-reference (FR) algorithms when the quality of homogeneous medical images must be evaluated. Each FR algorithm is taken as a kernel representing the level of quality. Although various kernels generate indices of different orders of magnitude, a normalization process rationalizes the quality index to within 0 and 1, where 1 represents the highest quality and 0 the lowest. To validate the performance of the proposed scheme, a series of reconstructed susceptibility-weighted imaging images was collected, each with its own subjective scale. Both the experimental results and an ROC analysis show that the RQ obtained from the proposed scheme is consistent with subjective evaluation.
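The normalization step that maps each kernel's raw scores into [0, 1] can be as simple as min-max rescaling; the paper's exact normalization is not specified, so this is only an illustrative sketch:

```python
def normalize(scores):
    # Min-max rescaling of one kernel's raw quality scores to [0, 1],
    # so indices from kernels with different magnitudes become comparable.
    lo, hi = min(scores), max(scores)
    span = hi - lo or 1.0   # guard against constant scores
    return [(s - lo) / span for s in scores]

# Hypothetical raw outputs of one FR kernel over a set of images.
print(normalize([2.0, 4.0, 6.0]))
```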
Abstract: Sleep apnea is often diagnosed using an overnight sleep test called polysomnography (PSG). Unfortunately, though it is the gold standard of sleep disorder diagnosis, PSG is time consuming, inconvenient, and expensive. Many researchers have tried to ameliorate this problem by developing other reliable methods, such as using electrocardiography (ECG) as the observed signal source. Respiratory rate interval, ECG-derived respiration, and heart rate variability analysis have been studied recently as means of detecting apnea events from the ECG during normal sleep, but these methods have performance weaknesses. Thus, the aim of this study is to classify subjects as normal or apneic based on a single-channel ECG measurement during regular sleep. In the proposed approach, the ECG is decomposed into five levels using wavelet decomposition to determine the detail coefficients (D3–D5) of the signal. Approximately 15 features were extracted from every minute of ECG. Principal component analysis and a support vector machine are used for feature dimension reduction and classification, respectively. In a classification performed on a data set of thirty-five patients, the proposed minute-to-minute classifier achieved a specificity of 95.20%, a sensitivity of 92.65%, and a subject-based classification accuracy of 94.3%. The proposed system can thus serve as a basis for the future development of sleep apnea screening tools.
Keywords: Apnea, wavelet decomposition, principal component analysis, support vector machine, electrocardiogram
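The multi-level wavelet decomposition step can be illustrated with the Haar wavelet (the abstract does not name the mother wavelet used); repeated application yields the detail coefficient sequences D1..Dn, from which features like those in the study would be extracted:

```python
def haar_step(signal):
    # One level of the Haar DWT: pairwise averages (approximation) and
    # pairwise differences (detail), each scaled by 1/sqrt(2).
    s = 2 ** -0.5
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def decompose(signal, levels):
    # Repeatedly split the approximation; details[k] is level k+1 (D1..Dn).
    # Signal length must be divisible by 2**levels.
    details = []
    for _ in range(levels):
        signal, d = haar_step(signal)
        details.append(d)
    return signal, details

# Toy signal: a low-frequency step, so fine-scale details are zero.
approx, details = decompose([1, 1, 1, 1, 5, 5, 5, 5], 3)
print(details)
```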
Abstract: Feature extraction is a crucial aspect of computer-aided arrhythmia diagnosis using the electrocardiogram (ECG). A location, width and magnitude (LWM) model is proposed for extracting the features of each wave in the ECG. The model is a series of Gaussian functions in which three parameters (the expected value, variance, and amplitude) are applied to approximate the P wave, QRS complex, and T wave. Features such as the P–Q and S–T intervals are then easily obtained. A mixed approach is presented for estimating the parameters from a real ECG signal. To illustrate the model's advantages, the extracted parameters, combined with R–R intervals, are fed to three classifiers for arrhythmia diagnosis. Two kinds of arrhythmia, premature ventricular contraction (PVC) heartbeats and atrial premature complex (APC) heartbeats, are distinguished from normal beats using data from the MIT–BIH arrhythmia database. The results demonstrate that using these parameters yields more accurate and universal arrhythmia diagnoses.
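The LWM model's Gaussian parameterization can be sketched directly; the beat parameters below are hypothetical illustrations, not fitted values from the paper:

```python
import math

def lwm_wave(t, location, width, magnitude):
    # One ECG wave modeled as a Gaussian: location = expected value,
    # width = standard deviation, magnitude = amplitude.
    return magnitude * math.exp(-((t - location) ** 2) / (2 * width ** 2))

def ecg_model(t, waves):
    # An ECG beat approximated as a sum of Gaussian waves (e.g. P, QRS, T).
    return sum(lwm_wave(t, *w) for w in waves)

# Hypothetical beat: P wave, QRS (one tall narrow wave), T wave.
# Each tuple is (location in s, width in s, magnitude in mV).
beat = [(0.15, 0.02, 0.1), (0.35, 0.01, 1.0), (0.55, 0.04, 0.3)]
print(round(ecg_model(0.35, beat), 3))   # near the QRS peak amplitude
```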
Abstract: Information regarding motion, strain, and synchronization is important for cardiac diagnosis and therapy. Extraction of such information from ultrasound images remains an open problem. In this paper, a novel method is proposed to extract the boundaries of the left ventricle and track them in ultrasound image sequences. The initial detection of the boundaries is performed by an active shape model scheme; the boundaries are subsequently refined using local variance information from the images. The main contribution of this paper is the formulation of a new boundary tracking algorithm using the ant colony optimization technique. Experiments conducted on simulated image sequences and real cardiac ultrasound image sequences show positive and promising results.
Keywords: Active shape model, image segmentation, boundary tracking, ant colony optimization, motion estimation
Abstract: Steady-state visual evoked potentials (SSVEP) are the visual system's responses to a repetitive visual stimulus flickering at a constant frequency, and they are of great importance in the study of brain activity using scalp electroencephalography (EEG) recordings. However, the influence of the reference on the investigation of SSVEP has generally not been considered in previous work. In this study, a new approach combining canonical correlation analysis with the infinite reference (ICCA) was proposed to enhance the accuracy of frequency recognition in SSVEP recordings. Compared with the widely used periodogram method (PM), ICCA achieves higher recognition accuracy when extracting frequencies within a short time span. Furthermore, the recognition results suggest that ICCA is a very robust tool for studying SSVEP-based brain-computer interfaces (BCI).
Keywords: Steady-state visual evoked potentials, canonical correlation analysis with infinite reference, frequency recognition, periodogram, brain computer interface
Abstract: Continuous monitoring of stroke volume (SV) or cardiac output (CO) has long been the subject of numerous studies. The majority of existing methods are calibration-dependent, requiring invasive measurements of CO to initialize the estimation algorithms, which limits their application in the clinical setting. In the present study, a new calibration-free method aimed at home-based use was developed that allows noninvasive estimation of SV from oscillometric signals measured at the wrist. The estimation equation was constructed based on the PRAM method, with significant modifications to incorporate more patient-specific information. Furthermore, the estimation equation was optimized on clinical data acquired from 96 patients (the ‘Training’ group) to obtain the best agreement between estimated SV and echocardiographic SV. The resulting estimation equation was then applied directly to another patient group (the ‘Testing’ group) to examine its validity. The results demonstrate that our estimates correlated closely with the measurements in both patient groups. In addition to being noninvasive and calibration-free, the proposed method can be fully automated, which may be valuable for the future development of home-based cardiac monitoring systems.
Abstract: Recently, the integration of other electrophysiological signals with the electroencephalogram (EEG) has become an effective approach to improving the practicality of brain-computer interface (BCI) systems; such systems are referred to as hybrid BCIs. In this paper, a hybrid BCI was designed by combining EEG with electrooculogram (EOG) signals and tested in a target selection experiment. Gaze direction from the EOG and the event-related (de)synchronization (ERD/ERS) induced by motor imagery in the EEG were simultaneously detected as the output of the BCI system. The target selection mechanism was based on the synthesis of gaze direction and ERD activity: when ERD activity was detected, the target corresponding to the gaze direction was selected; without ERD activity, no target was selected, even when the subject's gaze was directed at the target. With this mechanism, the operation of the BCI system is more flexible and voluntary. The accuracy and completion time of the target selection tasks during online testing were 89.3% and 2.4 seconds, respectively. These results show the feasibility and practicality of this hybrid BCI system, which can potentially be used in the rehabilitation of disabled individuals.
Abstract: Generally, an alcoholic's brain shows explicit damage. However, the correlation between the topological changes of the brain networks during cognitive tasks and this damage is still unclear. Scalp electrodes and synchronization likelihood (SL) were applied to construct the EEG functional networks of 28 alcoholics and 28 healthy volunteers. Graph-theoretic analysis showed that, during cognitive tasks and compared with the healthy control group, the brain networks of alcoholics had smaller clustering coefficients in the β1 band, shorter characteristic path lengths, and increased global efficiency, but similar small-world properties. The abnormal topological structure in alcoholics may be related to local functional brain damage and to a compensation mechanism adopted to complete the tasks. This conclusion provides a new perspective on alcohol-related brain damage.
Keywords: alcoholic, EEG, brain functional network, graph theory
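The clustering coefficient reported in the graph-theoretic analysis can be computed per node from a 0/1 adjacency matrix; a minimal sketch on a toy 4-node network (real EEG networks would have one node per electrode):

```python
def clustering_coefficient(adj, v):
    # Local clustering coefficient of node v in an undirected graph given as
    # a 0/1 adjacency matrix: the fraction of v's neighbor pairs that are
    # themselves connected (i.e. triangles around v).
    nbrs = [u for u, linked in enumerate(adj[v]) if linked and u != v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(adj[a][b] for i, a in enumerate(nbrs) for b in nbrs[i + 1:])
    return 2 * links / (k * (k - 1))

# Toy network: nodes 0-2 form a triangle, node 3 hangs off node 2.
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
print([clustering_coefficient(adj, v) for v in range(4)])
```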
Abstract: Feature extraction and processing of electroencephalograph (EEG) signals is one of the most difficult and important parts of the brain-computer interface (BCI) research field. EEG signals are generally unstable and complex and have a low signal-to-noise ratio, which makes them difficult to analyze and process. To solve this problem, this paper decomposes EEG signals with the empirical mode decomposition (EMD) algorithm, extracts characteristic values from the major intrinsic mode function (IMF) components, and then classifies them with the fuzzy C-means (FCM) method. A comparison is also made between the proposed method and several current EEG classification methods. Experimental results show that the classification accuracy on the EEG signals of the second BCI competition (2003) reaches 78%, which is superior to that of the comparative EEG classification methods.
Abstract: Tractography based on diffusion tensor imaging (DTI) provides the only means of mapping white matter fibers in vivo. Because of its wealth of applications, diffusion MRI tractography is gaining importance in clinical and neuroscience research. This paper presents a novel brain white matter fiber reconstruction method based on the snake model, which minimizes an energy function composed of both external and internal energy. The internal energy represents the assembly of interaction potentials between connected segments, whereas the external energy represents the differences between predicted and measured DTI signals. In a comparison with other tractography algorithms in the Fiber Cup test, the present method performed better than the majority of the other methods, ranking third among the ten available, which demonstrates that it can accurately reconstruct brain white matter fibers.
Keywords: Diffusion tensor imaging, brain white matter, fiber tracking, snake model, energy minimization
Abstract: This paper reviews the meaning of the statistical indices and properties of complex network models and their physiological interpretation. By analyzing existing problems and construction strategies, this paper attempts to construct complex brain networks from a different point of view: clustering first and constructing the brain network second. A clustering-guided (or clustering-led) construction strategy for complex brain networks is proposed. The research focuses on the task-induced brain network. To discover different networks in a single run, a combined-clusters method is applied. Afterwards, a complex local brain network is formed by applying a complex network method to voxels. On a real test dataset, the network was found to have small-world characteristics and no significant scale-free properties. Meanwhile, some key bridge nodes and their characteristics were identified in the local network by calculating the betweenness centrality.
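The betweenness centrality used to identify bridge nodes counts the fraction of shortest paths passing through a node. A brute-force sketch suitable only for small graphs (not the paper's implementation; production code would use Brandes' algorithm):

```python
from collections import deque
from itertools import permutations

def shortest_paths(adj, s, t):
    # All shortest paths from s to t by breadth-first path enumeration.
    best, paths = None, []
    q = deque([[s]])
    while q:
        p = q.popleft()
        if best is not None and len(p) > best:
            break
        v = p[-1]
        if v == t:
            best = len(p)
            paths.append(p)
            continue
        for u in adj[v]:
            if u not in p:
                q.append(p + [u])
    return paths

def betweenness(adj, v):
    # Sum over ordered pairs (s, t), s != t != v, of the fraction of
    # shortest s-t paths that pass through v.
    total = 0.0
    nodes = [n for n in adj if n != v]
    for s, t in permutations(nodes, 2):
        ps = shortest_paths(adj, s, t)
        if ps:
            total += sum(v in p for p in ps) / len(ps)
    return total

# Toy path graph a - b - c: every a<->c shortest path crosses b.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
print(betweenness(adj, 'b'), betweenness(adj, 'a'))
```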
Abstract: A statistical model is essential for visual image reconstruction, since a constraint-free model may overfit the training data and generalize poorly. In this study, we investigate the sparsity of the distributed patterns of visual representation and introduce a suitable sparse model for the visual image reconstruction experiment. We use elastic net (EN) regularization to model the sparsity of the distributed patterns for local decoder training. We also investigate the relationship between the sparsity of the visual representation and sparse models with different parameters. Our experimental results demonstrate that the sparsity needed by visual reconstruction models differs from the sparsest one, and that the l2-norm regularization in the EN model improves not only the robustness of the model but also the generalization performance of the learning results. We therefore conclude that a sparse learning model for visual image reconstruction should reflect the sparsity of visual perceptual experience, with a solution of high but not the highest sparsity, as well as some robustness.
Keywords: sparse learning model, visual image reconstruction, sparsity, elastic net
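The elastic net penalty combines the l1 (lasso) and l2 (ridge) norms; a sketch using a common parameterization (the convention scikit-learn uses; the paper's exact weighting is not specified):

```python
def elastic_net_penalty(w, alpha, l1_ratio):
    # A common elastic net parameterization:
    #   alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio) / 2 * ||w||_2^2)
    # l1_ratio = 1 recovers the lasso; l1_ratio = 0 recovers ridge.
    l1 = sum(abs(v) for v in w)
    l2 = sum(v * v for v in w)
    return alpha * (l1_ratio * l1 + (1 - l1_ratio) / 2 * l2)

w = [0.0, 1.5, 0.0, -0.5]   # hypothetical decoder weights (mostly zero)
print(elastic_net_penalty(w, alpha=0.1, l1_ratio=0.5))
```

The l2 term is what the abstract credits with improved robustness and generalization: unlike the pure lasso, it keeps correlated voxels grouped rather than arbitrarily picking one.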
Abstract: This study investigated differences in neuronal activation under two conditions: driving only and distracted driving. Driving and distraction tasks were performed using a magnetic resonance (MR)-compatible driving simulator with a driving wheel and pedal. The experiment consisted of three blocks, each with a Rest phase (1 min) and a Driving phase (2 min). During the Rest phase, drivers were instructed to simply look at the stop screen without performing any driving tasks. During the Driving phase, each driver was required to drive at 110 km/h under two conditions: driving only and driving while performing additional distraction tasks. The results show that the precuneus, inferior parietal lobule, supramarginal gyrus, middle frontal gyrus, cuneus, and declive are less activated in distracted driving than in driving only. These regions are responsible for spatial perception, spatial attention, visual processing, and motor control. However, the cingulate gyrus and sub-lobar regions (lentiform nucleus and caudate), which are responsible for error monitoring and the suppression of unnecessary movement, show increased activation during distracted driving compared with driving only.
Abstract: Graph theory is widely used to represent and characterize brain connectivity networks, as is machine learning for classifying groups based on features extracted from images. Many of these studies use different techniques for preprocessing, correlations, features, or algorithms. This paper proposes an automatic tool that performs a standard processing pipeline on images from a magnetic resonance imaging (MRI) machine. The pipeline includes preprocessing, building a graph per subject with different correlations and atlases, extracting relevant features according to the literature, and finally providing a set of machine learning algorithms that produce results analyzable by physicians or specialists. To verify the pipeline, a set of images from prescription drug abusers and patients with migraine was used. The proper functioning of the tool was thereby demonstrated, with success rates of 87% and 92% depending on the classifier used.
Abstract: Multi-subject brain decoding is of great interest in the neuroscience community. However, due to the variability of activation patterns across brains, it is difficult to build an effective decoder using fMRI samples pooled from different subjects. In this paper, a hierarchical model is proposed to extract robust features for decoding. With feature selection for each subject treated as a separate task, a novel multi-task feature selection method is introduced. This method utilizes both complementary information among subjects and local correlations between brain areas within a subject. Finally, using fMRI samples pooled from all subjects, a linear support vector machine (SVM) classifier is trained to predict 2-D or 3-D stimulus-related images. The experimental results demonstrate the effectiveness of the proposed method.
Abstract: Functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) are sources of information for studying different pathologies. These tools allow the classification of subjects under study, analyzing, in this case, language-related functions in young patients with dyslexia. Images are obtained using a scanner while different tests are performed on the subjects. After processing the images, the areas activated by patients when performing the paradigms, and the anatomy of the tracts, were obtained. The main objective is to ultimately introduce a group of monocular-vision subjects, whose brain activation model is unknown; the classification helps to assess whether these subjects are more akin to dyslexic or control subjects. Machine learning techniques study systems that learn to perform non-linear classification through supervised or unsupervised training, or a combination of both. Once the machine has been trained, it is validated with subjects who were not included in the training stage. The results are presented in a user-friendly chart. Finally, a new tool for the classification of subjects with dyslexia and monocular vision was obtained (achieving a success rate of 94.8718% with the neural network classifier), which can be extended to further classifications.
Abstract: A study of the motor cortex during the programming, execution, and mental representation of voluntary movement is of great relevance, and its evaluation under conditions close to reality is necessary, given the close integration of the visuomotor, sensory feedback, and proprioceptive systems. As yet, however, a functional magnetic resonance imaging (fMRI) scanner that allows a human subject to maintain an erect stance, observe the surroundings, and keep the limbs free is still a dream. The need for a high field suggests a solenoid magnet geometry that forces an unnatural posture, which affects the results, particularly when the motor cortex is investigated. In contrast, for a motor functional study the scanner should allow the subject to sit or stand, with unobstructed sight and unimpeded movement. Two approaches to this problem are presented here. In the first, the field intensity in an open magnet is increased by lining the “back wall” of the cavity with a sheet of current: this boosts the field intensity at the cost of introducing a gradient, which has to be canceled by an opposite gradient. The second approach is an adaptation of the “double doughnut” architecture, in which the cavity widens at the center to provide additional room for the subject. The detailed design of this kind of structure has proven the feasibility of the solution.
Keywords: Functional magnetic resonance, cerebral activation, motor cortex, driving simulator, magnet design, MRI scanner
Abstract: Fractional anisotropy (FA) is currently an ideal index for reflecting white matter structure. Proton magnetic resonance spectroscopy (1H-MRS) is often used for noninvasive in vivo measurement of the concentrations of important neurochemicals. This study investigated the relationship between FA and metabolite concentrations by comparing 1H-MRS of the bilateral middle corona radiata in healthy adults. Diffusion tensor imaging (DTI) and 1H-MRS data were acquired from 31 healthy adults using a 3.0 T MR system. Subjects were divided into three groups: the total group (mean age = 42 years), the junior group (mean age = 29 years), and the senior group (mean age = 56 years). There was a negative correlation between FA and age in all three groups (r = -0.146, r = -0.204, r = -0.162; p < 0.05). The positive correlation of FA with the corresponding N-acetylaspartate (NAA) concentration was significant in all three groups (r = 0.339, r = 0.213, r = 0.430, respectively; p < 0.05). The positive correlation of FA with the corresponding NAA/Cr ratio was significant only in the total group and the junior group (r = 0.166, r = 0.305, respectively; p < 0.05). Combining 1H-MRS with DTI reveals the relationship between the structural and metabolic characteristics of white matter.
Keywords: Diffusion tensor imaging (DTI), magnetic resonance spectroscopy (MRS), white matter
Abstract: A crucial step in volume visualization is identifying the optimal transfer function, since it highlights and reveals the vital information and structure. The boundary of a volume is shared by the two materials that form it, which causes undesirable thickening and ambiguity of the boundary when explored with the traditional LH (Low and High) histogram. To address this issue, a modified LH histogram construction method is first introduced to intuitively and conveniently visualize the cardiac volume for user interaction. Subsequently, the f-LH histogram is presented to further identify and visualize each portion of the boundary accurately. An appropriate multidimensional transfer function generation scheme, using variables in f-LH space together with spatial information, is proposed for visualizing multi-boundary cardiac volume data.
Keywords: Interactive visualization, partial effect, multidimensional transfer function, multi-boundary data
Abstract: This paper describes the design and development of portable visible and near-infrared (NIR) imaging equipment for pre-clinical tests with small animals. The developed equipment is composed of a CCD camera, a focusing lens, an objective lens, an NIR band-pass filter and an NIR filter driving motor. NIR light is used for the imaging equipment because of its high penetration depth in biological tissue; NIR fluorescent agents are therefore available for chemical conjugation to targeting molecules in vivo. The equipment can provide a visible image, an NIR image and a merged image simultaneously. A communication system was established specifically to check the obtained images on a smart pad in real time. The system is less dependent on space and time than the conventional one.
Abstract: A phase retrieval method based on a two-step phase-shifting technique is introduced for quantitative phase imaging (QPI). By acquiring two measured interferograms and computing their sum and difference, the quantitative phase information can be retrieved directly. The method is illustrated both in theory and in a simulation experiment on a ball. The results of the simulation and of an experiment on a red blood cell show good agreement, demonstrating its applicability to the study of cells.
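The abstract does not spell out its algebra, but a common two-step phase-shifting scheme assumes a π/2 shift between the two interferograms and a known DC (background) term; under those assumptions the phase follows from a single arctangent. A minimal single-pixel sketch, with the π/2-shift and known-DC assumptions being ours rather than the paper's:

```python
import math

def recover_phase(i1, i2, dc):
    """Two-step phase-shifting retrieval, assuming
    i1 = dc + b*cos(phi) and i2 = dc + b*sin(phi) (a pi/2 shift).
    Subtracting the DC term and taking atan2 yields phi directly."""
    return math.atan2(i2 - dc, i1 - dc)

# Forward-simulate one pixel of the two interferograms, then invert.
dc, b, phi = 2.0, 0.8, 0.7
i1 = dc + b * math.cos(phi)
i2 = dc + b * math.sin(phi)
phi_rec = recover_phase(i1, i2, dc)
```

In practice the same arctangent is applied pixel-wise over the full interferogram arrays.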
Abstract: To improve the detection of micro-calcifications (MCs), this paper proposes an automatic MC detection system that makes use of the multi-fractal spectrum in digitized mammograms. The system is based on the principle that normal tissues possess certain fractal properties which change in the presence of MCs, and the multi-fractal spectrum is applied to reveal these properties. By quantifying the deviations between the multi-fractal spectra of normal tissues and of MCs, the system can identify MCs that alter the fractal properties and finally locate their position. The performance of the proposed system is compared with that of the leading automatic detection systems on a mammographic image database. Experimental results demonstrate that the proposed system is statistically superior to most of the compared systems.
Abstract: The development of content-based image retrieval (CBIR) systems for image archiving continues to be an important research topic. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. The present study examines the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems. The investigated algorithms rely on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and Gabor wavelets, all accepted as spatial methods. For the experiments, a database was built comprising hundreds of medical images such as brain, lung, sinus, and bone. The results obtained in this study show that queries based on statistics obtained from the GLCM are satisfactory. However, the Gabor wavelet was observed to be the most effective and accurate method.
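Of the spatial methods compared above, the GLCM is the simplest to state: it counts how often each pair of gray levels co-occurs at a fixed pixel offset, and texture statistics such as contrast are then read off the matrix. A minimal pure-Python sketch on a tiny 4-level image (the image and offset are illustrative):

```python
def glcm(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset (dy, dx)."""
    h, w = len(img), len(img[0])
    dy, dx = offset
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y][x]][img[y2][x2]] += 1   # count the co-occurring pair
    return m

def contrast(m):
    """Contrast statistic: sum of p(i,j) * (i-j)^2 over the normalized GLCM."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
g = glcm(img, levels=4)
c = contrast(g)   # 7/12 for this image and the (0, 1) offset
```

Production systems would compute several such statistics (contrast, correlation, energy, homogeneity) over multiple offsets and angles as the feature vector.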
Abstract: The goal of this study is to capture the quantitative optical features of degenerative finger joints based on x-ray-aided three-dimensional (3D) diffuse optical tomography (DOT). It is anticipated that the fused imaging technique can be applied to identifying significant differences between osteoarthritis (OA) and psoriatic arthritis (PA). For a case study, a total of 6 subjects were selected for examination of the distal interphalangeal (DIP) finger joints: 2 OA patients, 2 PA patients and 2 healthy subjects were examined clinically first, and their DIP finger joints were then scanned with the multimodality imaging method. Our findings suggest that the developed multimodality imaging approach may aid in distinguishing OA patients from PA patients and healthy controls, which is essential for better diagnosis and treatment of inflammatory arthritis in humans.
Abstract: An achromatic element eliminating only longitudinal chromatic aberration (LCA) while maintaining transverse chromatic aberration (TCA) is established for an eye model that incorporates the angle formed by the visual and optical axes. To investigate the impact of higher-order aberrations on vision, actual higher-order aberration data of human eyes at three typical levels are introduced into the eye model along the visual axis. Moreover, three kinds of individual eye models are established to investigate the respective impacts of higher-order aberrations, chromatic aberration (LCA+TCA), LCA and TCA on vision under photopic conditions. Results show that for most human eyes, the impact of chromatic aberration on vision is much stronger than that of higher-order aberrations, and the impact of LCA within chromatic aberration dominates. The impact of TCA is approximately equal to that of normal-level higher-order aberrations and can be ignored when LCA is present.
Abstract: Although considerable attention has been paid to the cognitive structure of humor, its emotional structure tends to be overlooked. Humor is often associated with the single emotion of mirth or amusement, while other aspects of its rich emotional structure are ignored. The purpose of the present study was to explore this structure by analyzing the content of a Taiwanese corpus of 204 ‘negative’ jokes to identify the basic emotion induced by each joke and its pattern of emotional shift. Additionally, the corpus might be used to compare emotional-reversal jokes (negative to positive emotion) and regular jokes (neutral to positive emotion) as an aid when preparing materials for functional Magnetic Resonance Imaging (fMRI) investigations of the neural substrates of humor. In terms of basic emotions, 82 fear jokes, 61 disgust jokes, 42 sadness jokes and 19 anger jokes were found. The most common type of emotional shift was from negative to positive, with the punch lines of 114 jokes providing relief from the negative emotion by either diverting attention away from it or dissolving it entirely.
Abstract: This paper aimed to evaluate the prognostic value of the maximum standardized uptake value (SUVmax) and metabolic tumor volume (MTV) of the primary tumor on 18F-FDG PET/CT scans in early-stage non-small cell lung cancer (NSCLC) patients without lymph node (LN) metastasis. Eighty NSCLC patients pathologically staged as T1N0 or T2N0 were included (M:F=50:30; mean age, 64.8 years). All patients had a preoperative 18F-FDG PET/CT scan and curative surgery. FDG uptake in the primary tumor was measured by SUVmax and by MTV with various SUV threshold values. SUVmax, MTV of the primary tumor, age, tumor size, histology and differentiation grade were analyzed for association with disease-free survival (DFS). The histology types included adenocarcinoma (n=58), squamous cell carcinoma (n=20), and others (n=2). Twenty-two (27.5%) of the 80 patients had a recurrence during follow-up, at a median time of 29.1 months. The median SUVmax was 5.26, and the median MTV2.5 was 2.2 cm3. Univariate analysis showed that higher SUVmax (>4), greater MTV (MTV2.5 >4 cm3), and non-squamous histology were significantly associated with shorter DFS (p=0.001, p=0.030 and p<0.001). In multivariate analysis, higher SUVmax (p=0.004) and adenocarcinoma histology (p=0.005) were associated with shorter DFS. Therefore, a high SUVmax (>4) of the primary tumor on the preoperative 18F-FDG PET/CT scan is an independent prognostic factor for shorter DFS in early-stage NSCLC.
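The two uptake measures above are straightforward to compute from a tumor's voxel values: SUVmax is the maximum voxel SUV, and MTV2.5 is the volume of voxels whose SUV exceeds a threshold of 2.5. A sketch with hypothetical voxel data (not from the study):

```python
def suvmax_and_mtv(suv_voxels, voxel_volume_ml, threshold=2.5):
    """SUVmax = maximum voxel SUV in the tumor VOI;
    MTV = total volume of voxels whose SUV exceeds the threshold
    (threshold 2.5 corresponds to the abstract's 'MTV2.5')."""
    suvmax = max(suv_voxels)
    mtv = sum(1 for v in suv_voxels if v > threshold) * voxel_volume_ml
    return suvmax, mtv

# Hypothetical tumor VOI: SUV values for ten 0.5-mL voxels.
voxels = [1.2, 2.0, 2.6, 3.1, 4.8, 5.3, 2.4, 2.9, 1.1, 3.5]
suvmax, mtv25 = suvmax_and_mtv(voxels, voxel_volume_ml=0.5)
# suvmax = 5.3; mtv25 = 3.0 mL (six voxels above 2.5)
```

With the study's cutoffs, this hypothetical tumor (SUVmax > 4) would fall in the poorer-prognosis group.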
Abstract: X-ray phase contrast computed tomography (CT) uses the phase shift that x-rays undergo when passing through matter, rather than their attenuation, as the imaging signal, and may provide better image quality for soft tissue and biomedical materials with low atomic number. Here a geometry-constraint-scan imaging technique for in-line phase contrast micro-CT is reported. It consists of two circular-trajectory scans with the x-ray detector at different positions, a phase projection extraction method based on Fresnel free-propagation theory, and the filtered back-projection reconstruction algorithm. This method removes the contact-detector scan and the pure-phase-object assumption of classical in-line phase contrast micro-CT. Consequently, it relaxes the experimental conditions and improves the image contrast. This work comprises a numerical study of the technique and its experimental verification using a biomedical composite dataset measured on an x-ray tube source micro-CT setup. The numerical and experimental results demonstrate the validity of the presented method. It will be of interest for a wide range of in-line phase contrast micro-CT applications in biology and medicine.
Abstract: To build a patient-specific respiratory motion model at a low dose, a novel method was proposed that uses a limited number of 3D lung CT volumes together with an external respiratory signal. 4D lung CT volumes were acquired from patients with in vitro labels on the upper abdominal surface. Meanwhile, the 3D coordinates of the in vitro labels were measured as external respiratory signals. A sequential correspondence between the 4D lung CT and the external respiratory signal was built using the distance correlation method, and the 3D displacement over time of every registration control point in the CT volumes was obtained by 4D lung CT deformable registration. A temporal fitting was performed for each registration control point's displacement and for the external respiratory signal in the anterior-posterior direction to draw their fitting curves. Finally, linear regression was used to fit the corresponding samples of the control point displacement fitting curves and the external respiratory signal fitting curve, completing the pulmonary respiration model. Compared to a B-spline-based method using the respiratory signal phase, the proposed method is highly advantageous: it offers comparable modeling accuracy and target modeling error (TME) while requiring 70% fewer 3D lung CT volumes. When using a similar amount of 3D lung CT data, the mean TME of the proposed method is smaller than that of PCA (principal component analysis)-based methods. The results indicate that the proposed method successfully strikes a balance between modeling accuracy and the number of 3D lung CT volumes.
Abstract: Positron Emission Tomography (PET) systems using detectors with Depth of Interaction (DOI) capabilities can achieve higher spatial resolution and better image quality than those without DOI. To date, most DOI methods developed are not cost-efficient for a whole-body PET system. In this paper, we present a DOI decoding method based on the flood map for a low-cost conventional block detector with four-PMT readout. Using this method, the DOI information can be extracted directly from the DOI-related crystal spot deformation in the flood map. GATE simulations were carried out to validate the method, confirming a DOI sorting accuracy of 85.27%. We therefore conclude that this method has the potential to be applied in conventional detectors to achieve a reasonable DOI measurement without dramatically increasing detector complexity or the cost of an entire PET system.
Keywords: Positron emission tomography, depth of interaction, GATE simulation
Abstract: Positron emission tomography (PET) has been widely used in the early diagnosis of tumors. Although the standardized uptake value (SUV) is a common diagnostic index for PET, it is affected by the size of the tumor. To explore how tumor size affects this diagnostic index, dynamic PET images were simulated to study the relationship between tumor size and the imaging diagnostic index. It was found that the SUV of the tumor region varied with scan time, and that the SUV was always lower than the true value of the tumor. Even larger deviations were found in the SUV as the tumor size was reduced. The diagnostic index SUVmax was more reliable than SUV, as it declined only when the volume of the tumor was less than 3 mm3. Therefore, the effect of tumor size on the SUV and SUVmax used as diagnostic indices in the early diagnosis of tumors should not be neglected.
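The SUV discussed above is, in its common body-weight-normalized form, the tissue activity concentration divided by the injected dose per unit body weight. A sketch with hypothetical numbers; the unit choices below are our assumptions, not taken from the abstract:

```python
def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight-normalized SUV:
    SUV = tissue activity concentration / (injected dose / body weight).
    kBq/mL divided by kBq/g is dimensionless, using the usual
    approximation that 1 mL of tissue weighs about 1 g."""
    dose_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    weight_g = body_weight_kg * 1000.0      # kg  -> g
    return activity_kbq_per_ml / (dose_kbq / weight_g)

# Hypothetical case: 5 kBq/mL uptake, 370 MBq injected, 70 kg patient.
s = suv(5.0, 370.0, 70.0)
```

The abstract's point is that for small tumors the measured activity concentration is biased low (partial volume effect), so the computed SUV underestimates the true value.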
Abstract: Extraction of lung tumors is a fundamental step for further quantitative analysis of the tumor, but is challenging for juxta-pleural tumors because of their adhesion to the pleurae. An automatic algorithm for segmentation of juxta-pleural tumors based on analysis of geometric and morphological features is proposed. Initially, the lung is extracted by thresholding using 2D Otsu's method. Next, a center point is suggested to find the starting point and endpoint of the outward-facing pleura. A model based on the variation of the incline angle is adopted to identify potentially affected regions and to fully segment juxta-pleural tumors. The results were compared with manual segmentation by two radiologists. Averaged over ten experimental datasets, the accuracy, calculated as the Dice index between the algorithm's results and those of the two radiologists, is 91.2%. This indicates that the proposed method has accuracy comparable to the experts' (the inter-observer variability is 92.4%) but requires much less manual interaction. The proposed algorithm can be used for segmenting juxta-pleural tumors from CT images, and can help improve diagnosis, pre-operative planning and therapy response evaluation.
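The Dice index used above to compare the algorithm against the radiologists' manual segmentations is twice the overlap of the two masks divided by their total size. A minimal sketch with masks represented as sets of voxel coordinates (a toy representation; real pipelines use image arrays):

```python
def dice(mask_a, mask_b):
    """Dice index between two binary masks given as coordinate sets:
    2 * |A intersect B| / (|A| + |B|).  1.0 means perfect agreement."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

# Toy 2D masks: three shared pixels, one unique pixel each.
a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 0), (1, 1), (2, 1)}
d = dice(a, b)   # 2*3 / (4+4) = 0.75
```

The abstract's 91.2% thus means the automatic masks overlap the manual ones nearly as well as the two radiologists' masks overlap each other (92.4%).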
Abstract: In medical image segmentation, manual segmentation is considered both labor- and time-intensive, while automated segmentation often fails to segment anatomically intricate structures accurately. Interactive segmentation can tackle the shortcomings of previous segmentation approaches through user intervention. To better reflect user intention, the development of suitable editing functions is critical. In this paper, we propose interactive knee cartilage extraction software with three important features: intuitiveness, speed, and convenience. The segmentation is performed using a multi-label random walks algorithm. Our segmentation software is simple to use, intuitive for both normal and osteoarthritic image segmentation, and efficient, requiring only two thirds of the time of manual segmentation. Future work will extend this software to three-dimensional segmentation and quantitative analysis.
Keywords: Interactive segmentation, Knee cartilage, Magnetic resonance image, Random walks, User interface
Abstract: Separation of the femoral head and acetabulum is one of the main difficulties in the diseased hip joint due to deformed shapes and the extreme narrowness of the joint space. Improving the segmentation accuracy is the key issue for existing automatic or semi-automatic segmentation methods. In this paper, we propose a new method to improve the accuracy of the segmented acetabulum using surface fitting techniques, which essentially consists of three parts: (1) design of a surface iteration process to obtain an optimized surface; (2) replacement of ellipsoid fitting with two-phase quadric surface fitting; (3) introduction of a normal matching method and an optimization region method to capture edge points for the fitted quadric surface. Furthermore, this work used in vivo CT data sets of 40 actual patients (79 hip joints). Test results for these clinical cases show that: (1) the average error of the quadric surface fitting method is 2.3 mm; (2) the accuracy ratio of automatically recognized contours is larger than 89.4%; (3) the error ratio of section contours is less than 10% for acetabulums without severe malformation and less than 30% for acetabulums with severe malformation. Compared with similar methods, the accuracy of our method, which is applied in a software system, is significantly enhanced.
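The paper's two-phase quadric surface fitting is not specified in detail here. As a general illustration of least-squares quadric fitting, the sketch below fits a sphere (the simplest quadric relevant to an acetabular cup) to synthetic points, using the standard linearization of the sphere equation; the synthetic geometry is an assumption for demonstration only:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit (a simple quadric): rewrite
    |p - c|^2 = r^2  as  2*c.p + k = |p|^2  with  k = r^2 - |c|^2,
    which is linear in the unknowns (c, k)."""
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic acetabulum-like cap: points on a hemisphere of radius 25 mm.
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi / 2, 200)
phi = rng.uniform(0, 2 * np.pi, 200)
c_true, r_true = np.array([10.0, -5.0, 3.0]), 25.0
pts = c_true + r_true * np.stack(
    [np.sin(theta) * np.cos(phi),
     np.sin(theta) * np.sin(phi),
     np.cos(theta)], axis=1)
center, radius = fit_sphere(pts)   # recovers c_true and 25 mm
```

A general quadric fit works the same way: each surface equation is rewritten so the unknown coefficients enter linearly, then solved with least squares over the captured edge points.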
Abstract: Lung vessels often interfere with the detection of lung nodules. In this paper, a novel computer-aided lung nodule detection scheme based on vessel segmentation is proposed. The paper describes an active contour model that combines the image region mean gray value with image edge energy; it is used to segment and remove lung vessels. A selective shape filter based on the Hessian matrix is used to detect suspicious nodules and remove omitted lung vessels. Density, shape and position features of suspicious nodules are then extracted, and a Rule-Based Classification (RBC) method is used to identify true positive nodules. In the experimental results, the detection sensitivity is about 90% with a false-positive rate of 1 per scan.
Abstract: Plaque assaying, the measurement of the number, diameter, and area of plaques in a Petri dish image, is a standard procedure for gauging the concentration of phage in biology. This paper presents a novel and effective method for automatic plaque assaying, comprising the following steps. In the training stage, after pre-processing the images for noise suppression, an initial training set was prepared by sampling positive (with a plaque at the center) and negative (plaque-free) patches from the training images and extracting HOG features from each patch. A linear SVM classifier was trained with a self-learnt supervised learning strategy to avoid possible missed detections. Specifically, the training set of positive and negative patches sampled manually from the training images was used to train a preliminary classifier, which exhaustively searched the training images to predict labels for the unlabeled patches. The mislabeled patches were evaluated by experts and relabeled, and all newly labeled patches with their corresponding HOG features were added to the initial training set to train the final classifier. In the testing stage, a sliding-window technique was first applied to the unseen image to obtain HOG features, which were input to the classifier to predict whether each patch was positive. Second, a locally adaptive Otsu method was performed on the positive patches to segment the plaques. Finally, after removing outliers, the parameters of the plaques were measured from the segmented plaques. The experimental results demonstrated that the accuracy of the proposed method was similar to that of manual measurement by experts, but it took less than 30 seconds.
Keywords: Plaque assay, HOG, SVM, local adaptive image segmentation
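The testing-stage sliding window described above can be sketched independently of the HOG and SVM details: iterate over patch positions, score each patch, and keep the positives. The classifier below is a toy mean-intensity stand-in for the paper's trained HOG+SVM decision function, and the image is synthetic:

```python
def sliding_windows(height, width, win, stride):
    """Yield the top-left corners of all win x win patches."""
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield y, x

def detect(image, win, stride, classify):
    """Run a classifier over every patch and collect positive positions.
    `classify` stands in for the HOG + linear-SVM decision function."""
    hits = []
    for y, x in sliding_windows(len(image), len(image[0]), win, stride):
        patch = [row[x:x + win] for row in image[y:y + win]]
        if classify(patch):
            hits.append((y, x))
    return hits

# Toy 8x8 image: bright background with one dark 3x3 "plaque".
img = [[200] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(3, 6):
        img[y][x] = 40

# Toy rule: a patch is "positive" when its mean intensity is low.
hits = detect(img, win=3, stride=1,
              classify=lambda p: sum(map(sum, p)) / 9 < 100)
```

In the paper's pipeline, each positive patch would then be passed to the locally adaptive Otsu segmentation for measurement.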
Abstract: The optic disc (OD) is one of the important anatomic structures of the retina; changes in its shape and area may indicate disease processes and thus call for computerized quantification assistance. In this study, we propose a self-adaptive distance regularized level set evolution method for OD segmentation that does not require periodically re-initializing the level set function to a signed distance function during the evolution. In this framework, an image is preprocessed using Fourier correlation coefficient filtering to obtain an initial boundary as the starting contour; an accurate boundary of the optic disc is then obtained using the self-adaptive distance regularized level set evolution method. One hundred color fundus images from a public database were selected to validate our algorithm. We believe that such an automatic OD segmentation method could assist ophthalmologists in segmenting the OD more efficiently, which is of significance for future computer-aided early detection of glaucoma and retinopathy.
Keywords: Optic disk, retinal imaging, level set evolution, imaging informatics
Abstract: Surface registration is widely used in image-guided neurosurgery to achieve spatial registration between the patient space and the image space. Coarse registration, followed by fine registration, is an important premise for ensuring the robustness and efficiency of surface registration. In this paper, a coarse registration algorithm based on principal axes is proposed to achieve this goal. The extraction of the principal axes relies on a surface approximation with an adaptive Gaussian kernel whose width is consistent with the neighborhood relation, making it applicable to various scanning data. Determining the corresponding centers of translation is a further problem in aligning different scanning data, which is solved through heuristics. Six pairs of points on the two surfaces with the farthest projections on the principal axes were regarded as candidate translation centers; then, through tentative alignment of the local regions around them, the pair of candidates with the minimum registration error was selected as the optimal translation centers. Automatic registration of two scans of a head phantom is presented in this paper. Experimental results confirmed the robustness of the algorithm and its feasibility for clinical applications.
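Principal axes of a scanned surface are conventionally obtained as the eigenvectors of the point cloud's covariance matrix, ordered by eigenvalue; the paper's kernel-based surface approximation refines this basic construction. A minimal sketch on a synthetic cloud (the data are an assumption for illustration):

```python
import numpy as np

def principal_axes(points):
    """Principal axes of a 3D point cloud: eigenvectors of the
    covariance matrix, ordered by decreasing eigenvalue
    (direction of largest spread first)."""
    p = np.asarray(points, dtype=float)
    centered = p - p.mean(axis=0)
    cov = centered.T @ centered / len(p)
    vals, vecs = np.linalg.eigh(cov)     # eigh returns ascending order
    order = np.argsort(vals)[::-1]
    return vecs[:, order].T              # rows are the axes

# Synthetic "scan": a cloud stretched mainly along the x direction.
rng = np.random.default_rng(1)
cloud = rng.normal(size=(500, 3)) * [10.0, 2.0, 1.0]
axes = principal_axes(cloud)   # axes[0] is close to the x axis
```

Aligning the principal axes of two scans gives the rotational part of a coarse registration; the translation centers are then chosen as described in the abstract.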
Abstract: To better analyze images corrupted by Gaussian white noise, it is necessary to remove the noise before further image processing. In this paper, we propose a self-adaptive image denoising method based on bidimensional empirical mode decomposition (BEMD). Firstly, a normal probability plot confirms that the 2D-IMFs of Gaussian white noise images decomposed by BEMD follow the normal distribution. Secondly, an energy estimation equation for the ith 2D-IMF (i=2,3,4,…) is proposed, by reference to that of the ith IMF (i=2,3,4,…) obtained by empirical mode decomposition (EMD). Thirdly, the self-adaptive threshold of each 2D-IMF is calculated. Finally, the algorithm of the self-adaptive image denoising method based on BEMD is described. From a practical perspective, the method is applied to denoising magnetic resonance images (MRI) of the brain, and the results show that it achieves better denoising performance than other methods.
Keywords: Image denoising, BEMD, self-adaption, Gaussian white noise, energy
Abstract: It has been demonstrated that the shape, area and depth of the optic disc are relevant indices of diabetic retinopathy. In this paper, we present a new fundus optic disc localization and segmentation method based on phase congruency (PC). Firstly, in order to highlight the optic disc, the channel images with the highest contrast between optic disc and background are selected in the LAB, YUV, YIQ and HSV spaces respectively. Secondly, with the use of PC, features of the four selected channel images are extracted; a multiplication operation is then used to enhance the PC detection results. Thirdly, window scanning and gray accumulation are utilized to locate the optic disc. Finally, iterative Otsu automatic threshold segmentation and the Hough transform are performed on the localization images to obtain the final optic disc segmentation result. The experimental results showed that the proposed method can locate and segment the optic disc effectively and accurately.
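Otsu's automatic threshold, used in the final step above, picks the gray level that maximizes the between-class variance of the histogram. A minimal sketch on a toy bimodal histogram (the histogram is illustrative, not fundus data):

```python
def otsu_threshold(hist):
    """Otsu's method on a grayscale histogram: choose the threshold t
    maximizing the between-class variance w0*w1*(m0 - m1)^2, where
    w0, m0 (w1, m1) are the weight and mean of pixels <= t (> t)."""
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (grand_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t   # pixels <= t form the background class

# Toy bimodal histogram over 8 gray levels: modes near levels 1 and 6.
hist = [2, 8, 3, 0, 0, 4, 9, 2]
t = otsu_threshold(hist)   # lands between the two modes
```

The paper applies this iteratively on the localized region before the Hough transform fits the disc's circular boundary.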
Abstract: The Simple Linear Iterative Clustering (SLIC) algorithm is increasingly applied to many kinds of image processing because of its excellent, perceptually meaningful results. To better meet the needs of medical image processing and provide a technical reference for applying SLIC to medical image segmentation, two indicators, boundary accuracy and superpixel uniformity, are introduced alongside other indicators to systematically analyze the performance of the SLIC algorithm in comparison with the Normalized cuts and Turbopixels algorithms. Extensive experimental results show that SLIC is faster and less sensitive to the image type and the chosen superpixel number than similar algorithms such as Turbopixels and Normalized cuts. It also performs well in boundary recall, robustness to fuzzy boundaries, choice of superpixel size, and overall segmentation performance on medical images.
Keywords: Medical image, superpixels, SLIC, image segmentation, performance evaluation
Abstract: Quantitative analysis of the airway tree is of critical importance in the CT-based diagnosis and treatment of common pulmonary diseases. Extraction of the airway centerline is a precursor to identifying the airway's hierarchical structure, measuring geometrical parameters, and guiding visualized detection. Traditional methods suffer from extra branches and circles caused by incomplete segmentation results, which induce false analyses in applications. This paper proposes an automatic and robust centerline extraction method for the airway tree. First, the centerline is located using the topological thinning method: border voxels are deleted symmetrically and iteratively to preserve topological and geometrical properties. Second, the structural information is generated using graph-theoretic analysis. Then, inaccurate circles are removed with a distance weighting strategy, and extra branches are pruned according to clinical anatomic knowledge. The centerline region without false appendices is eventually determined after the described phases. Experimental results show that the proposed method identifies more than 96% of branches, keeps consistency across different cases, and achieves a superior circle-free structure and centrality.