
COV-ELM classifier: An extreme learning machine based identification of COVID-19 using chest X-ray images

Abstract

Coronaviruses constitute a family of viruses that give rise to respiratory diseases. COVID-19 is an infectious disease caused by a newly discovered coronavirus, termed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). As COVID-19 is highly contagious, its early diagnosis is crucial for an effective treatment strategy. However, the reverse transcription-polymerase chain reaction (RT-PCR) test, which is considered the gold standard in the diagnosis of COVID-19, suffers from a high false-negative rate. Therefore, the research community is exploring alternative diagnostic mechanisms. Chest X-ray (CXR) image analysis has emerged as a feasible and effective diagnostic technique towards this objective. In this work, we formulate the identification of COVID-19 as a three-class classification problem that distinguishes between the COVID-19, normal, and pneumonia classes. We propose a three-stage framework, named COV-ELM, based on the extreme learning machine (ELM). Our dataset comprises CXR images in a frontal view, namely posteroanterior (PA) and erect anteroposterior (AP). Stage one deals with preprocessing and transformation, while stage two deals with feature extraction. The extracted features are passed as input to the ELM at the third stage, resulting in the identification of COVID-19. The choice of ELM in this work is motivated by its faster convergence, better generalization capability, and shorter training time in comparison to conventional gradient-based learning algorithms. As bigger and more diverse datasets become available, the ELM can be retrained more quickly than its gradient-based competitor models. We use 10-fold cross-validation to evaluate the results of COV-ELM. The proposed model achieved a macro average F1-score of 0.95 and an overall sensitivity of 0.94 ± 0.02 at a 95% confidence interval.
When compared to state-of-the-art machine learning algorithms, COV-ELM is found to outperform its competitors in this three-class classification scenario. Further, LIME has been integrated with the proposed COV-ELM model to generate annotated CXR images. The annotations are based on the superpixels that contributed to distinguishing between the different classes. The superpixels were observed to correspond to the regions of the human lungs that are clinically observed in COVID-19 and pneumonia cases.

1. Introduction

Coronavirus disease 2019 (COVID-19), known to have originated in Wuhan City in Hubei Province, China, is a contagious infection resulting in respiratory illness in most cases. COVID-19 is caused by a novel coronavirus, widely recognized as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2; previously known as 2019-nCoV) [1]. As the COVID-19 outbreak became a global health emergency, the WHO declared COVID-19 a global pandemic on March 11, 2020 [2]. Moreover, COVID-19 shares characteristics with other forms of viral and bacterial pneumonia, making it difficult to distinguish between the classes at the early stages. Thus, early and accurate diagnosis of COVID-19 is critically important to contain the spread of the disease and to treat the affected subjects.

The reverse transcription-polymerase chain reaction (RT-PCR) test is popularly used for the detection of SARS-CoV-2. Although COVID-19 may be asymptomatic in several instances, it has been reported that even many symptomatic cases showing characteristics of COVID-19 were not correctly diagnosed by the RT-PCR test [3]. This has led to the search for alternative mechanisms that may be more accurate in the identification of COVID-19. Traditionally, chest X-ray (CXR) images have been a popular choice for the diagnosis and treatment of respiratory disorders such as pneumonia [4, 5]. As a result, several research groups are working on developing models based on CXR images [6, 7, 8, 9]. However, most of them struggle with the challenge of distinguishing COVID-19 patients from those suffering from other forms of pneumonia [10].

Although deep neural networks have emerged as a popular tool for image-based analysis, they require tuning millions of parameters and searching for optimal hyper-parameter values [7, 11, 12, 13, 14, 15]. Also, it is well known that the training of a deep neural network is a time-consuming task even on high-performance computing platforms.

Khan et al. [7] proposed a deep convolutional neural network (DCNN) model to automate the detection of COVID-19 based on chest X-ray images. The model is based on the Xception architecture [16] pre-trained on ImageNet [17] and achieved an overall accuracy of 89.6%. Jain et al. [18] proposed a deep residual network for the automatic detection of COVID-19 in CXR images by differentiating them from the CXR images of bacterial pneumonia, viral pneumonia, and normal cases, and exhibited an accuracy of 93.01% in differentiating the three classes using their first-stage model. They further analyzed the CXR images showing viral pneumonia features for the identification of COVID-19 cases in their second-stage model, showing an exceptional performance with an accuracy of 97.22%. Altan et al. [19] used an efficient hybrid model consisting of two-dimensional (2D) curvelet transformation for feature extraction, the chaotic salp swarm algorithm (CSSA) to optimize the feature matrix, and the EfficientNet-B0 model for the identification of COVID-19 cases. The model achieved an accuracy of 99.69%. Mahmud et al. [8] proposed a DCNN model using a variation in dilation rate to extract distinguishing features from chest X-ray images and achieved an accuracy of 90.2% for multi-class classification (COVID-19/Normal/Pneumonia). They also used Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize the abnormal regions in CXR scans. Wang et al. [9] developed a computer-aided screening tool for the detection of COVID-19 from CXR images based on a network pre-trained on ImageNet, tuned with the Adam optimizer, and achieved 91% sensitivity for the COVID-19 class. Basu et al. [20] used a domain extension transfer learning (DETL) framework comprising 12 layers.
They used a network already trained on the National Institutes of Health (NIH) CXR image dataset [4] (comprising 108,948 frontal-view X-ray images of 32,717 unique patients), which was fine-tuned for the COVID-19 dataset to obtain an overall accuracy of 95.3% ± 0.02 on 5-fold cross-validation. Marques et al. [21] made a novel attempt at applying EfficientNet [22] (reported to achieve 84.3% top-1 accuracy on ImageNet) and evaluated their model using 10-fold stratified cross-validation. 1092 samples were used for training, and 122 images were used for testing. They achieved an average F1-score of 0.97 in the multi-class scenario and 0.99 in the case of binary classification. Rajaraman et al. [23] iteratively pruned task-specific models (VGG-16, VGG-19, and Inception-V3) by pruning 2% of the neurons in each convolutional layer and retrained the models to obtain a macro-averaged F1-score of 0.99. Das et al. [11] proposed a deep transfer learning approach for the automated detection of COVID-19 disease. The network is fed with features extracted using the Xception network. They obtained 97% sensitivity for separating COVID-19 cases from pneumonia and other respiratory diseases, and further showed that their proposed model outperformed other popular deep networks such as VGGNet, ResNet50, AlexNet, and GoogLeNet.

Khuzani et al. [10] used multilayer neural networks (MLNN) to distinguish the CXR images of COVID-19 patients from other forms of pneumonia. They extracted a set of spatial and frequency domain features from X-ray images. Based on the evaluation of the extracted features, they concluded that while Fast Fourier Transform (FFT) features were best suited to detecting COVID-19, the normal class was best determined by the gray-level difference method (GLDM). Principal Component Analysis (PCA) was applied to generate an optimized set of synthetic features that served as input to an MLNN to distinguish COVID-19 images from non-COVID-19 ones with an accuracy of 94%. Rasheed et al. [24] applied PCA as a feature extraction technique, resulting in 148 features. Further, to investigate the suitability of the reduced feature set, CNN and logistic regression (LR) based models were developed to distinguish between COVID-19 and healthy cases using 250 CXR images belonging to each class. Accuracies of 100% and 97.6% were reported for the CNN and LR-based models, respectively.

It is evident from the above discussion that research groups have so far mainly focused on the use of deep neural networks, which require millions of parameters and an optimal choice of hyper-parameters. However, it is well known that the training of a deep neural network is a time-consuming task even on high-performance computing platforms. Therefore, to improve the computational efficiency of the classification models, in this work we propose the use of a single hidden layer feed-forward neural network (SLFN) known as the extreme learning machine (ELM) [25, 26]. The ELM is a batch learning algorithm proposed by Huang et al. [25] that has been used extensively in different domains such as ECG signal classification [27] and identification of arrhythmia disease [28]. The ELM and its variants have also been applied in applications such as fingerprint identification [29], lung cancer detection [30], image and video watermarking [31, 32], and 3D object recognition [33]. Govindarajan and Swaminathan [34] present a comparison of ELM and online-sequential ELM (OS-ELM) in the classification of tuberculosis from healthy subjects using CXR images. They performed feature extraction using median robust extended local binary patterns and gradient local ternary patterns. ELM achieved a sensitivity of 98.7%, while OS-ELM performed better with a sensitivity of 99.3%. Ismael and Şengür [35] present an ELM-based binary classification model that uses multi-resolution approaches such as the wavelet, shearlet, and contourlet transforms for the decomposition of CXR images. Features are extracted based on entropy and normalized energy approaches. Using the ELM classifier, the sensitivity values obtained for the wavelet, shearlet, and contourlet transforms are 96.07%, 98.89%, and 87.82%, respectively. Thus, the ELM is popular in several domains due to its fast learning capability, good generalization performance, and ease of implementation.

The main contribution of this paper is to explore the suitability of the ELM in the diagnosis of COVID-19 using CXR images. The faster convergence of the ELM, with only one tunable parameter, makes it more efficient than conventional gradient-based learning algorithms. Another challenge addressed in this work is the identification of localized patterns that differentiate between the classes, namely COVID-19, pneumonia, and normal. Further, to clinically establish the relevance of COV-ELM results, LIME has been integrated with it to generate annotated CXR images. These annotations represent regions that distinguish between the different classes.

The rest of the paper is organized as follows: Section 2 gives the dataset description followed by the detailed methodology, including preprocessing of the dataset and a review of the extreme learning machine. The outcomes of the experiments and the analysis of the results are discussed in Section 3. Visualizations of COV-ELM results using LIME are discussed in Section 4. Finally, the conclusions and scope for future work are discussed in Section 5.

2. Material and methods

In this section, we present a list of CXR image datasets used for experimentation in this work, followed by details of the proposed methodology.

2.1 Dataset description

In the present work, we have used the following publicly available CXR datasets for COVID-19, Normal, and Pneumonia.

  • COVID-19 Image Data Collection [6]. It comprises 760 samples, COVID-19: 538, ARDS: 14, Other Diseases: 222.

  • COVID-19 Radiography Database (Kaggle) [36]. It comprises 2905 samples, COVID-19: 219, Normal: 1341, Viral Pneumonia: 1345.

  • Mendeley Chest X-ray Images [37]. It comprises 5856 samples, Pneumonia (Viral and Bacterial): 4273, Normal: 1583.

Figure 1.

Manually annotated CXR images highlighting the regions of interest that distinguish between COVID-19 and pneumonia cases. These regions were marked by a radiologist after clinical evaluation of the CXR images.

Figure 2.

COV-ELM framework: Dataset preprocessing, feature extraction, and ELM based classification model.

In this work, we only consider CXR images in a frontal view, namely posteroanterior (PA) and erect anteroposterior (AP). The first two databases in the above list comprise 520 such COVID-19 images. For training, we used these images along with 520 CXR images each of normal and pneumonia cases from the COVID-19 Radiography Database (Kaggle) [36] and Mendeley Chest X-ray Images [37]. Figure 1a and b depict the manually marked regions of interest that distinguish between COVID-19 and pneumonia cases in CXR images. These regions were marked by a radiologist after clinical evaluation of the CXR images.

2.2 Preprocessing

Due to the diversity of the CXR image collection, the images are resized and subjected to min-max normalization [38] to ensure uniformity. Further, to enhance the local contrast in the CXRs, Contrast Limited Adaptive Histogram Equalization (CLAHE) [39], a variant of adaptive histogram equalization, is applied. Figure 2 depicts the framework of the proposed three-stage model. In stage one, the preprocessing includes resizing, normalization, and CLAHE, applied in the sequence shown. The preprocessed CXRs are passed to stage two of the framework for feature extraction.
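To make the stage-one operations concrete, the sketch below implements min-max normalization and plain (global) histogram equalization in NumPy. Note that the actual pipeline uses CLAHE, the tile-based, contrast-limited variant of this equalization (available, for example, as cv2.createCLAHE in OpenCV); the function names and the toy image here are illustrative only.

```python
import numpy as np

def min_max_normalize(img):
    """Scale pixel intensities to [0, 1] (min-max normalization)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def equalize_histogram(img, levels=256):
    """Plain (global) histogram equalization of an 8-bit grayscale image.
    CLAHE applies the same idea per tile with histogram clipping."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)),
                  0, levels - 1).astype(np.uint8)
    return lut[img]

# Toy 2x2 "CXR": normalize, then equalize the raw 8-bit image
img = np.array([[10, 50], [120, 200]], dtype=np.uint8)
norm = min_max_normalize(img)   # values in [0, 1]
eq = equalize_histogram(img)    # eq -> [[0, 85], [170, 255]]
```

Equalization spreads the four distinct gray levels uniformly across the 0-255 range, which is what boosts local contrast when done tile-wise in CLAHE.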

2.3 Feature extraction

Texture plays a significant role in the identification of the region of interest (ROI) and the classification of images [40]. In this stage, we consider two types of features, texture-based and frequency-based, as shown in Fig. 2. The texture features consist of four groups. The first group of features is generated directly from the preprocessed 512 × 512 image. These include area, mean, standard deviation, skewness, kurtosis, energy, entropy, max, min, mean absolute deviation, median, range, root mean square, and uniformity. The remaining texture features are obtained by applying the gray-level co-occurrence matrix (GLCM) [41, 42], histogram of oriented gradients (HOG) [43, 44, 45], and gray-level difference matrix (GLDM) [46, 10]. Apart from texture features, frequency features also play an important role in developing efficient classifiers in medical imaging [47, 48, 49]. In the present work, the frequency features are extracted using the Fast Fourier Transform (FFT) and the Discrete Wavelet Transform (DWT). Zargari et al. [50] used the aforementioned statistical features for predicting chemotherapy response in ovarian cancer patients. Drawing inspiration from their work, we computed these features for the FFT map and the three-level (LL3) DWT coefficients to generate a vector of frequency features. Finally, the texture feature vector of length 140 is concatenated with the frequency feature vector of length 28 to generate a feature vector of size 168 for each CXR image.
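As an illustration of the stage-two features, the sketch below computes a handful of the first-order statistical descriptors listed above, on both the raw intensity map and the FFT magnitude map (NumPy only). The actual pipeline additionally uses GLCM, HOG, GLDM, and DWT features (available, for example, in scikit-image and PyWavelets); the descriptor subset and function names here are illustrative.

```python
import numpy as np

def first_order_features(x):
    """First-order statistical descriptors of an intensity map,
    a subset of the texture/frequency features described above."""
    x = x.astype(np.float64).ravel()
    mean, std = x.mean(), x.std()
    # Skewness and kurtosis as standardized central moments
    z = (x - mean) / std if std > 0 else np.zeros_like(x)
    skew, kurt = (z ** 3).mean(), (z ** 4).mean()
    # Shannon entropy of a 32-bin intensity histogram
    hist, _ = np.histogram(x, bins=32)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    energy = np.sum(x ** 2)
    return np.array([mean, std, skew, kurt, entropy, energy,
                     x.max(), x.min(), np.median(x), x.max() - x.min()])

def frequency_features(img):
    """Same descriptors computed on the centered FFT magnitude map."""
    fft_map = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    return first_order_features(fft_map)

# Concatenate spatial and frequency descriptors, as in the framework
img = np.random.default_rng(0).random((64, 64))
feat = np.concatenate([first_order_features(img), frequency_features(img)])
```

In the full model this concatenation yields the 168-dimensional vector per CXR image; here the toy vector has 20 entries.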

2.4 Extreme learning machine

In stage three, the features extracted at stage two are passed as input to the extreme learning machine (ELM) based classification model as shown in Fig. 2. The ELM was proposed by Huang et al. as an efficient alternative to the backpropagation algorithm for single hidden layer feed-forward networks (SLFN) [25]. It is a fast learning algorithm with good generalization performance compared to other traditional feed-forward networks. An ELM works by initializing a set of weights randomly and computing the output weights analytically using the Moore-Penrose matrix inverse [51]. Figure 3 depicts the overall ELM architecture, and the details of its functioning are given in the ELM algorithm below.

Figure 3.

ELM Architecture: The ELM network comprises an input layer, a hidden layer, and an output layer.

Given a training set $\{(x_j, t_j)\}$, $x_j \in \mathbf{R}^n$, $t_j \in \mathbf{R}^m$ for $j = 1, 2, \ldots, N$, where the pairs $(x_j, t_j)$ denote the training vectors and the corresponding target values, following [25], the standard ELM having $L$ hidden nodes is modeled as:

(1)
$\sum_{i=1}^{L} \beta_i \, g_i(a_i \cdot x_j + b_i) = t_j$

In Eq. (1), $a_i$ denotes the weight vector that connects the input layer to the $i$th hidden node and $b_i$ denotes the corresponding bias. Further, $\beta_i$ denotes the weight vector connecting the $i$th hidden node and the output neurons. The above $N$ equations may also be represented as:

(2)
$G\beta = T$

The form of the hidden-layer output matrix $G$, mentioned in Eq. (2), is given in Eq. (3). The form of the matrices $\beta$ and $T$ is given in Eq. (4).

(3)
$G = \begin{bmatrix} g(a_1 \cdot x_1 + b_1) & \cdots & g(a_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(a_1 \cdot x_N + b_1) & \cdots & g(a_L \cdot x_N + b_L) \end{bmatrix}_{N \times L}$

(4)
$\beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m} \quad \text{and} \quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}$

The solution of the above system of linear equations is obtained using the Moore-Penrose generalized inverse (Eq. (5)).

(5)
$\beta = G^{\dagger} T$

In Eq. (5), $G^{\dagger} = (G^T G)^{-1} G^T$ denotes the Moore-Penrose generalized inverse [51] of the matrix $G$.

ELM Algorithm

  • Input:

  • Training set: $(x_j, t_j)$, $x_j \in \mathbf{R}^n$, $t_j \in \mathbf{R}^m$ for $j = 1, 2, \ldots, N$

  • Activation function: $g: \mathbf{R} \rightarrow \mathbf{R}$

  • Number of hidden nodes: $L$

  • Output:

  • Optimized weight matrix: $\beta$

    • (a) Randomly assign hidden node parameters $(a_i, b_i)$, $i = 1, 2, \ldots, L$;

    • (b) Compute the hidden-layer output matrix $G$;

    • (c) Compute the output weight matrix $\beta = G^{\dagger} T$.

Huang et al. [52] argue that the ELM outperforms conventional learning algorithms in terms of learning speed and, in most cases, shows better generalization capability than conventional gradient-based learning algorithms such as backpropagation, where the weights are adjusted via a non-linear relationship between the input and the output [51]. They further state that the ELM can compute the desired weights of the network in a single step, in contrast to classical iterative methods.
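The three steps of the algorithm reduce to a few lines of linear algebra. Below is a minimal NumPy sketch of ELM training and prediction, using a tanh activation and synthetic data rather than the paper's rbf-l2 activation and CXR features:

```python
import numpy as np

def train_elm(X, T, L, seed=0):
    """ELM training: hidden parameters (a_i, b_i) are random and fixed;
    output weights come from the Moore-Penrose pseudoinverse (Eq. (5))."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[1], L))   # input weights a_i
    b = rng.standard_normal(L)                 # biases b_i
    G = np.tanh(X @ A + b)                     # hidden-layer output matrix G
    beta = np.linalg.pinv(G) @ T               # beta = G† T
    return A, b, beta

def predict_elm(X, A, b, beta):
    """Predicted class = argmax over the m output neurons."""
    return np.argmax(np.tanh(X @ A + b) @ beta, axis=1)

# Toy three-class problem; with L >= N the ELM interpolates the training set
rng = np.random.default_rng(1)
X = rng.standard_normal((150, 8))
y = rng.integers(0, 3, 150)
T = np.eye(3)[y]                               # one-hot targets t_j
A, b, beta = train_elm(X, T, L=200)
train_acc = (predict_elm(X, A, b, beta) == y).mean()
```

The single pseudoinverse step replaces the iterative weight updates of backpropagation, which is the source of the speed advantage the paper emphasizes.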

2.5 COV-ELM

In this work, we use the ELM discussed in Section 2.4 to develop a classifier (COV-ELM) for the detection of COVID-19 in CXR images. Based on experimentation, we used the L2-normalized radial basis function (rbf-l2) as the activation function. We also experimented with different numbers of neurons in the hidden layer. Using 10-fold cross-validation, we observed that accuracy increases with the number of hidden neurons up to L=140, and the highest 10-fold cross-validation accuracy of 94.74% was reached at L=350. Experimenting with different seeds, we found that peak accuracy was reached for hidden-neuron counts in the range 350 to 380, without any further increase in 10-fold cross-validation accuracy. So, for further experiments, we fixed the number of hidden neurons at L=350.
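The 10-fold protocol used throughout can be sketched as follows (pure NumPy; the model-fitting step is elided, and the sample count 1560 = 3 × 520 matches the dataset described above):

```python
import numpy as np

def k_fold_indices(n, k=10, seed=0):
    """Yield (train_idx, test_idx) index pairs for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# 1560 samples = 520 per class; each fold holds out 156 images
for train_idx, test_idx in k_fold_indices(1560, k=10):
    pass  # fit the classifier on train_idx, evaluate on test_idx
```

Averaging the ten per-fold accuracies gives the reported cross-validation accuracy; a stratified split (preserving class proportions per fold) is a common refinement.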

Figure 4.

Effect of increasing the number of hidden neurons (L) on 10-fold cross-validation accuracy. Accuracy increases steadily with L up to L=140, and the highest 10-fold cross-validation accuracy of 94.74% is observed at L=350.

The boxplot in Fig. 5 depicts the variation in sensitivity values. It is evident from the results that the texture features score over the frequency features. We also examined the influence of the combined set of features (168) on the classification process. The model yields a median sensitivity of 0.945 using the combined set of features, which scores over the median sensitivities of 0.90 and 0.93 obtained using the frequency and texture features separately.

Figure 5.

Boxplot of sensitivity (recall) values using frequency features, texture features, and the combined set of frequency and texture features. The combined set of features yields a median sensitivity of 0.945, which scores over the median values for the frequency and texture features considered separately.

3. Results and discussion

Figure 6.

The AUC is near unity for each of the three classes, namely COVID-19, normal, and pneumonia, in a one-vs-all setting.

Figure 7.

The classification errors for COVID-19, normal, and pneumonia are 4.62%, 5%, and 6.16%, respectively, and the macro average F1-score is 0.95.

We carried out all the experiments using Python 3.6.9 on an NVIDIA Tesla K80 GPU provided by Google Colaboratory. To evaluate the performance of the proposed method on the three-class classification problem, we trained the model on the CXR dataset using 10-fold cross-validation. Following Hand and Till [53], we depict the receiver operating characteristic (ROC) curves for each of the three classes, namely COVID-19, Normal, and Pneumonia, for one fold (see Fig. 6). It is apparent from the ROC curves that the AUC is near unity for all three classes, which shows the good generalization performance of COV-ELM.

The results of the 10-fold cross-validation are summarized in the confusion matrix of Fig. 7a. It shows that out of 520 COVID-19 patients, 496 were correctly identified, eleven were misclassified as normal, and thirteen were labeled as pneumonia. Similarly, pneumonia and normal subjects were also labeled quite accurately. Thus, we obtained an overall accuracy of 94.74% and high recall rates of 95.38%, 95.00%, and 93.84% for the COVID-19, Normal, and Pneumonia classes, respectively. The macro average F1-score is 0.95, as depicted in the heatmap (Fig. 7b). As shown in Table 1, COV-ELM identified the COVID-19, Normal, and Pneumonia classes with sensitivities of 0.95 ± 0.04, 0.95 ± 0.01, and 0.94 ± 0.03, respectively, at a 95% confidence interval.

Table 1

Sensitivity (recall) values for COVID-19, normal, and pneumonia at 95% confidence interval

Class        Sensitivity at 95% CI
COVID-19     0.95 ± 0.04
Normal       0.95 ± 0.01
Pneumonia    0.94 ± 0.03
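The ± figures reported above can be reproduced from per-fold recall values; below is a minimal sketch, assuming the normal-approximation form mean ± 1.96·s/√k over the k folds (the fold values used here are illustrative, not the paper's):

```python
import numpy as np

def mean_ci(values, z=1.96):
    """Mean and 95% CI half-width over k cross-validation folds
    (normal approximation: z * sample std / sqrt(k))."""
    v = np.asarray(values, dtype=np.float64)
    return v.mean(), z * v.std(ddof=1) / np.sqrt(len(v))

# Hypothetical per-fold recalls from 10-fold cross-validation
fold_recalls = [0.96, 0.94, 0.95, 0.97, 0.93, 0.95, 0.96, 0.94, 0.95, 0.96]
m, h = mean_ci(fold_recalls)   # m ≈ 0.951
```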

To establish the effectiveness of our approach, COV-ELM is compared with state-of-the-art machine learning algorithms, namely a support vector classifier (SVC) using rbf and linear kernels, a gradient boosting classifier (GBC), a random forest ensemble (RBE), artificial neural networks (ANN), a decision tree classifier (DTC), and a voting classifier (VC) ensemble of logistic regression (LR), SVC, and GBC, in terms of sensitivity at a 95% confidence interval (CI) (see Table 2). It is evident from the table that the proposed approach achieves a sensitivity of 0.94 ± 0.02 and an accuracy of 0.94 ± 0.03, which scores over the other state-of-the-art classifiers.

Table 2

Comparison of COV-ELM with other state-of-the-art classifiers in terms of sensitivity and accuracy values at 95% confidence interval

Classifier                          Sensitivity    Accuracy
ELM (L = 350, rbf-l2)               0.94 ± 0.02    0.94 ± 0.03
GBC (learning rate = 1.0)           0.91 ± 0.05    0.91 ± 0.04
SVC (C = 1.0, kernel = ‘rbf’)       0.86 ± 0.06    0.86 ± 0.05
SVC (C = 1.0, kernel = ‘linear’)    0.90 ± 0.05    0.90 ± 0.06
RBE (min_samples_split = 2)         0.89 ± 0.05    0.89 ± 0.04
ANN (23,747 parameters)             0.85 ± 0.08    0.85 ± 0.07
DTC (min_samples_leaf = 1)          0.82 ± 0.07    0.82 ± 0.06
VC (LR, SVC, GBC)                   0.89 ± 0.05    0.89 ± 0.06

Recently, Saygılı [54] proposed the use of machine learning techniques such as bag of trees, kernel ELM (K-ELM), k-nearest neighbors (k-NN), and SVC to detect COVID-19 cases using CXR images. Table 3 shows a comparison between that work [54] and the proposed approach (COV-ELM).

Table 3

Comparison of COV-ELM with the approach recently proposed by Saygılı [54] for the detection of COVID-19 using CXR images

Approach             Dataset used      Technique                           COVID-19 sensitivity (%)

Proposed (COV-ELM)   COVID-19: 520     ELM (L = 350, rbf-l2)               94.74
                     Normal: 520
                     Pneumonia: 520

Saygılı [54]         COVID-19: 125     Bag of tree (# of trees = 100)      71.20
                     Normal: 500       K-ELM (L = 4096, rbf, C = 1e-1)     88.00
                     Pneumonia: 500    k-NN (k = 1, Minkowski distance)    94.40
                                       SVC (Default)                       88.80

4. Visualization using LIME

In order to corroborate the COV-ELM results with clinical findings, we have used a recently proposed AI tool – Local Interpretable Model-agnostic Explanations (LIME) [55]. LIME perturbs an input image and helps in analyzing the effect of these perturbations on the predictions of a given machine learning model.
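The mechanism can be illustrated from scratch: partition the image into superpixels, randomly switch regions off, query the model on each perturbed image, and fit a weighted linear model whose coefficients score each region's contribution. The sketch below implements that idea in NumPy; the paper uses the lime package itself, which also handles segmentation and visualization, so the quadrant "superpixels", the kernel width, and the toy model here are illustrative only.

```python
import numpy as np

def explain(image, segments, predict_proba, target, n_samples=500, seed=0):
    """Model-agnostic explanation in the spirit of LIME: mask random
    subsets of superpixels, query the model, and fit a weighted linear
    model whose coefficients score each superpixel's contribution."""
    rng = np.random.default_rng(seed)
    ids = np.unique(segments)
    Z = rng.integers(0, 2, (n_samples, ids.size))          # on/off per superpixel
    preds = np.empty(n_samples)
    for s in range(n_samples):
        masked = image * np.isin(segments, ids[Z[s] == 1])  # gray out "off" regions
        preds[s] = predict_proba(masked)[target]
    d = 1.0 - Z.mean(axis=1)                               # fraction of regions off
    w = np.sqrt(np.exp(-d ** 2 / 0.25))                    # proximity kernel weights
    Za = np.column_stack([Z, np.ones(n_samples)])          # add an intercept column
    coef, *_ = np.linalg.lstsq(w[:, None] * Za, w * preds, rcond=None)
    return coef[:-1]                                       # one score per superpixel

# Toy setup: 8x8 image split into four quadrant "superpixels";
# the model's class-0 score depends only on the top-left quadrant
img = np.zeros((8, 8)); img[:4, :4] = 1.0
seg = np.zeros((8, 8), dtype=int)
seg[:4, 4:] = 1; seg[4:, :4] = 2; seg[4:, 4:] = 3
model = lambda x: np.array([x[:4, :4].mean()])
scores = explain(img, seg, model, target=0)  # scores[0] ≈ 1, others ≈ 0
```

Regions with large positive coefficients correspond to the green superpixels and high-intensity heatmap areas shown in Fig. 8.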

Figure 8.

(a)–(c) correspond to COVID-19, pneumonia, and normal cases, respectively. In each row, the first CXR image depicts the clinically evaluated and manually marked regions, the second highlights the top 10 superpixels obtained using LIME, and the third is the LIME-generated heatmap corresponding to the second image.

Figure 8a–c shows images relating to COVID-19, pneumonia, and normal cases, respectively. Each subfigure in a row comprises three images of the same patient relating to a medical condition. In each row, the clinical condition has been marked by a radiologist in the first image. In the second image of the same row, the top 10 superpixels obtained using LIME have been marked in green and red; superpixels contributing toward and against the predicted class appear in green and red, respectively. Finally, the third image in the same row depicts the LIME-generated heatmap corresponding to the second image. The intensity of the blue color of a particular region in the heatmap corresponds to its relative significance in predicting the class. A radiologist confirmed that in the anteroposterior (AP) chest radiograph of Fig. 8a, the ill-defined area of ground-glass haze in the right lung parenchyma at the mid-zone likely represents COVID-19. Similarly, in the anteroposterior (AP) chest radiograph of Fig. 8b, the wedge-shaped area of consolidation in the right lung parenchyma at the upper zone likely represents pneumonia. The radiologist confirmed that the regions (though not all) highlighted by LIME correspond to the affected regions in the case of both COVID-19 and pneumonia. This points to the applicability of COV-ELM in the identification of medical conditions such as pneumonia and COVID-19.

5. Conclusions

The current research is focused on the accurate diagnosis of COVID-19 with high sensitivity. This paper evaluates the suitability of the ELM for COVID-19 classification due to its faster convergence, better generalization capability, and shorter training time. A combination of texture features (spatial, GLCM, HOG, and GLDM) and frequency features (FFT and DWT) extracted from publicly available CXR image repositories is provided as input to COV-ELM. The proposed COV-ELM model achieved a macro average F1-score of 0.95 and an overall accuracy of 94.74% in the present three-class classification scenario. COV-ELM outperforms other competitive machine learning algorithms with a sensitivity of 0.94 ± 0.02 at a 95% confidence interval. For visualization of the results, LIME has been used to highlight the superpixels that contributed to the prediction of a given class. In the LIME-generated heatmaps, the higher-intensity regions correspond to the clinically evaluated regions, which establishes the clinical relevance of the features generated by the proposed model. Further, as the training time of COV-ELM is quite low, it can be efficiently retrained on newer, bigger, and more diverse datasets. As part of future work, we would like to investigate how segmentation of the relevant lung regions influences the performance of the classification model.

References

[1] 

Ng MY, Lee EY, Yang J, Yang F, Li X, Wang H, et al. Imaging profile of the COVID-19 infection: Radiologic findings and literature review. Radiology: Cardiothoracic Imaging. 2020; 2(1): e200034.

[2] 

WHO. Archived: WHO Timeline – COVID-19; 2020. https://www.who.int/news-room/detail/27-04-2020-who-timeline—covid-19.

[3] 

Tahamtan A, Ardebili A. Real-time RT-PCR in COVID-19 detection: issues affecting the results. Taylor & Francis; 2020.

[4] 

Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers RM. ChestX-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. pp. 2097–2106.

[5] 

Nanni L, Lumini A, Brahnam S. Local binary patterns variants as texture descriptors for medical image analysis. Artificial Intelligence in Medicine. 2010; 49(2): 117–125.

[6] 

Cohen JP, Morrison P, Dao L, Roth K, Duong TQ, Ghassemi M. COVID-19 Image Data Collection: Prospective Predictions Are the Future. arXiv preprint arXiv:2006.11988. 2020.

[7] 

Khan AI, Shah JL, Bhat MM. Coronet: A deep neural network for detection and diagnosis of COVID-19 from chest x-ray images. Computer Methods and Programs in Biomedicine. 2020; 105581.

[8] 

Mahmud T, Rahman MA, Fattah SA. CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization. Computers in Biology and Medicine. 2020; 103869.

[9] 

Wang L, Lin ZQ, Wong A. Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images. Scientific Reports. 2020; 10(1): 1–12.

[10] 

Khuzani AZ, Heidari M, Shariati SA. COVID-Classifier: An automated machine learning model to assist in the diagnosis of COVID-19 infection in chest x-ray images. medRxiv. 2020.

[11] 

Das NN, Kumar N, Kaur M, Kumar V, Singh D. Automated deep transfer learning-based approach for detection of COVID-19 infection in chest X-rays. IRBM. 2020.

[12] 

Shorfuzzaman M, Hossain MS. MetaCOVID: A Siamese neural network framework with contrastive loss for n-shot diagnosis of COVID-19 patients. Pattern Recognition. 2020; 107700.

[13] 

Pathak Y, Shukla PK, Tiwari A, Stalin S, Singh S, Shukla PK. Deep Transfer Learning based Classification Model for COVID-19 Disease. IRBM. 2020.

[14] 

Shrivastava VK, Pradhan MK. Deep convolutional neural network based diagnosis of COVID-19 using x-ray images. In: Modelling and Analysis of Active Biopotential Signals in Healthcare, Volume 2. 2053–2563. IOP Publishing; 2020. pp. 13–1 to 13–17.

[15] 

Rajpal S, Lakhyani N, Singh AK, Kohli R, Kumar N. Using handpicked features in conjunction with ResNet-50 for improved detection of COVID-19 from chest X-ray images. Chaos, Solitons & Fractals. 2021; 145: 110749.

[16] 

Chollet F. Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. pp. 1251–1258.

[17] 

Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems; 2012. pp. 1097–1105.

[18] 

Jain G, Mittal D, Thakur D, Mittal MK. A deep learning approach to detect Covid-19 coronavirus with X-Ray images. Biocybernetics and Biomedical Engineering. 2020; 40(4): 1391–1405.

[19] 

Altan A, Karasu S. Recognition of COVID-19 disease from X-ray images by hybrid model consisting of 2D curvelet transform, chaotic salp swarm algorithm and deep learning technique. Chaos, Solitons & Fractals. 2020; 140: 110071.

[20] 

Basu S, Mitra S. Deep Learning for Screening COVID-19 using Chest X-Ray Images. arXiv preprint arXiv:2004.10507. 2020.

[21] 

Marques G, Agarwal D, de la Torre Díez I. Automated medical diagnosis of COVID-19 through EfficientNet convolutional neural network. Applied Soft Computing. 2020; 96: 106691.

[22] 

Tan M, Le QV. EfficientNet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946. 2019.

[23] 

Rajaraman S, Siegelman J, Alderson PO, Folio LS, Folio LR, Antani SK. Iteratively Pruned Deep Learning Ensembles for COVID-19 Detection in Chest X-rays. arXiv preprint arXiv:2004.08379. 2020.

[24] 

Rasheed J, Hameed AA, Djeddi C, Jamil A, Al-Turjman F. A machine learning-based framework for diagnosis of COVID-19 from chest X-ray images. Interdisciplinary Sciences: Computational Life Sciences. 2021; 13(1): 103–117.

[25] 

Huang GB, Zhu QY, Siew CK. Extreme learning machine: a new learning scheme of feedforward neural networks. In: 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541). Vol. 2. IEEE; 2004. pp. 985–990.

[26] 

Akusok A, Björk KM, Miche Y, Lendasse A. High-performance extreme learning machines: A complete toolbox for big data applications. IEEE Access. 2015; 3: 1011–1025.

[27] 

Karpagachelvi S, Arthanari M, Sivakumar M. Classification of electrocardiogram signals with support vector machines and extreme learning machine. Neural Computing and Applications. 2012; 21(6): 1331–1339.

[28] 

Kim J, Shin HS, Shin K, Lee M. Robust algorithm for arrhythmia classification in ECG using extreme learning machine. Biomedical Engineering Online. 2009; 8(1): 31.

[29] 

Yang J, Xie S, Yoon S, Park D, Fang Z, Yang S. Fingerprint matching based on extreme learning machine. Neural Computing and Applications. 2013; 22(3–4): 435–445.

[30] 

Daliri MR. A hybrid automatic system for the diagnosis of lung cancer based on genetic algorithm and fuzzy extreme learning machines. Journal of Medical Systems. 2012; 36(2): 1001–1005.

[31] 

Rajpal A, Mishra A, Bala R. A Novel fuzzy frame selection based watermarking scheme for MPEG-4 videos using Bi-directional extreme learning machine. Applied Soft Computing. 2019; 74: 603–620.

[32] 

Mishra A, Rajpal A, Bala R. Bi-directional extreme learning machine for semi-blind watermarking of compressed images. Journal of Information Security and Applications. 2018; 38: 71–84.

[33] 

Nian R, He B, Lendasse A. 3D object recognition based on a geometrical topology model and extreme learning machine. Neural Computing and Applications. 2013; 22(3–4): 427–433.

[34] 

Govindarajan S, Swaminathan R. Extreme learning machine based differentiation of pulmonary tuberculosis in chest radiographs using integrated local feature descriptors. Computer Methods and Programs in Biomedicine. 2021; 204: 106058.

[35] 

Ismael AM, Şengür A. The investigation of multiresolution approaches for chest X-ray image based COVID-19 detection. Health Information Science and Systems. 2020; 8(1): 1–11.

[36] 

Rahman T, Chowdhury M, Khandakar A. COVID-19 Radiography Database | Kaggle; 2020. https://www.kaggle.com/tawsifurrahman/covid19-radiography-database.

[37] 

Kermany D, Zhang K, Goldbaum M. Large dataset of labeled optical coherence tomography (OCT) and chest x-ray images. Mendeley Data, v3. 2018. doi: 10.17632/rscbjbr9sj.3.

[38] 

Jain A, Nandakumar K, Ross A. Score normalization in multimodal biometric systems. Pattern Recognition. 2005; 38(12): 2270–2285.

[39] 

Ahmad SA, Taib MN, Khalid NEA, Taib H. An analysis of image enhancement techniques for dental X-ray image interpretation. International Journal of Machine Learning and Computing. 2012; 2(3): 292.

[40] 

Haralick RM, Shanmugam K, Dinstein IH. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics. 1973; SMC-3(6): 610–621.

[41] 

Zare MR, Seng WC, Mueen A. Automatic Classification of medical X-ray Images. Malaysian Journal of Computer Science. 2013; 26(1): 9–22.

[42] 

MathWorks. Texture Analysis Using the Gray-Level Co-Occurrence Matrix (GLCM) – MATLAB & Simulink; 2020.

[43] 

Dalal N, Triggs B. Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05). Vol. 1. IEEE; 2005. pp. 886–893.

[44] 

scikit-image. Histogram of Oriented Gradients; 2020. https://scikit-image.org/docs/dev/auto_examples/features_detection/plot_hog.html.

[45] 

Xue Z, You D, Candemir S, Jaeger S, Antani S, Long LR, et al. Chest x-ray image view classification. In: 2015 IEEE 28th International Symposium on Computer-Based Medical Systems. IEEE; 2015. pp. 66–71.

[46] 

Kim JK, Park HW. Statistical textural features for detection of microcalcifications in digitized mammograms. IEEE Transactions on Medical Imaging. 1999; 18(3): 231–238.

[47] 

Shree NV, Kumar T. Identification and classification of brain tumor MRI images with feature extraction using DWT and probabilistic neural network. Brain Informatics. 2018; 5(1): 23–30.

[48] 

Leibstein JM, Nel AL. Detecting tuberculosis in chest radiographs using image processing techniques. University of Johannesburg. 2006.

[49] 

Parveen N, Sathik MM. Detection of pneumonia in chest X-ray images. Journal of X-ray Science and Technology. 2011; 19(4): 423–428.

[50] 

Zargari A, Du Y, Heidari M, Thai TC, Gunderson CC, Moore K, et al. Prediction of chemotherapy response in ovarian cancer patients using a new clustered quantitative image marker. Physics in Medicine & Biology. 2018; 63(15): 155020.

[51] 

Fill JA, Fishkind DE. The Moore-Penrose Generalized Inverse for Sums of Matrices. SIAM Journal on Matrix Analysis and Applications. 2000; 21(2): 629–635.

[52] 

Huang GB, Zhu QY, Siew CK. Extreme learning machine: Theory and applications. Neurocomputing. 2006; 70(1–3): 489–501.

[53] 

Hand DJ, Till RJ. A simple generalisation of the area under the ROC curve for multiple class classification problems. Machine Learning. 2001; 45(2): 171–186.

[54] 

Saygılı A. A new approach for computer-aided detection of coronavirus (COVID-19) from CT and X-ray images using machine learning methods. Applied Soft Computing. 2021; 105: 107323.

[55] 

Ribeiro MT, Singh S, Guestrin C. “Why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2016. pp. 1135–1144.