Purchase individual online access for 1 year to this journal.
Price: EUR 135.00
Impact Factor 2024: 0.9
Intelligent Data Analysis provides a forum for the examination of issues related to the research and applications of Artificial Intelligence techniques in data analysis across a variety of disciplines. These techniques include (but are not limited to): all areas of data visualization, data pre-processing (fusion, editing, transformation, filtering, sampling), data engineering, database mining techniques, tools and applications, use of domain knowledge in data analysis, big data applications, evolutionary algorithms, machine learning, neural nets, fuzzy logic, statistical pattern recognition, knowledge filtering, and post-processing.
In particular, preference is given to papers that discuss the development of new AI-related data analysis architectures, methodologies, and techniques and their applications to various domains.
Papers published in this journal are geared heavily towards applications, with an anticipated split of 70% application-oriented papers and the remaining 30% containing more theoretical research. Manuscripts should be submitted in *.pdf format only. Please prepare your manuscripts single-spaced, and include figures and tables in the body of the text where they are referred to. For all enquiries regarding the submission of your manuscript please contact the IDA journal editor: [email protected]
Authors: Wang, Xin | Zhang, Yong | Xu, Junfeng | Gao, Jun
Article Type: Research Article
Abstract: Capturing images through semi-reflective surfaces, such as glass windows and transparent enclosures, often leads to a reduction in visual quality and can adversely affect the performance of computer vision algorithms. As a result, image reflection removal has garnered significant attention among computer vision researchers. With the growing application of deep learning methods in various computer vision tasks, such as super-resolution, inpainting, and denoising, convolutional neural networks (CNNs) have become an increasingly popular choice for image reflection removal. The purpose of this paper is to provide a comprehensive review of learning-based algorithms designed for image reflection removal. Firstly, we provide an overview of the key terminology and essential background concepts in this field. Next, we examine various datasets and data synthesis methods to assist researchers in selecting the most suitable options for their specific needs and targets. We then review existing methods with qualitative and quantitative results, highlighting their contributions and significance in this field. Finally, some considerations about challenges and future scope in image reflection removal techniques are discussed.
Keywords: Deep learning, reflection removal, reflection separation, systematic literature review
DOI: 10.3233/IDA-230904
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-27, 2024
Authors: Devi, M. Shyamala | Aruna, R. | Almufti, Saman | Punitha, P. | Kumar, R. Lakshmana
Article Type: Research Article
Abstract: Bones collaborate with muscles and joints to sustain and maintain our freedom of mobility. The proper musculoskeletal activity of bone protects and strengthens the brain, heart, and lung function. When a bone is subjected to a force greater than its structural capacity, it fractures. Bone fractures should be detected with the appropriate type and should be treated early to avoid acute neurovascular complications. The manual detection of bone fracture may lead to highly delayed complications like malunion, joint stiffness, contractures, myositis ossificans, and avascular necrosis. A proper classification system must be integrated with deep learning technology to classify bone fractures accurately. This motivates us to propose a Systematized Attention Gate UNet (SAG-UNet) that classifies the type of bone fracture with high accuracy. The main contribution of this research is two-fold. The first contribution focuses on dataset preprocessing through feature extraction using unsupervised learning by adapting the Growing Neural Gas (GNG) method. The second contribution deals with refining the supervised learning Attention UNet model that classifies the ten types of bone fracture. The attention gate of the Attention UNet model is refined and applied to the upsampling decoding layer of Attention UNet. The KAGGLE Bone Break Classification dataset was processed to extract only the essential features using GNG extraction. The quantized significant feature RGB X-ray images were divided into 900 training and 230 testing images in the ratio of 80:20. The training images were fitted with existing CNN models like DenseNet, VGG, AlexNet, MobileNet, EfficientNet, Inception, Xception, UNet and Attention UNet to choose the best CNN model. Experimental results show that Attention UNet offers classification of bone fractures with an accuracy of 89% when testing bone break images. The Attention UNet was then chosen to refine the attention gate of the decoding upsampling layer that occurs after the encoding layer. The attention gate of the proposed SAG-UNet forms the gating coefficient from the input feature map and gate signal. The gating coefficient is then processed with batch normalization that centers the aligned features in the active region, thereby leaving the focus on the unaligned weights of feature maps. Then, the ReLU activation function is applied to introduce nonlinearity in the aligned features, thereby learning the complex representation in the feature vector. Then, dropout is used to exclude the error noise in the aligned weights of the feature map. Then, a 1 × 1 linear convolution transformation is applied to form the vector concatenation-based attention feature map. This vector is passed through the sigmoid activation to create the attention coefficient feature map with weights assigned as ‘1’ for the aligned features. The attention coefficient feature map is grid-resampled using trilinear interpolation to form the spatial attention weight map, which is passed to the skip connection of the next decoding layer. The implementation results reveal that the proposed SAG-UNet deep learning model classifies the bone fracture types with a high accuracy of 98.78% compared to the existing deep learning models.
Keywords: Activation, attention gate, CNN, classification, convolution, dropout, feature map, normalization, ReLU
DOI: 10.3233/IDA-240431
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-29, 2024
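The attention-gate pipeline described in the abstract above (gating coefficient from the skip features and gate signal, then batch normalization, ReLU, dropout, a 1 × 1 linear convolution, a sigmoid, and interpolation back to the skip resolution) can be sketched roughly as follows. This is a minimal PyTorch illustration under our own shape assumptions; it is not the authors' SAG-UNet implementation, and the class and parameter names are hypothetical.

```python
# Hedged sketch of an attention gate following the order of operations in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGateSketch(nn.Module):
    def __init__(self, in_channels, gate_channels, inter_channels, dropout_p=0.1):
        super().__init__()
        # Downsample skip features to the gate resolution (assumes the gate signal
        # is at half the spatial size, as in the standard Attention U-Net layout).
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=2, stride=2)
        self.phi_g = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.bn = nn.BatchNorm2d(inter_channels)                 # centres the aligned features
        self.dropout = nn.Dropout2d(dropout_p)                   # drops noisy aligned weights
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)   # 1x1 linear transformation

    def forward(self, x, g):
        coeff = self.theta_x(x) + self.phi_g(g)                  # gating coefficient
        coeff = self.dropout(F.relu(self.bn(coeff)))             # non-linearity + noise suppression
        attn = torch.sigmoid(self.psi(coeff))                    # weights near 1 for aligned features
        # Resample back to the skip resolution (bilinear here; the paper states trilinear).
        attn = F.interpolate(attn, size=x.shape[2:], mode="bilinear", align_corners=False)
        return x * attn                                          # re-weighted skip connection
```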
Authors: Anbarasan, M. | Ramesh, K.
Article Type: Research Article
Abstract: The pharmaceutical supply chain, which ensures that drugs are accessible to patients in a trusted process, is a complex arrangement in the healthcare industry. For that, a secure pharmachain framework is proposed. Primarily, the users register their details. Then, the details are converted into cipher text and stored in the blockchain. If a user requests an order, the manufacturer receives the request, and the order is handed to the distributor. Labeling is performed through Hypergeometric Distribution Centroid Selection K-Medoids Clustering (HDCS-KMC) to track the drugs. The healthcare pharmachain architecture uses IoT to control the supply chain and provide safe medication tracking. The framework includes security with a classifier and block mining consensus method, boosts performance with a decision controller, and protects user and medication information with encryption mechanisms. After that, the drugs are assigned to vehicles, where the vehicle ID and Internet of Things (IoT) sensor data are collected and pre-processed. Afterward, the pre-processed data is analyzed in the fog node by utilizing a decision controller. The status ID is then generated based on the vehicle ID and location. The generated status ID undergoes fragmentation, encryption, and block mining processes. If a user requests to view the drug’s status ID, then the user needs to be authenticated. The user’s forking behavior and request activities are extracted and given to the classifier present in the block-mining consensus algorithm for authentication purposes. Block mining happens after authentication, thereby providing the status ID. Furthermore, the framework demonstrates effectiveness in identifying attacks, with a low False Positive Rate (FPR) of 0.022483% and a low False Negative Rate (FNR) of 1.996008%. Additionally, compared to traditional methods, the suggested strategy exhibits good precision (97.869%), recall (97.0039%), accuracy (98%), and F-measure (97.999%).
Keywords: Double Transposed-Prime Key-Columnar Transposition Cipher (DT-PK-CTC), Internet of Things (IoT), Hypergeometric Distribution Centroid Selection K-Medoids Clustering (HDCS-KMC), healthcare, pharmachain, Radial Basis Function (RBF)
DOI: 10.3233/IDA-240087
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-25, 2024
Authors: S, Sharmiladevi | S, Siva Sathya
Article Type: Research Article
Abstract: Air pollution is an alarming problem in many cities and countries around the globe. The ability to forecast air pollutant levels plays a crucial role in implementing necessary prevention measures to curb its effects in advance. There are many statistical, machine learning, and deep learning models available to predict air pollutant values, but only a limited number of models take into account the spatio-temporal factors that influence pollution. In this study, a novel deep learning model augmented with Spatio-Temporal Co-Occurrence Patterns (STEEP) is proposed. The deep learning model uses the Closed Spatio-Temporal Co-Occurrence Pattern mining (C-STCOP) algorithm to extract non-redundant/closed patterns and the Diffusion Convolution Recurrent Neural Network (DCRNN) for time series prediction. By constructing a graph based on the co-occurrence patterns obtained from C-STCOP, the proposed model effectively addresses the spatio-temporal association among monitoring stations. Furthermore, the sequence-to-sequence encoder-decoder architecture captures the temporal dependencies within the time series data. The STEEP model is evaluated using the Delhi air pollutants dataset and shows an average improvement of 8%–13% in the RMSE, MAE and MAPE metrics compared to the baseline models.
Keywords: PM2.5 pollutant, spatio-temporal patterns, deep learning, time series prediction, Delhi, air pollution
DOI: 10.3233/IDA-240028
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
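As a rough illustration of the graph-construction step described in the abstract above, the sketch below turns hypothetical station co-occurrence counts (as a C-STCOP-style miner might produce) into the row-normalized transition matrix that a DCRNN-style diffusion convolution operates on. The function names and the counts are assumptions for illustration, not the authors' code.

```python
# Minimal sketch: co-occurrence counts between monitoring stations -> diffusion matrix.
import numpy as np

def cooccurrence_to_transition(counts: np.ndarray) -> np.ndarray:
    """counts[i, j]: how often stations i and j appear in the same closed pattern."""
    adj = counts + counts.T                      # symmetrize the co-occurrence evidence
    np.fill_diagonal(adj, 0)
    row_sums = adj.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1                  # isolated stations keep a zero row
    return adj / row_sums                        # D^-1 A, the forward random-walk matrix

def diffusion_step(transition: np.ndarray, signals: np.ndarray, K: int = 2) -> np.ndarray:
    """One K-hop diffusion aggregation of pollutant readings (stations x features, float)."""
    out, hop = signals.copy(), signals
    for _ in range(K):
        hop = transition @ hop                   # propagate readings one hop along the graph
        out = out + hop
    return out
```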
Authors: Jain, Sachin | Jain, Vishal
Article Type: Research Article
Abstract: There has been extensive use of machine learning (ML) based tools for mathematical symbol and phrase categorization and prediction. Aiming to thoroughly analyze the existing methods for categorizing brain tumors, this paper considers both machine-learning and non-machine-learning approaches. From 2013 to 2023, the authors compiled and reviewed research papers on brain tumor detection. Wiley, IEEE Xplore, ScienceDirect, Scopus, the ACM Digital Library, and others provide the relevant data. A systematic literature review examines the efficacy of research methodologies over the last ten years or more by compiling relevant publications and studies from various sources. Accuracy, sensitivity, specificity, and computing efficiency are some of the criteria that researchers use to evaluate these methods. The availability of labeled data, the required degree of automation and accuracy in the classification process, and the unique dataset are generally the deciding factors in the method choice. This work integrates previous research findings to summarize the current state of brain tumor categorization. This paper summarizes 169 research papers on brain tumor detection published between 2013 and 2023 and explores the application and development of machine learning methods in brain tumor detection, which has significant research implications and value in the field of brain tumor classification research. All research findings of previous studies are arranged in this paper in a research question-and-answer format.
Keywords: Brain tumor classification, machine learning, deep learning, SVM, CNN
DOI: 10.3233/IDA-240069
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-32, 2024
Authors: He, Xinyu | Kan, Manfei | Ren, Yonggong
Article Type: Research Article
Abstract: Relation extraction is one of the core tasks of natural language processing, which aims to identify entities in unstructured text and judge the semantic relationships between them. In traditional methods, the extraction of rich features and the judgment of complex semantic relations are inadequate. Therefore, in this paper, we propose a relation extraction model, HAGCN, based on a heterogeneous graph convolutional neural network and a graph attention mechanism. We construct two different types of nodes, words and relations, in a heterogeneous graph convolutional neural network, which are used to extract different semantic types and attributes and further extract contextual semantic representations. By incorporating the graph attention mechanism to distinguish the importance of different information, the model gains stronger representation ability. In addition, an information update mechanism is designed in the model. Relation extraction is performed after iteratively fusing the node semantic information to obtain a more comprehensive node representation. The experimental results show that the HAGCN model achieves good relation extraction performance, and its F1 value reaches 91.51% on the SemEval-2010 Task 8 dataset. In addition, the HAGCN model also performs well on the WebNLG dataset, verifying the generalization ability of the model.
Keywords: Relation extraction, heterogeneous graph convolution, graph attention, information update
DOI: 10.3233/IDA-240083
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
Authors: Anwar, Muhammad | He, Zhiquan | Cao, Wenming
Article Type: Research Article
Abstract: At the core of deep learning-based Deformable Medical Image Registration (DMIR) lies a strong foundation. Essentially, the network compares features in two images to identify their mutual correspondence, which is necessary for precise image registration. In this paper, we use three novel techniques to improve the registration process and enhance the alignment accuracy between medical images. First, we propose cross attention over multiple layers of pairs of images, allowing us to extract the correspondences between them at different levels and improve registration accuracy. Second, we introduce a skip connection with residual blocks between the encoder and decoder, helping information flow and enhancing overall performance. Third, we propose the utilization of cascade attention with residual block skip connections, which enhances information flow and empowers feature representation. Experimental results on the OASIS and LPBA40 datasets show the effectiveness and superiority of our proposed mechanism. These novelties contribute to the enhancement of unsupervised-learning-based 3D DMIR, with potential implications in clinical practice and research.
Keywords: Deformable medical image registration, similarity measures, deep learning, convolutional neural networks
DOI: 10.3233/IDA-230692
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-19, 2024
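A minimal sketch of the kind of cross attention between two images' feature maps that the abstract above describes is shown below. It is a single-head illustration under our own shape assumptions, not the authors' architecture; the module and argument names are hypothetical.

```python
# Illustrative two-way cross attention between flattened feature maps of two images.
import torch
import torch.nn as nn

class CrossAttention2Way(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Linear(channels, channels)
        self.k = nn.Linear(channels, channels)
        self.v = nn.Linear(channels, channels)
        self.scale = channels ** -0.5

    def attend(self, a, b):
        # a attends to b: each token of a gathers the best-matching content of b.
        attn = torch.softmax(self.q(a) @ self.k(b).transpose(-2, -1) * self.scale, dim=-1)
        return a + attn @ self.v(b)               # residual keeps the original features

    def forward(self, feat_moving, feat_fixed):
        """feat_*: (batch, tokens, channels) flattened feature maps of the two images."""
        return self.attend(feat_moving, feat_fixed), self.attend(feat_fixed, feat_moving)
```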
Authors: Strada, Silvia | Costantini, Emanuele | Formentin, Simone | Savaresi, Sergio M.
Article Type: Research Article
Abstract: The Usage-Based Insurance paradigm, which has received a lot of attention in recent years, envisages computing the car policy premium based on accident risk probability, evaluated by observing the past driving history and habits. However, Usage-Based Insurance strategies are usually based on simple empirical decision rules built on travelled distance. The development of intelligent systems for smart risk prediction using the stored overall driving behaviour, without the need for other insurance or socio-demographic information, is still an open challenge. This work aims at exploring a comprehensive machine learning-based approach solely based on driving-related data of private vehicles. The anonymized dataset employed in this study is provided by the telematics company UnipolTech, and contains densely measured space/time data related to trips of almost 100,000 vehicles uniformly spread across the Italian territory, recorded every 2 km by on-board telematics devices (black boxes), from February 2018 to February 2020. An innovative feature engineering process is proposed, with the aim of uncovering novel informative quantities able to disclose complex aspects of driving behaviour. Recent and powerful learning techniques are explored to develop advanced predictive models, able to provide a reliable accident probability for each vehicle, automatically managing the critical imbalance intrinsically peculiar to this kind of dataset.
Keywords: Mobility data, usage-based insurance, machine learning, driving behavior, accident risk prediction
DOI: 10.3233/IDA-230971
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: P, ThamilSelvi C | S, Vinoth Kumar | Asaad, Renas Rajab | Palanisamy, Punitha | Rajappan, Lakshmana Kumar
Article Type: Research Article
Abstract: Technological developments in medical image processing have created a state-of-the-art framework for accurately identifying and classifying brain tumors. To improve the accuracy of brain tumor segmentation, this study introduces VisioFlow FusionNet, a robust neural network architecture that combines the best features of DeepVisioSeg and SegFlowNet. The proposed system uses deep learning to identify the cancer site from medical images and provides doctors with valuable information for diagnosis and treatment planning. This combination provides a synergistic effect that improves segmentation performance and addresses challenges encountered across various tumor shapes and sizes. In parallel, robust brain tumor classification is achieved using NeuraClassNet, a classification component optimized with a dedicated catfish optimizer. NeuraClassNet’s convergence and generalization capabilities are powered by the catfish optimizer, which draws inspiration from the adaptive properties of aquatic predators. Complementing a comprehensive diagnostic pipeline, this classification module helps clinicians accurately classify brain tumors based on various morphological and histological features. The proposed framework outperforms current approaches in terms of segmentation accuracy (99.2%) and loss (2%) without overfitting.
Keywords: VisioFlow FusionNet, brain tumor segmentation, NeuraClassNet, cat fish optimizer, medical image analysis, deep learning
DOI: 10.3233/IDA-240108
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-26, 2024
Authors: Megala, G. | Swarnalatha, P.
Article Type: Research Article
Abstract: Video grounding intends to perform temporal localization in multimedia information retrieval. The temporal bounds of the target video span are determined for the given input query. A novel interactive multi-head self-attention (IMSA) transformer is proposed to localize an unseen moment in the untrimmed video for the given image. A new semantic-trained self-supervised approach is considered in this paper to perform cross-domain learning to match the image query to the video segment. It normalizes the convolution function, enabling efficient correlation and collection of semantically related video segments across time based on the image query. A double hostile contrastive learning method with Gaussian distribution parameters is advanced to learn video representations. The proposed approach performs dynamically on various video components to achieve exact semantic synchronization and localization between queries and video. In the proposed approach, the IMSA model localizes frames well compared to other approaches. Experiments on benchmark datasets show that the proposed model can significantly increase temporal grounding accuracy. Moment occurrences are identified in the video with start and end boundaries, achieving an average recall of 86.45% and a mAP of 59.3%.
Keywords: Contrastive learning, gaussian parameter, self-attention transformer, temporal localization, video grounding
DOI: 10.3233/IDA-240138
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: K, Vijay | Jayashree, K.
Article Type: Research Article
Abstract: Content-Based Image Retrieval (CBIR) uses complicated algorithms to analyze visual attributes and retrieve relevant photos from large databases. CBIR is essential to a privacy-preserving feature extraction and protection method for outsourced picture representation. SecureImageSec combines essential methods with the system’s key entities to ensure secure, private and protected image feature processing during outsourcing. For the system to be implemented effectively, these techniques must be seamlessly integrated across critical entities, such as the client, the outsourced cloud server, the secure feature protection component, the privacy-preserving communication component, access control and authorization, and integration and system evaluation. The client entity initiates outsourcing using advanced encryption techniques to protect privacy. SecureImageSec protects outsourced data by using cutting-edge technologies like Fully Homomorphic Encryption (FHE) and Secure Multi-Party Computation (SMPC). Cloud servers hold secure feature protection entities and protect outsourced features’ privacy and security. SecureImageSec uses AES and FPE to protect the data format. SecureImageSec’s cloud-outsourced privacy-preserving communication uses SSL/TLS and QKD to protect data transmission. Attribute-Based Encryption (ABE) and Functional Encryption (FE) in SecureImageSec limit access to outsourced features based on user attributes and allow fine-grained access control over decrypted data. SecureImageSec’s Information Leakage Rate (ILR) of 0.02 for a 1000-feature dataset shows its efficacy. SecureImageSec also achieves 4.5 bits of entropy, ensuring the encrypted feature set’s cryptographic strength and randomness. Finally, SecureImageSec provides secure and private feature extraction and protection, including CBIR capabilities, for picture representation outsourcing.
Keywords: Content-based image retrieval, Homomorphic Encryption, SecureImageSec, Quantum Key Distribution, Cloud computing
DOI: 10.3233/IDA-240265
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-22, 2024
Authors: Shantal, Mohammed | Othman, Zalinda | Abu Bakar, Azuraliza
Article Type: Research Article
Abstract: Missing data is one of the challenges a researcher encounters while attempting to draw information from data. The first step in solving this issue is to have the data ready for processing. Much effort has been made in this area; removing instances with missing data is a popular method for handling missing data, but it has drawbacks, including bias, which negatively impacts the results. How missing values are handled depends on several factors, including data types, missing rates, and missing mechanisms. These cover missing data patterns as well as missing at random, missing completely at random, and missing not at random. Other suggestions include using numerous imputation techniques divided into various categories, such as statistical and machine learning methods. One strategy to improve a model’s output is to weight the feature values to improve the performance of classification or regression approaches. This research developed a new imputation technique called correlation coefficient min-max weighted imputation (CCMMWI). It combines the correlation coefficient and min-max normalization techniques to balance the feature values. The proposed technique seeks to increase the contribution of features by considering how those elements relate to the desired functionality. We evaluated several established techniques to assess the findings, including the statistical techniques mean and EM imputation, and machine learning imputation techniques, including k-NNI and MICE. The evaluation also used the imputation techniques CBRL, CBRC, and ExtraImpute. We use various sizes of datasets, missing rates, and random patterns. To compare the imputed datasets and original data, we finally provide the findings and assess them using the root mean squared error (RMSE), mean absolute error (MAE), and R². According to the findings, the proposed CCMMWI performs better than most other solutions in practically all missing-rate scenarios.
Keywords: Missing data, imputation method, feature weighting, correlation coefficient, data standardization, Min-Max normalization, classification method, regression method
DOI: 10.3233/IDA-230140
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
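The abstract above does not spell out the exact CCMMWI formula, so the following is only one plausible reading of the idea, shown for illustration: features are min-max normalized, weighted by their absolute correlation with the target, and missing entries are filled from the nearest complete rows under that weighted distance. The function name and the choice of a k-nearest-rows fill are our assumptions, not the authors' method.

```python
# Hypothetical correlation-weighted, min-max-normalized imputation sketch.
import numpy as np

def ccmmwi_like_impute(X: np.ndarray, y: np.ndarray, k: int = 5) -> np.ndarray:
    """Assumes at least k fully observed rows exist; X has NaNs for missing entries."""
    X = X.astype(float).copy()
    col_min, col_max = np.nanmin(X, axis=0), np.nanmax(X, axis=0)
    Xn = (X - col_min) / np.where(col_max > col_min, col_max - col_min, 1.0)  # min-max scaling

    # Feature weights: |correlation| of each observed feature with the target.
    w = np.array([abs(np.corrcoef(Xn[~np.isnan(Xn[:, j]), j], y[~np.isnan(Xn[:, j])])[0, 1])
                  for j in range(X.shape[1])])
    w = np.nan_to_num(w, nan=0.0)

    complete = ~np.isnan(Xn).any(axis=1)
    for i in np.where(~complete)[0]:
        miss = np.isnan(Xn[i])
        # Weighted distance to complete rows over the observed coordinates only.
        d = np.sqrt((((Xn[complete][:, ~miss] - Xn[i, ~miss]) ** 2) * w[~miss]).sum(axis=1))
        nearest = np.argsort(d)[:k]
        X[i, miss] = X[complete][nearest][:, miss].mean(axis=0)   # fill on the original scale
    return X
```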
Authors: Liao, Shu-Hsien | Widowati, Retno | Liao, Shu-Ting
Article Type: Research Article
Abstract: A recommender system is an information filtering system used to predict a user’s rating or preference for an item. Dietary preferences are often influenced by various etiquettes and cultures, such as appetite, the selection of ingredients, menu development, cooking methods, choice of tableware, seating arrangement of diners, order of eating, etc. A food delivery service is a courier service that delivers food to customers from restaurants, stores, or independent delivery companies. With the continuous advances in information systems and data science, recommender systems are gradually developing towards intentional and behavioral recommendations. Behavioral recommendation is an extension of peer-to-peer recommendation, where merchants find the people who want to buy the product and deliver it. Intentional recommendation is a mindset that seeks to understand the life of consumers by continuously collecting information about their actions on the internet and displaying events and information that match the life and purchase preferences of consumers. This study considers that data targeting is a method by which food delivery service platforms can understand consumers’ dietary preferences and individual lifestyles, so that the food delivery service platform can effectively recommend food to the consumer. Thus, this study implements two-stage data mining analytics, including clustering analysis and association rules, on Taiwanese food consumers (n = 2,138) to investigate dietary and food delivery service behaviors and preferences and to find knowledge profiles/patterns/rules for food intentional and behavioral recommendations. Finally, discussion and implications are presented.
Keywords: Dietary preference, food delivery services, food intentional and behavioral recommendation, clustering analysis, association rules
DOI: 10.3233/IDA-240664
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-29, 2024
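The two-stage analysis described in the abstract above (clustering followed by association-rule mining) can be sketched with a few scikit-learn and pandas calls. The survey columns, thresholds, and toy data below are hypothetical placeholders, not the study's dataset or parameters.

```python
# Sketch: cluster survey respondents, then mine simple association rules inside a cluster.
from itertools import permutations
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical one-hot survey data: rows = consumers, columns = dietary/ordering attributes.
df = pd.DataFrame({
    "orders_late_night":  [1, 0, 1, 1, 0, 1],
    "prefers_vegetarian": [0, 1, 0, 0, 1, 0],
    "uses_delivery_app":  [1, 1, 1, 0, 1, 1],
    "buys_bubble_tea":    [1, 0, 1, 1, 0, 1],
})

# Stage 1: cluster consumers into behavioural segments.
df["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(df)

# Stage 2: single-antecedent association rules within one segment.
segment = df[df["segment"] == 0].drop(columns="segment")
for a, b in permutations(segment.columns, 2):
    support = ((segment[a] == 1) & (segment[b] == 1)).mean()
    confidence = support / max(segment[a].mean(), 1e-9)
    if support >= 0.5 and confidence >= 0.7:
        print(f"{a} -> {b}  (support={support:.2f}, confidence={confidence:.2f})")
```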
Authors: Sethi, Priyanshi | Bhardwaj, Rhythm | Sharma, Nonita | Sharma, Deepak Kumar | Srivastava, Gautam
Article Type: Research Article
Abstract: Neural style transfer is used as an optimization technique that combines two different images – a content image and a style reference image – to produce an output image that retains the appearance of the content image but has been modified to match the style of the style reference image. This is achieved by fine-tuning the output image so that its content statistics match those of the content image and its style statistics match those of the style reference image. These statistics are extracted from the images using a convolutional network. Primitive models such as WCT were improved upon by models such as PhotoWCT, whose spatial and temporal limitations were improved upon by Deep Photo Style Transfer. Eventually, wavelet transforms were introduced to perform photorealistic style transfer. A wavelet-corrected transfer based on whitening and colouring transforms, i.e., WCT2, was proposed that allowed the preservation of core content and eliminated the need for any post-processing steps and constraints. A model called Domain-Aware Universal Style Transfer also came into the picture; it supports both artistic and photorealistic style transfer. This study provides an overview of the neural style transfer technique. The recent advancements and improvements in the field, including the development of multi-scale and adaptive methods and the integration of semantic segmentation, are discussed and elaborated upon. Experiments have been conducted to determine the roles of encoder-decoder architecture and Haar wavelet functions. The optimum levels at which these can be leveraged for effective style transfer are ascertained. The study also highlights the contrast between VGG-16 and VGG-19 structures and analyzes various performance parameters to establish which works more efficiently for particular use cases. On comparing quantitative metrics across Gatys, AdaIN, and WCT, a gradual upgrade was seen across the models, as AdaIN performed 99.92 percent better than the primitive Gatys model in terms of processing time. Over 1000 iterations, we found that VGG-16 and VGG-19 have comparable style loss metrics, but there is a difference of 73.1 percent in content loss. VGG-19, however, displays better overall performance since it can keep both content and style losses at bay.
Keywords: Content image, style image, VGG, photorealism
DOI: 10.3233/IDA-230765
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
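For readers comparing the models discussed in the abstract above, the content and style losses that the Gatys-style formulation optimizes can be sketched as follows: content loss is an MSE between VGG feature maps, and style loss an MSE between their Gram matrices. The VGG-19 layer index and the use of a single layer are illustrative assumptions; AdaIN, WCT, and the wavelet-based variants replace this optimization with feed-forward feature transforms.

```python
# Minimal sketch of Gatys-style content and style losses on VGG-19 features.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

features = vgg19(weights="DEFAULT").features.eval()
for p in features.parameters():
    p.requires_grad_(False)                      # only the output image is optimized

def gram(feat):                                  # (B, C, H, W) -> (B, C, C) style statistics
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def vgg_feat(x, layer=22):                       # index 22 ~ relu4_2 of VGG-19 (our assumption)
    for i, m in enumerate(features):
        x = m(x)
        if i == layer:
            return x

def nst_losses(output, content_img, style_img):
    content_loss = F.mse_loss(vgg_feat(output), vgg_feat(content_img))
    style_loss = F.mse_loss(gram(vgg_feat(output)), gram(vgg_feat(style_img)))
    return content_loss, style_loss               # weight and sum these during optimisation
```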
Authors: Chen, Hongwei | Zhang, Man | Liu, Fangrui | Chen, Zexi
Article Type: Research Article
Abstract: Due to the rapid development of industrial manufacturing technology, modern mechanical equipment involves complex operating conditions and structural characteristics of hardware systems. Therefore, the state of components directly affects the stable operation of mechanical parts. To ensure engineering reliability improvement and economic benefits, bearing diagnosis has always been a concern in the field of mechanical engineering. Therefore, this article studies an effective machine learning method to extract useful fault feature information from actual bearing vibration signals and identify bearing faults. Firstly, variational mode decomposition decomposes the source signal into several intrinsic mode functions according to the actual situation. The vibration signal of the bearing is decomposed and reconstructed. By iteratively solving the variational model, the optimal mode functions can be obtained, which better describe the characteristics of the original signal. Then, the feature subset is efficiently searched using the wrapper method of feature selection and the improved binary salp swarm algorithm (IBSSA) to effectively reduce redundant feature vectors, thereby accurately extracting fault feature frequency signals. Finally, support vector machines are used to classify and identify fault types, and the advantages of support vector machines are verified through extensive experiments, improving the ability to globally search for potential solutions. The experimental findings demonstrate the superior fault recognition performance of the IBSSA algorithm, with a highest recognition accuracy of 97.5%. By comparing different recognition methods, it is concluded that this method can accurately identify bearing failures.
Keywords: Fault diagnosis, salp swarm algorithm, feature selection, variational mode decomposition
DOI: 10.3233/IDA-230994
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-26, 2024
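The wrapper-style feature selection described in the abstract above can be illustrated with a small sketch: a binary mask selects a candidate feature subset, and cross-validated SVM accuracy serves as the fitness that the optimizer would maximize. The salp swarm update itself is omitted, and the synthetic data and names below are assumptions, not the paper's experimental setup.

```python
# Sketch of a wrapper fitness function: binary feature mask -> cross-validated SVM accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                  # e.g. 12 candidate fault features per sample
y = rng.integers(0, 4, size=200)                # four hypothetical bearing conditions

def fitness(mask: np.ndarray) -> float:
    """Cross-validated SVM accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=5).mean()

# The optimizer would evolve a population of masks; here we just score one candidate.
candidate = rng.integers(0, 2, size=X.shape[1])
print(f"selected {candidate.sum()} features, fitness = {fitness(candidate):.3f}")
```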
Authors: Wang, Qian | Li, Tie-Qiang | Sun, Haicheng | Yang, Hao | Li, Xia
Article Type: Research Article
Abstract: Magnetic Resonance Imaging (MRI) is a cornerstone of modern medical diagnosis due to its ability to visualize intricate soft tissues without ionizing radiation. However, noise artifacts significantly degrade image quality, hindering accurate diagnosis. Traditional denoising methods struggle to preserve details while effectively reducing noise. While deep learning approaches show promise, they often focus on local information, neglecting long-range dependencies. To address these limitations, this study proposes the deep and shallow feature fusion denoising network (DAS-FFDNet) for MRI denoising. DAS-FFDNet combines shallow and deep feature extraction with a tailored fusion module, effectively capturing both local and global image information. This approach surpasses existing methods in preserving details and reducing noise, as demonstrated on publicly available T1-weighted and T2-weighted brain image datasets. The proposed model offers a valuable tool for enhancing MRI image quality and subsequent analyses.
Keywords: Magnetic resonance imaging (MRI), image denoising, deep learning, UNet, convolutional neural networks (CNNs)
DOI: 10.3233/IDA-240613
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
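As a rough picture of the shallow/deep fusion idea described in the abstract above, the sketch below combines a shallow detail branch with a deeper dilated-convolution branch through a 1 × 1 fusion layer and predicts a residual noise estimate. The architecture, widths, and names are our illustrative assumptions, not DAS-FFDNet itself.

```python
# Illustrative shallow/deep feature fusion module for image denoising.
import torch
import torch.nn as nn

class ShallowDeepFusion(nn.Module):
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.shallow = nn.Sequential(                       # keeps local detail
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True))
        self.deep = nn.Sequential(                          # dilated convs widen the receptive field
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=4, dilation=4), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * width, channels, 1)       # 1x1 fusion back to image space

    def forward(self, noisy):
        fused = self.fuse(torch.cat([self.shallow(noisy), self.deep(noisy)], dim=1))
        return noisy - fused                                # residual: predict and subtract the noise

denoised = ShallowDeepFusion()(torch.randn(1, 1, 64, 64))   # toy T1/T2 slice stand-in
```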
Authors: S, Sadesh | Chandrasekaran, Gokul | Thangaraj, Rajasekaran | Kumar, Neelam Sanjeev
Article Type: Research Article
Abstract: The promising Network-on-Chip (NoC) model replaces the existing system-on-chip (SoC) model for complex VLSI circuits. Testing the embedded cores using NoC incurs additional costs in these SoC models. NoC models consist of network interface controllers, Internet Protocol (IP) data centers, routers, and network connections. Technological advancements enable the production of more complex chips, but longer testing times pose a potential problem. NoC packet switching networks provide high-performance interconnection, a significant benefit for IP cores. A multi-objective approach is created by integrating the benefits of the Whale Optimization Algorithm (WOA) and Grey Wolf Optimization (GWO). In order to minimize the duration of testing, the approach implements optimization algorithms that are predicated on the behavior of grey wolves and whales. The P22810 and D695 benchmark circuits are under consideration. We compare the test time with existing optimization techniques. We assess the effectiveness of the suggested hybrid WOA-GWO algorithm using fourteen established benchmark functions and an NP-hard problem. The proposed method minimizes the time needed to test the P22810 benchmark circuit by 69%, 46%, 60%, 19%, and 21% compared to the Modified Ant Colony Optimization, Modified Artificial Bee Colony, WOA, and GWO algorithms. In the same vein, the proposed method reduces the testing time for the D695 benchmark circuit by 72%, 49%, 63%, 21%, and 25% in comparison to the same algorithms. We experimented to determine the time savings achieved by adhering to the suggested procedure throughout the testing process.
Keywords: Test time, whale optimization algorithm, test access mechanism, grey wolf optimization, test scheduling, network-on-chip
DOI: 10.3233/IDA-240878
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-20, 2024
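The hybrid update underlying a WOA-GWO combination can be sketched as below: with some probability a candidate solution follows the three leading grey wolves, otherwise it spirals around the best solution found so far, as in the whale algorithm. This is a generic toy formulation of the two textbook update rules, not the paper's test scheduler; the encoding and parameter values are assumptions.

```python
# Toy hybrid WOA-GWO position update for a real-valued encoding of a test schedule.
import numpy as np

rng = np.random.default_rng(1)

def hybrid_step(pos, alpha, beta, delta, best, a, p=0.5, b=1.0):
    if rng.random() < p:
        # GWO: move toward the average of the three leading wolves.
        leaders = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(pos.shape) - a
            C = 2 * rng.random(pos.shape)
            leaders.append(leader - A * np.abs(C * leader - pos))
        return np.mean(leaders, axis=0)
    # WOA: logarithmic spiral around the best solution found so far.
    l = rng.uniform(-1, 1, size=pos.shape)
    return np.abs(best - pos) * np.exp(b * l) * np.cos(2 * np.pi * l) + best

# One update for a hypothetical 10-dimensional core/test-wrapper assignment vector.
pos = rng.random(10)
new_pos = hybrid_step(pos, pos * 0.9, pos * 0.95, pos * 1.05, pos * 0.9, a=1.5)
```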
Authors: Wu, Renhui | Xu, Hui | Rui, Xiaobin | Wang, Zhixiao
Article Type: Research Article
Abstract: With the rapid development and popularization of smart mobile devices, users tend to share their visited points-of-interest (POIs) on the network with attached location information, which forms a location-based social network (LBSN). LBSNs contain a wealth of valuable information, including the geographical coordinates of POIs and the social connections among users. Nowadays, many trust-enhanced approaches have fused the trust relationships of users together with other auxiliary information to provide more accurate recommendations. However, in the traditional trust-aware approaches, the embedding processes of the information on different graphs with different properties (e.g., the user-user graph is a homogeneous graph, the user-POI graph is a heterogeneous graph) are independent of each other, and different embedding information is directly fused together without guidance, which limits their performance. More effective information fusion strategies are needed to improve the performance of trust-enhanced recommendation. To this end, we propose a Trust Enhanced POI recommendation approach with Collaborative Learning (TECL) to merge geographic information and social influence. Our proposed model integrates two modules, a GAT-based graph autoencoder as the trust relationship embedding module and a multi-layer deep neural network as the user-POI graph learning module. By applying a collaborative learning strategy, these two modules can interact with each other. The trust embedding module can guide the selection of the user's potential features, and in turn the user-POI graph learning module enhances the embedding process of trust relationships. Different information is fused through the two-way interaction of information, instead of travelling in one direction. Extensive experiments are conducted using real-world datasets, and the results illustrate that our suggested approach outperforms state-of-the-art methods.
Keywords: Collaborative learning, graph attention network, location-based social network, point-of-interest recommendation
DOI: 10.3233/IDA-230897
Citation: Intelligent Data Analysis, vol. Pre-press, no. Pre-press, pp. 1-19, 2024
IOS Press, Inc.
6751 Tepper Drive
Clifton, VA 20124
USA
Tel: +1 703 830 6300
Fax: +1 703 830 2300
[email protected]
For editorial issues, like the status of your submitted paper or proposals, write to [email protected]
IOS Press
Nieuwe Hemweg 6B
1013 BG Amsterdam
The Netherlands
Tel: +31 20 688 3355
Fax: +31 20 687 0091
[email protected]
For editorial issues, permissions, book requests, submissions and proceedings, contact the Amsterdam office [email protected]
Inspirees International (China Office)
Ciyunsi Beili 207(CapitaLand), Bld 1, 7-901
100025, Beijing
China
Free service line: 400 661 8717
Fax: +86 10 8446 7947
[email protected]
For editorial issues, like the status of your submitted paper or proposals, write to [email protected]
If you need help with publishing or have any suggestions, please email: [email protected]