Impact Factor 2023: 2
The purpose of the Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology is to foster advancements of knowledge and help disseminate results concerning recent applications and case studies in the areas of fuzzy logic, intelligent systems, and web-based applications among working professionals and professionals in education and research, covering a broad cross-section of technical disciplines.
The journal will publish original articles on current and potential applications, case studies, and education in intelligent systems, fuzzy systems, and web-based systems for engineering and other technical fields in science and technology. The journal focuses on the disciplines of computer science, electrical engineering, manufacturing engineering, industrial engineering, chemical engineering, mechanical engineering, civil engineering, engineering management, bioengineering, and biomedical engineering. The scope of the journal also includes developing technologies in mathematics, operations research, technology management, the hard and soft sciences, and technical, social and environmental issues.
Authors: Senthamil Selvi, M. | Senthamizh Selvi, R. | Subbaiyan, Saranya | Murshitha Shajahan, M.S.
Article Type: Research Article
Abstract: Accurate prediction of grid loss in power distribution networks is pivotal for efficient energy management and pricing strategies. Traditional forecasting approaches often struggle to capture the complex temporal dynamics and external influences inherent in grid loss data. In response, this research presents a novel hybrid time-series deep learning model: Gated Recurrent Units with Temporal Convolutional Networks (GRU-TCN), designed to enhance grid loss prediction accuracy. The proposed model integrates the temporal sensitivity of GRU with the local context awareness of TCN, exploiting their complementary strengths. A learnable attention mechanism fuses the outputs of both architectures, enabling the model to discern significant features for accurate prediction. The model is evaluated using well-established metrics across distinct temporal phases: training, testing, and future projection. Results show encouraging figures for mean absolute error, root mean squared error, and mean absolute percentage error, demonstrating the model’s capacity to capture both long-term trends and transitory patterns. The GRU-TCN hybrid model represents a pioneering approach to power grid loss prediction, offering a flexible and precise tool for energy management. This research not only advances predictive accuracy but also lays the foundation for a smarter and more sustainable energy ecosystem, poised to transform the landscape of energy forecasting.
Keywords: Accurate prediction, grid loss, power distribution networks
DOI: 10.3233/JIFS-235579
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-10, 2024
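The abstract describes a learnable attention mechanism that fuses the GRU and TCN branch outputs but gives no equations. The following NumPy sketch shows one plausible form of such a fusion; the function name, the scoring vector `w`, and the softmax parameterization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attention_fusion(gru_out, tcn_out, w):
    """Fuse two branch outputs with a learnable attention weight vector.

    gru_out, tcn_out: (hidden,) feature vectors from the GRU and TCN branches.
    w: (hidden,) learnable scoring vector (hypothetical parameterization).
    Returns the fused vector and the two fusion weights.
    """
    # Score each branch, then softmax the scores into weights summing to 1.
    scores = np.array([gru_out @ w, tcn_out @ w])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha[0] * gru_out + alpha[1] * tcn_out, alpha
```

In training, `w` would be updated by backpropagation along with the branch parameters, letting the model shift weight between temporal and local-context features per input.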
Authors: Abuhoureyah, Fahd | Yan Chiew, Wong | Zitouni, M. Sami
Article Type: Research Article
Abstract: Human Activity Recognition (HAR) utilizing Channel State Information (CSI) extracted from WiFi signals has garnered substantial interest across various domains and applications. This field’s potential paths and applications extend beyond CSI-based HAR and include smart homes, assisted living, security, gaming, surveillance, and context-aware computing. The ability of deep learning algorithms to effectively process and interpret CSI data opens up new possibilities for accurate and robust human activity recognition in real-world scenarios. However, traditional Recurrent Neural Network (RNN) models, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), rely solely on their internal memory cells to maintain information over time. Important details might be diluted or lost within the memory cells in complex CSI sequences. To address this limitation, we propose a lightweight approach that incorporates a multi-head adaptive attention weight mechanism (MHAAM) into the HAR framework. The multi-head attention mechanism allows the model to attend to different informative patterns within the CSI data simultaneously, capturing fine-grained temporal dependencies and improving the model’s ability to recognize complex activities. The implemented models effectively filter out noise and irrelevant information by assigning higher weights to informative CSI features, further enhancing activity classification accuracy. Experimental evaluations and comparative analyses of HAR for seven activities demonstrate that attention-based RNN models with multi-head attention consistently outperform traditional RNN models. The multi-head attention mechanism achieves improved generalization and testing for seven common human activities and environments, leading to a higher complex human activity classification accuracy of up to 98.5%.
Keywords: Multi-head adaptive attention mechanism, channel state information (CSI), WiFi sensing, activity recognition, MHAAM
DOI: 10.3233/JIFS-234379
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
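For readers unfamiliar with the mechanism the abstract builds on, the sketch below implements generic multi-head scaled dot-product attention over a CSI feature sequence in NumPy. The function name and projection matrices are illustrative assumptions; this is the textbook operation, not the paper's MHAAM code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, n_heads):
    """Scaled dot-product attention over a sequence X of shape (T, d).

    Wq, Wk, Wv: (d, d) learned projections; d must divide by n_heads.
    Each head attends to a different d/n_heads-dimensional subspace,
    which is what lets the model track several patterns simultaneously.
    """
    T, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dh)  # (T, T) attention logits
        heads.append(softmax(scores) @ V[:, s])     # weighted sum of values
    return np.concatenate(heads, axis=1)            # (T, d)
```

An adaptive-weight variant such as MHAAM would additionally learn how to reweight or combine the per-head outputs rather than simply concatenating them.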
Authors: Singh, Pardeep | Lamsal, Rabindra | Singh, Monika | Shishodia, Bhawna | Sitaula, Chiranjibi | Chand, Satish
Article Type: Research Article
Abstract: Social media platforms play a crucial role in providing valuable information during crises, such as pandemics. The COVID-19 pandemic has created a global public health crisis, and vaccines are the key preventive measure for achieving herd immunity. However, some individuals use social media to oppose vaccines, undermining government efforts to eliminate the virus. This study introduces the “GeoCovaxTweets” dataset, consisting of 1.8 million geotagged tweets related to COVID-19 vaccines from January 2020 to November 2022, originating from 233 countries and territories. Each tweet includes state and country information, enabling researchers to analyze global spatial and temporal patterns. An extensive set of analyses is performed on the dataset to identify prominent topic clusters and explore public opinions across different vaccines and vaccination contexts. The study outlines the dataset curation methodology and provides instructions for local reproduction. We anticipate that the dataset will be valuable for crisis computing researchers, facilitating the exploration of Twitter conversations surrounding COVID-19 vaccines and vaccination, including trends, opinion shifts, misinformation, and anti-vaccination campaigns.
Keywords: COVID-19 discourse, COVID-19 pandemic, sentiment analysis, social media, topic clustering, twitter dataset
DOI: 10.3233/JIFS-219418
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
Article Type: Research Article
Abstract: The recognition and regulation of buildings are essential aspects of urban management to prevent illegal constructions and maintain public safety and resources. Traditional machine learning methods for building recognition often suffer from low accuracy and weak generalization capabilities due to their reliance on manually designed features. Therefore, the study of automatic, accurate building identification methods is necessary. On this basis, this study introduces advanced algorithms, Faster R-CNN and DRNet, as a significant step towards automating accurate building identification. The utilization of Faster R-CNN as a basic training model combined with DRNet demonstrates promising results in accurately recognizing buildings. The experimental analysis highlights the potential of the proposed method, achieving an impressive 82.1% mean Average Precision (mAP) for landmark buildings. Accurate prediction of building coordinates further strengthens the effectiveness of the proposed approach. Comparative analysis showcases the superiority of the proposed model in recognizing buildings not only in normal images but also in complex environmental settings. The successful implementation of advanced algorithms in building recognition contributes to more efficient urban management and development. Continued research in automatic building identification methods is crucial for addressing challenges in urban planning and management, ensuring sustainable city development.
Keywords: Deep learning, Faster R-CNN, building identification, classification algorithm, building extraction, urbanization
DOI: 10.3233/JIFS-241838
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-10, 2024
Authors: Lamani, Dharmanna | Shanthi, T.S. | Kirubakaran, M.K. | Roopa, R.
Article Type: Research Article
Abstract: Accurately classifying products in e-commerce is critical for enhancing user experience, but it remains challenging due to data quality issues and the dynamic nature of product categories. Customers are increasingly relying on visual information to make informed purchasing decisions, emphasizing the importance of accurate product classification using images. In this paper, an innovative approach called SSWSO_LeNet is proposed for product image classification in e-commerce. The method involves preprocessing the input images using Region of Interest (RoI) and Adaptive Wiener Filters to improve image quality and reduce unwanted distortions. Data augmentation techniques are then applied to increase the diversity of the dataset and the robustness of the model. The proposed SSWSO_LeNet integrates the Squirrel Search Algorithm (SSA) and War Strategy Optimization (WSO) with LeNet. SSA mimics southern flying squirrels’ foraging behavior to find global optima efficiently, while WSO balances exploration and exploitation stages, enhancing classification accuracy. Experimental results show SSWSO_LeNet outperforms state-of-the-art models with an impressive accuracy of 0.976, sensitivity of 0.877, and specificity of 0.857. By leveraging SSA, WSO, and LeNet, SSWSO_LeNet not only improves classification accuracy but also reduces reliance on human editors, decreasing both cost and time in e-commerce product classification.
Keywords: E-commerce, SSA, WSO, SSWSO_LeNet, product classification
DOI: 10.3233/JIFS-241682
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Tripathi, Diwakar | Reddy, B. Ramachandra | Dwivedi, Shubhra | Shukla, Alok Kumar | Chandramohan, D. | Dewangan, Ram Kishan
Article Type: Research Article
Abstract: Nature-inspired algorithms as problem-solving methodologies are extremely effective in the discovery of optimized solutions in multi-dimensional and multi-modal problems. Owing to qualities such as self-optimization and flexibility, nature-inspired algorithms are effective at finding near-optimal solutions. Feature selection is an approach to find an approximately optimal subset of the features that are most relevant to a particular outcome. In this study, we focused on how feature selection may improve the predictive performance of credit scoring models. Nature-inspired algorithms are applied for feature selection to improve the predictive performance of the credit scoring model. Additionally, four benchmark credit scoring datasets collected from the UCI repository are used to test feature selection by several nature-inspired algorithms combined with Random Forest (RF), Logistic Regression (LR), and Multi-layer Perceptron (MLP) classifiers, and the results are compared in terms of classification accuracy and G-measures.
Keywords: Nature-inspired algorithms, credit score, feature selection, classification
DOI: 10.3233/JIFS-219413
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-11, 2024
Authors: Faraz, Ansar Ali | Khan, Hina | Aslam, Muhammad | Albassam, Mohammed
Article Type: Research Article
Abstract: When data are hazy or uncertain, estimators given under classical statistics are ineffective. Given that it deals with uncertainty, neutrosophic statistics is the sole alternative. Due to the vast range of applications, extensive research has been done in this area. The objective of this study is to determine the most accurate predictions for the population mean with the least amount of mean square error. We have created neutrosophic ratio-type estimators; when working with ambiguous, hazy, and neutrosophic-type data, the proposed estimation methods are very useful for computing results. These estimators produce findings that are not single-valued but rather have an interval form, within which our population parameter may lie more frequently. Since we have an estimated interval for the unknown population mean with a minimal mean square error, this improves the estimators’ efficiency. Real-life neutrosophic line losses data and simulation are both used to analyze the effectiveness of the proposed neutrosophic ratio-type estimators. Additionally, a comparison is made to show how helpful the neutrosophic ratio-type estimators are relative to existing estimators.
Keywords: Neutrosophic, conventional statistics, estimation, ratio estimators, mean square error
DOI: 10.3233/JIFS-240153
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
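The interval-valued output the abstract describes can be illustrated with the classical ratio estimator applied separately to the lower and upper parts of interval (neutrosophic) observations. This is a simplified sketch under assumed notation, not the authors' exact estimator.

```python
def ratio_estimate(y_sample, x_sample, X_pop_mean):
    """Classical ratio estimator of the population mean of y:
    y_bar * (X_pop_mean / x_bar), using an auxiliary variable x
    whose population mean X_pop_mean is known."""
    y_bar = sum(y_sample) / len(y_sample)
    x_bar = sum(x_sample) / len(x_sample)
    return y_bar * X_pop_mean / x_bar

def neutrosophic_ratio_estimate(y_lo, y_hi, x_lo, x_hi, X_lo, X_hi):
    """Interval-valued estimate: apply the crisp ratio estimator to the
    lower and upper parts of the interval data, yielding an interval
    in which the population mean is expected to lie."""
    lo = ratio_estimate(y_lo, x_lo, X_lo)
    hi = ratio_estimate(y_hi, x_hi, X_hi)
    return min(lo, hi), max(lo, hi)
```

The neutrosophic estimators in the paper would add correction terms to reduce the mean square error; the sketch only shows where the interval form comes from.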
Authors: Saravanan, Krithikha Sanju | Bhagavathiappan, Velammal
Article Type: Research Article
Abstract: The advancements in technology, particularly in the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), can be advantageous for the agricultural sector to enhance yield. Establishing an agricultural ontology as part of this development would spur the expansion of cross-domain agriculture. Semantic and syntactic knowledge of the domain data is required for building such a domain-based ontology. To process the data from text documents, a standard technique with syntactic and semantic features is needed because the availability of pre-determined agricultural domain-based data is insufficient. In this research work, an Agricultural Ontologies Construction framework (AOC) is proposed for creating the agricultural domain ontology from text documents using NLP techniques with the Robustly Optimized BERT Approach (RoBERTa) model and a Graph Convolutional Network (GCN). The anaphora present in the documents are resolved to produce a precise ontology from the input data. In the proposed AOC work, the domain terms are extracted using the RoBERTa model with Regular Expressions (RE) and the relationships between the domain terms are retrieved by utilizing the GCN with RE. When compared to other current systems, the proposed AOC method achieves an exceptional result, with precision and recall of 99.6% and 99.1% respectively.
Keywords: Anaphora resolution, term extraction, relationships identification, RoBERTa model, regular expressions, graph convolutional network, domain ontology
DOI: 10.3233/JIFS-237632
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-19, 2024
Authors: Immanuel, Rajeswari Rajesh | Sangeetha, S.K.B.
Article Type: Research Article
Abstract: Human emotions are the mind’s responses to external stimuli, and due to their dynamic and unpredictable nature, research in this field has become increasingly important. There is a growing trend in utilizing deep learning and machine learning techniques for emotion recognition through EEG (electroencephalogram) signals. This paper presents an investigation based on a real-time dataset that comprises 15 subjects, consisting of 7 males and 8 females. The EEG signals of these subjects were recorded during exposure to video stimuli. The collected real-time data underwent preprocessing, followed by the extraction of features using various methods tailored for this purpose. The study includes an evaluation of model performance by comparing the accuracy and loss metrics between models applied to both raw and preprocessed data. The paper introduces the EEGEM (Electroencephalogram Ensemble Model), which represents an ensemble model combining LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Network) to achieve the desired outcomes. The results demonstrate the effectiveness of the EEGEM model, achieving an impressive accuracy rate of 95.56%. This model has proven to surpass the performance of other established machine learning and deep learning techniques in the field of emotion recognition, making it a promising and superior tool for this application.
Keywords: EEG signal, emotion, CNN, LSTM, ensemble learning, feature extraction
DOI: 10.3233/JIFS-237884
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Srinivasan, Manohar | Senthilkumar, N.C.
Article Type: Research Article
Abstract: The Internet of Things (IoT) has many potential uses in the day-to-day operations of individuals, companies, and governments. It makes linking all devices to the internet a realistic possibility. Convincing IoT devices to work together to implement several real-world applications is a challenging feat. Security issues impact innovative platform applications due to the current security state in IoT-based operations. As a result, intrusion detection systems (IDSs) tailored to IoT platforms are essential for protecting against security breaches that exploit IoT vulnerabilities. Issues with data loss, dangers, service interruption, and external hostile assaults are all part of the IoT security landscape. Designing and implementing appropriate security solutions for IoT environments is the main emphasis of this research. Within the IoT context, this research creates a Spotted Hyena Optimizer (SHO-EDLID) method for intrusion detection using ensemble deep learning. The main goal of the SHO-EDLID method is to detect and categorize intrusions in an IoT setting. It comprises several subprocesses, including pre-processing, feature selection, and classification. The SHO-EDLID method uses a SHO-based feature selection strategy to identify the best feature subsets. It then uses an ensemble of three DL models, namely a deep belief network (DBN), a stacked autoencoder (SAE), and a bidirectional recurrent neural network (BiRNN), to detect and classify cyberattacks. Finally, the DL models’ parameters are tuned using the AdaBelief optimizer. A comprehensive simulation illustrates that the proposed model performs better: in a thorough comparative analysis against other recent approaches, SHO-EDLID achieves a precision of 97.50%, accuracy of 99.56%, recall of 98.42%, and F-measure of 97.95%.
Keywords: Security, internet of things, deep learning, ensemble learning, spotted hyena optimizer
DOI: 10.3233/JIFS-240571
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-11, 2024
Authors: Yang, Cheng | Xu, Xinrui
Article Type: Research Article
Abstract: The quality of building materials affects the implementation effect of construction projects. To ensure the service capacity of building materials, it is necessary to select suppliers carefully. In the evaluation of building material suppliers, those with poor quality are excluded to ensure the quality of material supply, reasonably improve the construction effect of the building project, meet its construction needs, and improve the quality of the building project. The selection of building material suppliers (BMSs) is a multiple-attribute group decision-making (MAGDM) problem. In this study, the 2-tuple linguistic neutrosophic number combined grey relational analysis (2TLNN-CGRA) technique is constructed based on the classical grey relational analysis (GRA) and 2-tuple linguistic neutrosophic sets (2TLNNSs). Finally, a numerical example for building material supplier selection is constructed and some comparisons are presented to illustrate the 2TLNN-CGRA technique. The main contributions of this study are as follows: (1) the 2TLNN-CGRA technique is implemented to cope with MAGDM under 2TLNNSs; (2) the 2TLNN-CGRA technique is implemented in line with the 2TLNN Hamming distance (2TLNNHD) and 2TLNN Euclidean distance (2TLNNED) simultaneously under 2TLNNSs; (3) the numerical example for building material supplier selection is implemented to show the 2TLNN-CGRA technique; and (4) some efficient comparative studies are constructed with several existing decision techniques.
Keywords: Multiple-attribute group decision-making (MAGDM), 2TLNSs, 2TLNN-CGRA technique, building material suppliers
DOI: 10.3233/JIFS-221334
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
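The 2TLNN-CGRA technique builds on classical grey relational analysis. As background, the sketch below computes the standard crisp grey relational grades of candidate sequences against a reference (ideal) sequence; the 2-tuple linguistic neutrosophic extension in the paper replaces these crisp values with 2TLNN distances. Function names are illustrative.

```python
def grey_relational_grades(reference, candidates, rho=0.5):
    """Classical GRA: relational grade of each candidate sequence
    against a reference sequence.

    rho is the distinguishing coefficient (conventionally 0.5).
    Coefficient per attribute k:
        (d_min + rho * d_max) / (delta_k + rho * d_max)
    where delta_k = |reference[k] - candidate[k]|, and d_min, d_max
    are taken over all candidates and attributes.
    """
    deltas = [[abs(r - c) for r, c in zip(reference, cand)]
              for cand in candidates]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    grades = []
    for row in deltas:
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))  # mean coefficient
    return grades
```

A candidate identical to the reference gets grade 1; larger deviations push the grade toward 0, which is how suppliers would be ranked.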
Authors: Liu, Dapeng
Article Type: Research Article
Abstract: In order to improve the remanufacturing efficiency of scrap mechanical parts and comprehensively detect their surface fault status, this paper proposes a color three-dimensional reconstruction method for scrap mechanical parts based on an improved semi-global matching (SGM) algorithm. In experiments, this method demonstrated significant performance advantages in dealing with complex mechanical component structures and environments with large illumination interference. Experimental results show that the three-dimensional color model reconstructed by this method has clear texture and small dimensional error, and is suitable for online analysis of surface fault information of scrap mechanical parts in actual production lines. Through quantitative analysis, compared with the traditional SGM method, the method in this paper improves the structural similarity index (SSIM) by an average of 19.8% and reduces the mean square error (MSE) by an average of 33.1%.
Keywords: Waste mechanical parts, binocular vision, SGM, Color 3D reconstruction
DOI: 10.3233/JIFS-237214
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Jansi Rani, J. | Manivannan, A.
Article Type: Research Article
Abstract: This paper focuses on solving the fully fuzzy transportation problem in which the parameters are triangular Type-2 fuzzy numbers, owing to the inherent imprecision of human judgment. To deal with uncertainty more precisely, a triangular Type-1 fuzzy transportation problem is reformulated as a transportation problem with triangular Type-2 fuzzy parameters. In order to compare triangular Type-2 fuzzy numbers, a new ranking (ordering) technique is proposed by extending Yager’s function. Further, two efficient algorithmic approaches, namely the triangular Type-2 fuzzy zero suffix method (TT2FZSM) and the triangular Type-2 fuzzy zero average method (TT2FZAM), are proposed to generate the initial transportation cost of the fully triangular Type-2 fuzzy transportation problem. Both TT2FZSM and TT2FZAM converge towards an optimal solution. In addition to TT2FZSM and TT2FZAM, the modified distribution method is applied to ensure optimality. Subsequently, we carry out a comprehensive discussion of the obtained results to establish the validity of the proposed approach.
Keywords: Transportation problem, triangular type-2 fuzzy number, ranking function, optimal solution
DOI: 10.3233/JIFS-237652
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
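The paper extends Yager's ranking function to Type-2 numbers. For orientation, the sketch below shows the standard Type-1 case: Yager's index for a triangular fuzzy number (a, b, c) is the integral over alpha of the midpoint of the alpha-cut, which reduces to (a + 2b + c) / 4. The Type-2 extension in the paper is not reproduced here.

```python
def yager_rank(tfn):
    """Yager's ranking index of a triangular fuzzy number (a, b, c).

    The alpha-cut is [a + (b - a) * alpha, c - (c - b) * alpha];
    integrating its midpoint over alpha in [0, 1] gives (a + 2b + c) / 4.
    """
    a, b, c = tfn
    return (a + 2 * b + c) / 4.0

def compare(t1, t2):
    """Order two triangular fuzzy numbers by their Yager index:
    -1 if t1 < t2, 1 if t1 > t2, 0 if they tie."""
    r1, r2 = yager_rank(t1), yager_rank(t2)
    return -1 if r1 < r2 else (1 if r1 > r2 else 0)
```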
Authors: Yan, Huiming | Yan, Zilin | Wang, Weiling | Liu, Shuyue
Article Type: Research Article
Abstract: In recent years, the imperative of energy-efficient building management practices has surged dramatically, underscoring an urgent mandate for comprehensive studies that integrate cutting-edge optimization algorithms with precise heating load forecasting techniques. These studies represent concerted efforts to increase building energy efficiency and address mounting concerns regarding sustainability and resource utilization. In the intricate domain of heating, ventilation, and air conditioning (HVAC) systems, energy optimization challenges are being confronted through rigorous exploration and the application of innovative problem-solving methodologies. This study introduces new methodologies by integrating two state-of-the-art optimization algorithms, the Red Fox Optimization and the Golden Eagle Optimizer, with the Decision Tree model. This fusion is aimed at enhancing the accuracy of heating load predictions and streamlining HVAC system optimization processes, marking a significant step toward heightened energy efficiency and operational efficacy in building management practices. The study emphasizes the significance of precise heating load prediction in advancing energy efficiency, realizing cost savings, and fostering environmental sustainability in building management. Furthermore, it delves into the multifaceted impact of various building features on heating load, encompassing variables such as glazing area, orientation, height, relative compactness, roof area, surface area, and wall area. These insights furnish actionable intelligence for refined decision-making in both building design and operation. Based on the results, the single DT model showed the weakest performance among the three models, with R² = 0.975 and RMSE = 1.608. The DTFO model (DT + FOX) achieves an R² value of 0.996 and an RMSE value of 0.961 for heating load prediction, surpassing the performance benchmarks set by the other models. This achievement holds considerable promise for aiding engineers in crafting energy-efficient buildings, particularly within the swiftly evolving landscape of smart home technologies.
Keywords: Decision tree, heating load, red fox optimization, golden eagle optimizer
DOI: 10.3233/JIFS-240283
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Sriraam, Natarajan | Chinta, Babu | Suresh, Seshadhri | Sudharshan, Suresh
Article Type: Research Article
Abstract: Assessing fetal growth and development requires accurate identification of the fetal area contour and measurement of the Crown-Rump Length (CRL). In this paper, we present a unique method for autonomously segmenting the fetal region in ultrasound images and calculating the CRL based on the U-Net architecture. Because of its capacity to capture both global and local information, the U-Net model is a popular choice for image segmentation tasks. Our method employs the U-Net model to extract the fetal region contour and measure the CRL, resulting in a dependable and efficient prenatal evaluation solution.
Keywords: Fetal, segmentation, U-Net, ultrasound image
DOI: 10.3233/JIFS-219403
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-7, 2024
Authors: Macias, Cesar | Soto, Miguel | Cardoso-Moreno, Marco A. | Calvo, Hiram
Article Type: Research Article
Abstract: Mental and cognitive well-being is of paramount significance for human beings. Consequently, the early detection of issues that may culminate in conditions such as depression holds great importance in averting adverse outcomes for individuals. Depression, a prevalent mental health disorder, can severely impact an individual’s quality of life. Timely identification and intervention are critical to prevent its progression. Our research delves into the application of Machine Learning (ML) and Deep Learning (DL) techniques to potentially facilitate the early recognition of depressive tendencies. By leveraging the cognitive triad theory, which encapsulates negative self-perception, a pessimistic outlook on the world, and a bleak vision of the future, we aim to develop predictive models that can assist in identifying individuals at risk. In this regard, we selected The Cognitive Triad Dataset, which comprises six categories that encapsulate negative and positive postures about three different contexts: self, future, and world. Our proposal achieved strong performance by relying on a strict preprocessing analysis: the models obtained an accuracy of 0.97 when classifying aspect contexts, 0.95 when classifying sentiment-aspects, and 0.93 under the aspect-sentiment paradigm. Our models outperformed those reported in the literature.
Keywords: Cognitive triad inventory, depression detection, machine learning, deep learning, natural language processing
DOI: 10.3233/JIFS-219333
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Mundada, Shyamal | Jain, Pooja | Kumar, Nirmal
Article Type: Research Article
Abstract: Sustainable agriculture revolves around soil organic carbon (SOC), which is essential for numerous soil functions and ecological attributes. Farmers are interested in conserving and adding additional soil organic carbon to certain fields in order to improve soil health and productivity. The relationship between soil and environment that has been discovered and standardized over time has enhanced the progress of digital soil-mapping techniques; therefore, a variety of machine learning techniques are used to predict soil properties. Studies are examining how effectively each machine learning method maps and predicts SOC, especially at high spatial resolutions. To predict the SOC of soil at a 30 m resolution, four machine learning models (Random Forest, Support Vector Machine, Adaptive Boosting, and k-Nearest Neighbour) were used. For model evaluation, two error metrics, namely R² and RMSE, were used. The findings demonstrated that the descriptive statistics of the calibration and validation sets sufficiently resembled the entire dataset. The range of the calculated SOC content was 0.06 to 1.76%. According to the findings of the study, Random Forest showed good results in both cases, i.e. evaluation with and without cross validation. Using cross validation, RF achieved the highest R² (0.5278) and lowest RMSE (0.1683) for the calibration dataset, while without cross validation it achieved an R² of 0.8612 and an RMSE of 0.0912 for the calibration dataset. The generated soil maps will help farmers adopt precise knowledge for decisions that will increase farm productivity and provide food security through the sustainable use of nutrients and the agricultural environment.
Keywords: Machine learning, remote sensing data, digital soil mapping, spatial predictions, precision farming
DOI: 10.3233/JIFS-240493
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
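Several abstracts above report R² and RMSE. For reference, these two error metrics have standard definitions, sketched here in plain Python.

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error: sqrt of the mean squared residual."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    Equals 1 for a perfect fit; a model no better than predicting
    the mean of y_true scores 0.
    """
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```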
Authors: Zheng, Danjing | Song, Xiaona | Song, Shuai | Peng, Zenglong
Article Type: Research Article
Abstract: This paper investigates an observer-based boundary controller design for interconnected nonlinear partial differential equation (PDE) systems. First, the Takagi–Sugeno (T–S) fuzzy model is adopted to accurately describe the target systems. Then, boundary measurements are employed to reduce the number of sensors. Next, considering the phenomenon of abnormal interference that may lead to measurement outliers and uncertainties in the observer parameters, an outlier-resistant non-fragile observer expressed by a saturation function is designed to guarantee the desired control objectives. Moreover, the boundary control approach is employed to trade off the cost of system design against system performance. Furthermore, utilizing membership-function-dependent Lyapunov functions and free-weighting matrices, sufficient conditions ensuring the exponential stability of the closed-loop systems are obtained while decreasing the conservativeness of the system stability analysis. Finally, the feasibility and effectiveness of the proposed method are validated by an example.
Keywords: Boundary measurements, boundary control, interconnected nonlinear partial differential equation systems, membership function-dependent Lyapunov functions, outlier-resistant non-fragile observer
DOI: 10.3233/JIFS-238858
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Hayel, Rafa | El Hindi, Khalil | Hosny, Manar | Alharbi, Rawan
Article Type: Research Article
Abstract: Instance-Based Learning, such as the k Nearest Neighbor (kNN), offers a straightforward and effective solution for text classification. However, as a lazy learner, kNN's performance heavily relies on the quality and quantity of training instances, often leading to time and space inefficiencies. This challenge has spurred the development of instance-reduction techniques aimed at retaining essential instances and discarding redundant ones. While such trimming optimizes computational demands, it might adversely affect classification accuracy. This study introduces the novel Selective Learning Vector Quantization (SLVQ) algorithm, specifically designed to enhance the performance of datasets reduced through such techniques. Unlike traditional LVQ algorithms that employ random weight vectors (codebook vectors), SLVQ utilizes the instances selected by the reduction algorithm as the initial weight vectors. Importantly, as these instances often contain nominal values, SLVQ modifies the distances between nominal values, rather than the values themselves, aiming to improve their representation of the training set. This approach is crucial because nominal attributes are common in real-world datasets and require effective distance measures, such as the Value Difference Measure (VDM), to handle them properly. Therefore, SLVQ adjusts the VDM distances between nominal values instead of altering the attribute values of the codebook vectors. Hence, the innovation of SLVQ lies in its integration of instance-reduction techniques for selecting initial codebook vectors and its effective handling of nominal attributes. Our experiments, conducted on 17 text classification datasets with four different instance-reduction algorithms, confirm SLVQ's effectiveness: it significantly enhances kNN's classification accuracy on reduced datasets.
In our empirical study, the SLVQ method improved the performance of these datasets, achieving average classification accuracies of 82.55%, 84.07%, 78.54%, and 83.18%, compared to average accuracies of 76.25%, 79.62%, 66.54%, and 78.19% on the non-fine-tuned datasets, respectively.
Keywords: Machine learning, instance based learning, learning vector quantization, k-nearest neighbor, value difference metric (VDM)
DOI: 10.3233/JIFS-235290
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
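The Value Difference Measure that SLVQ adjusts can be sketched as follows. This is the standard VDM over class-conditional frequencies, with a toy nominal attribute invented for illustration; the paper's exact variant may differ.

```python
from collections import Counter, defaultdict

def vdm(attr_values, labels, a, b, q=2):
    """Value Difference Measure between two nominal values a and b:
    sum over classes of |P(class|a) - P(class|b)|**q, estimated from data."""
    classes = set(labels)
    count = defaultdict(Counter)          # count[value][class]
    for v, c in zip(attr_values, labels):
        count[v][c] += 1
    na, nb = sum(count[a].values()), sum(count[b].values())
    return sum(abs(count[a][c] / na - count[b][c] / nb) ** q for c in classes)

# Hypothetical nominal attribute and class labels
attr = ["red", "red", "blue", "blue", "red", "blue"]
cls  = ["pos", "pos", "neg", "neg", "neg", "pos"]
print(vdm(attr, cls, "red", "blue"))
```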
Authors: Lu, Yang | Liu, Fengjun | Cao, Bin
Article Type: Research Article
Abstract: English text analysis is required for quantitative grammar, phrase, and word assessment to improve their usage in conversation, drafting, etc. In particular, a teaching system requires the flawless and precise use of English words, phrases, and sentences for fundamental and knowledge-based learning. Data integration and interoperability, data volume, and data variety pose difficulties for text data analytics. This article discusses a heterogeneous English teaching system text analysis solution that integrates a Genetic Algorithm (GA) and Deep Learning (DL). The Text Analytical Model (TAM) uses fused methods (FM) to handle words and their placement for sentence framing. The framed teaching sentence is analyzed lexically for its precision and meaning with conventional features. Initially, the possible word combinations are generated using the crossover and mutation operations of the genetic process. The outcome of the genetic process forecasts different possible sentence combinations for delivering the English context to students. The mutation process identifies the most precise lexical sentence that fits the subject and context. Based on precision, the DL model is trained to reduce the initial population of the GA process; in English teaching, this is achieved through repetition or drilling of different sentences and words. The learning converges towards precision in delivering context-based words and sentences by reducing unnecessary crossovers in the genetic process, which reduces computational complexity. This feature therefore achieves high-precision convergence with less computation time than comparable methods. TAM-FM improves precision convergence, forecast probability, and population refinement by 9.5%, 11.39%, and 8.81%, respectively. TAM-FM reduces computation time and complexity by 9.67% and 8.3%, respectively.
Keywords: Convergence, deep learning, English teaching, genetic algorithm, text analysis
DOI: 10.3233/JIFS-236249
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
Authors: Reka, S | Karthik Sainadh Reddy, Dwarampudi | Dhiraj, Inti | Suriya Praba, T
Article Type: Research Article
Abstract: Polycystic Ovary Syndrome (PCOS) is a hormonal condition that typically affects females during their reproductive years. It is identified by disruptions in hormonal balance, particularly an increase in androgen (male hormone) levels in the female body. PCOS can lead to various symptoms and health complications including irregular menstrual cycles, ovarian cysts, fertility issues, insulin resistance, weight gain, acne, and excess hair growth. Real-world PCOS detection is a challenging task because the specific cause of PCOS is unknown and its symptoms are unclear. Thus, accurate and timely diagnosis of PCOS is crucial for effective management and prevention of long-term complications. In such cases, machine-learning-based PCOS prediction models support the diagnostic process and address potential errors and time constraints. Machine learning algorithms can analyze large sets of patient data, including medical history, hormonal profiles, and imaging results, to assist in the diagnosis of PCOS. In particular, the performance of data analysis tasks and prediction models is improved by ensemble feature selection strategies, which concentrate on selecting a subset of pertinent features from a broader range of features. The unstable outcome of a feature selection algorithm is a frequent issue in practical applications when the algorithm is applied multiple times to similar datasets or to slightly modified data. Thus, evaluating the robustness of a feature selection algorithm is essential. To address these issues and quantify robustness, this study uses Jensen-Shannon divergence, an information-theoretic approach, with an ensemble feature selection method to handle various outputs, such as complete rankings, half rankings, and top-k lists (without ranking).
Furthermore, this article proposes a hybrid machine learning classifier with SMOTE-SVM for the prompt detection of PCOS, and the performance of the model is compared with a number of individual classifiers, including KNN (K-Nearest Neighbour), Support Vector Machine (SVM), AdaBoost, LR (Logistic Regression), NB (Naïve Bayes), RF (Random Forest), and Decision Tree. The proposed SWISS-AdaBoost classifier surpassed the other models with an accuracy of 97.81% and an AUC of 99.08%.
Keywords: Polycystic ovary syndrome (PCOS), Jensen-Shannon divergence, SVM (support vector machine), K-nearest neighbour, logistic regression, decision tree, naïve Bayes, AdaBoost
DOI: 10.3233/JIFS-219402
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
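The Jensen-Shannon divergence the study uses to quantify feature-selection stability is a symmetric, bounded comparison of two distributions. The sketch below uses the standard base-2 definition; the feature-importance distributions are invented for illustration.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence (base 2); terms with p_i == 0 contribute 0.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two discrete distributions:
    average KL divergence of each distribution to their midpoint.
    Symmetric and bounded in [0, 1] with base-2 logarithms."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical feature-importance distributions from two selection runs
run1 = [0.5, 0.3, 0.2]
run2 = [0.4, 0.4, 0.2]
print(round(jensen_shannon(run1, run2), 4))   # small value => stable selection
```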
Authors: Ezhilarasie, R. | MohanRaj, I. | Ramakrishnan, Thiruvikram Gopichettipalayam | Madhavan, Vyas | Narayan, Keshav | Umamakeswari, A.
Article Type: Research Article
Abstract: Internet of Things (IoT) devices are major consumers of contemporary network bandwidth. The proliferation of IoT devices and the demand for latency-free communication in time-critical applications have exposed the drawbacks of cloud-based solutions. Edge computing is a paradigm that reduces an application's response time by utilizing computation and storage proximate to the devices. Privacy in cloud computing is attained by system virtualization, containerization, and other evolved technologies. As privacy remains a primary concern, there is a need to test the feasibility of resource-constrained edge devices. Hence, this work examined the usability of such devices in edge computing by benchmarking them on different runtime environments. The results yielded a standard mechanism for defining the criteria to identify edge devices suitable for computation offloading, particularly for a set of smart traffic surveillance use cases. Further, an optimization algorithm was designed to generate an optimum schedule that decides the best device to execute a particular task from the set of suitable edge devices, improving energy use and execution time from a global view. Based on the feasibility study and optimal schedule, a makespan nearly 11 times better than local execution was achieved for the considered traffic surveillance workflow.
Keywords: Container, docker, edge computing, IoT, LXC, offloading, single board computer
DOI: 10.3233/JIFS-219424
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Bukya, Hanumanthu | Bhukya, Raghuram | Harshavardhan, A.
Article Type: Research Article
Abstract: Fog computing has several undeniable benefits, such as enhancing near-real-time response, reducing transmission costs, and facilitating IoT analysis. This technology is poised to have a significant impact on businesses, organizations, and our daily lives. However, mobile user equipment struggles to handle the complex computing tasks associated with modern applications due to its limited processing power and battery life. Edge computing has emerged as a solution to this problem by relocating processing to nodes at the network's periphery, which have more computational capacity. With the rapid evolution of wireless technologies and infrastructure, edge computing has become increasingly popular. Nevertheless, managing fog computing resources remains challenging due to resource constraints, heterogeneity, and distant nodes. For delay-sensitive intelligent IoT applications within the fog computing architecture, cooperative communication and processing resources in 6G and future networks are essential. This study proposes a joint computational and optimized resource allocation (JCORA) technique to accelerate the processing of data from intelligent IoT sensors in a cell association environment. The proposed technique utilizes an uplink and downlink power allocation factor and the shortest job first (SJF) task scheduling system to optimize user fairness and decrease data processing time. This is a complex assignment due to several non-convex limitations. The suggested JCORA-SJF model simultaneously optimizes time partitioning, computing task processing mode selection, and target sensing location selection to maximize the weighted sum of task processing and communication performance. The simulation results demonstrate the effectiveness of the proposed JCORA-SJF algorithms, and the system's scalability is also examined.
Keywords: Fog computing, Internet of Things (IoT), resource allocation, edge computing networks, optimized resource allocation (JCORA), shortest job first (SJF)
DOI: 10.3233/JIFS-219421
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
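The shortest-job-first scheduling component named in the abstract can be sketched in its basic non-preemptive form. This is a generic SJF illustration with invented task durations, not the paper's full JCORA-SJF model, which also optimizes power allocation and processing modes.

```python
def shortest_job_first(tasks):
    """Non-preemptive shortest-job-first schedule.
    tasks: list of (name, processing_time), all assumed to arrive at time zero.
    Returns the execution order and the average completion time."""
    order = sorted(tasks, key=lambda t: t[1])   # shortest processing time first
    elapsed, completions = 0, []
    for _, burst in order:
        elapsed += burst
        completions.append(elapsed)
    avg_completion = sum(completions) / len(completions)
    return [name for name, _ in order], avg_completion

# Hypothetical IoT sensor tasks with processing times (ms)
tasks = [("cam-feed", 30), ("lidar", 10), ("telemetry", 5), ("ocr", 20)]
order, avg = shortest_job_first(tasks)
print(order)
print(avg)
```

Sorting by burst time minimizes the average completion time for equal arrival times, which is why SJF is attractive when fairness-weighted latency matters.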
Authors: Singh, Pardeep | Singh, Monika | Singh, Nitin Kumar | Das, Prativa | Chand, Satish
Article Type: Research Article
Abstract: Social media platforms play vital roles in disseminating information during crisis situations. Many rescue agencies, media outlets, and volunteers regularly monitor this data to identify and analyze disasters, ultimately mitigating life risks. However, effectively categorizing these messages by information type is crucial for enhancing the situational awareness of emergency responders. This paper addresses the challenge of analyzing informal crisis-related social media texts by classifying disaster event tweets into 10 humanitarian categories associated with 19 major natural disaster events. We fine-tune seven state-of-the-art pre-trained transformer models and compare their performance with the recently introduced domain-specific models, i.e., CrisisTransformers. We empirically found that CrisisTransformers outperform seven strong baseline transformer models in classifying disaster-specific tweets from the HumAID dataset, achieving a macro-averaged F1 score of 0.77. Our work contributes to the crisis computing field by improving the classification of disaster-related tweets and enhancing the capabilities of emergency responders and disaster management organizations.
Keywords: Transformers, crisis computing, disaster classification, Twitter, disaster response
DOI: 10.3233/JIFS-219419
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-10, 2024
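The macro-averaged F1 score reported in the abstract above weights every class equally, which matters when humanitarian categories are imbalanced. The following is a generic sketch of the metric with invented tweet labels, not the authors' evaluation code.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical humanitarian-category labels for six tweets
y_true = ["rescue", "damage", "rescue", "donation", "damage", "rescue"]
y_pred = ["rescue", "damage", "damage", "donation", "damage", "rescue"]
print(round(macro_f1(y_true, y_pred), 4))
```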
Authors: Muppavarapu, Vamsee | Ramesh, Gowtham
Article Type: Research Article
Abstract: The W3C Linked Building Data group is working on modeling information for integrating building information with building life-cycle data using Semantic Web technologies. The community has proposed a set of semantic models, such as ifcOWL and the Building Topology Ontology (BOT), to model various applications across the Architecture, Engineering, Construction, and Operation (AECO) domain. On the other hand, the Semantic Web of Things (SWoT) group proposed standard semantic models, such as the M3-lite and BOSH ontologies, for describing sensor networks, observations, and sensor measurements. Both of these domains have their own siloed applications, and with the evolution of the smart home domain, there is a need to combine building-information knowledge with sensor knowledge to develop cross-domain applications. However, developing such downstream applications that leverage the advantages of both domains requires interoperable knowledge. This paper proposes an interoperable ontology, the Building Topology Ontology for Smart Homes (BOTSH), with the aim of aligning the building-domain and sensor-domain semantic models. The BOTSH ontology facilitates capturing knowledge from both domains and helps in developing cross-domain applications. The potential of the proposed model was demonstrated using a real-life building model, based on competency questions framed by domain experts.
Keywords: Semantic web of things, building information models, building topology, sensors and observations, smart homes, knowledge graphs, semantic applications
DOI: 10.3233/JIFS-219425
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Pillai, Leena G. | Muhammad Noorul Mubarak, D. | Sherly, Elizabeth
Article Type: Research Article
Abstract: Speech production is a complex sequential process that involves the coordination of various articulatory features. Among the articulators, the tongue is a highly versatile, active one, responsible for shaping airflow to produce targeted speech sounds that are intelligible, clear, and distinct. This paper presents a novel approach for predicting the tongue and lip articulatory features involved in given speech acoustics using a stacked Bidirectional Long Short-Term Memory (BiLSTM) architecture, combined with a one-dimensional Convolutional Neural Network (CNN) for post-processing, with fixed weight initialization. The proposed network is trained with two datasets consisting of simultaneously recorded speech and Electromagnetic Articulography (EMA) data, each introducing variations in geographical origin, linguistic characteristics, phonetic diversity, and recording equipment. The performance of the model is assessed in Speaker Dependent (SD), Speaker Independent (SI), Corpus Dependent (CD), and Cross Corpus (CC) modes. Experimental results indicate that the proposed model with the fixed-weights approach outperformed adaptive weight initialization within a relatively small number of training epochs. These findings contribute to the development of robust and efficient models for articulatory feature prediction, paving the way for advancements in speech production research and applications.
Keywords: Acoustic-to-articulatory inversion, smoothing techniques, articulatory features, weight initialization, bidirectional long short-term memory
DOI: 10.3233/JIFS-219386
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Sheshadri, Shailashree K. | Gupta, Deepa
Article Type: Research Article
Abstract: Non-Autoregressive Machine Translation (NAT) represents a groundbreaking advancement in Machine Translation, enabling the simultaneous prediction of output tokens and significantly boosting translation speeds compared to traditional auto-regressive (AR) models. Recent NAT models have adeptly balanced translation quality and speed, surpassing their AR counterparts. The widely employed Knowledge Distillation (KD) technique in NAT involves generating training data from pre-trained AR models, enhancing NAT model performance. While KD has consistently proven its empirical effectiveness and substantial accuracy gains in NAT models, its potential for Indic languages has yet to be explored. This study pioneers the evaluation of NAT model performance for Indic languages, focusing mainly on Kashmiri-to-English translation. Our exploration encompasses varying encoder and decoder layers and fine-tuning hyper-parameters, shedding light on the vital role KD plays in helping NAT models capture variations in output data effectively. Our NAT models, enhanced with KD, exhibit sacreBLEU scores ranging from 16.20 to 22.20. The Insertion Transformer reaches a sacreBLEU of 22.93, approaching AR model performance.
Keywords: Neural machine translation, auto-regressive translation, non-autoregressive translation, Levenshtein Transformer, insertion transformer, knowledge distillation
DOI: 10.3233/JIFS-219383
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Bai, Xiaojun | Jia, Haiyang | Fu, Yanfang | Ji, Yu | Li, Suyang
Article Type: Research Article
Abstract: Predicting the remaining life of aircraft engines is paramount in aviation maintenance management. It helps formulate maintenance schedules, reduce maintenance expenses, and enhance flight safety. Traditional methods for predicting the remaining life of an engine suffer from significant errors and limited generalization capabilities. This paper introduces a predictive model based on Long Short-Term Memory (LSTM) networks and Feedforward Neural Networks (FNN) to improve prediction accuracy. Furthermore, the model's hyperparameters are optimized using the Gannet Optimization Algorithm (GOA). Leveraging the N-CMAPSS dataset for prediction and transfer learning experiments, the results highlight the significant advantages of the proposed model in forecasting the remaining life of aircraft engines. When trained and tested on the DS02 equipment dataset, the model achieves a root mean square error (RMSE) of 5.04 and a score function value of 1.39, surpassing the performance of current state-of-the-art prediction methods. Additionally, in terms of transfer learning, the model demonstrates minimal fluctuations in RMSE when applied directly to datasets of various other engine models, consistently maintaining a high level of predictive accuracy.
Keywords: Remaining life prediction, N-CMAPSS dataset, long short-term memory network, Gannet Optimization Algorithm (GOA)
DOI: 10.3233/JIFS-236225
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
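Remaining-useful-life benchmarks in the C-MAPSS family are commonly evaluated with an asymmetric scoring function that penalizes late predictions more heavily than early ones (a late prediction means the engine fails before the predicted maintenance point). The sketch below shows that common form with invented RUL values; whether the paper uses exactly this scoring function, or a normalized variant, is an assumption.

```python
import math

def rul_score(y_true, y_pred, a1=13.0, a2=10.0):
    """Asymmetric scoring function often used in C-MAPSS-style RUL benchmarks.
    d = predicted - true; d > 0 (late) is penalized with exp(d/a2) - 1,
    d < 0 (early) more gently with exp(-d/a1) - 1. Lower is better."""
    s = 0.0
    for t, p in zip(y_true, y_pred):
        d = p - t                       # positive => predicted too late
        s += math.exp(d / a2) - 1 if d >= 0 else math.exp(-d / a1) - 1
    return s

# Hypothetical RUL values (flight cycles): one early, one late, one exact
y_true = [50, 40, 30]
y_pred = [48, 45, 30]
print(round(rul_score(y_true, y_pred), 4))
```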
Authors: Anbumani, A. | Jayanthi, P.
Article Type: Research Article
Abstract: GLOBOCAN 2020 states that, after lung cancer, breast cancer is the most common cancer worldwide, affecting many women [1]. AI-based computer-assisted detection/diagnosis techniques can assist radiologists in diagnosing breast cancer earlier. Mammography is one of the most widely used and effective methods for detecting and treating breast cancer. This research proposes a customised deep-learning model for breast cancer categorization. To effectively categorise breast cancer mammography images, two customised CNN models are proposed. Three datasets, MIAS, CBIS-DDSM, and INbreast, were used to evaluate the efficacy of the proposed categorization strategy. The results show that the proposed method effectively classifies the images, obtaining 98.78%, 97.84%, and 96.92% accuracy on the MIAS, INbreast, and CBIS-DDSM datasets, respectively.
Keywords: Breast cancer, CNN, deep learning, mammography, classification
DOI: 10.3233/JIFS-232896
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Cruz, Elsy | Santos, Lourdes | Calvo, Hiram | Anzueto-Rios, Álvaro | Villuendas-Rey, Yenny
Article Type: Research Article
Abstract: In recent years, multiple studies have highlighted the growing correlation between breast density and the risk of developing breast cancer. In this research, the performance of two convolutional neural network architectures, VGG16 and VGG19, was evaluated for breast density classification across three distinct scenarios designed to compare the masking effect on the models' performance. These scenarios encompass both binary classification (fatty and dense) and multi-class classification based on the BI-RADS categorization, utilizing a subset of the ABC-Digital Mammography Dataset. In the first experiment, focusing on cases with no masses, VGG16 achieved accuracies of 93.33% and 90.00% for two- and four-class classification, respectively. The second experiment, which involved cases with benign masses, yielded remarkable accuracies of 95.83% and 93.33% with VGG16 for the same two tasks, respectively. In the third and last experiment, an accuracy of 88.00% was obtained using VGG16 for the two-class classification, while VGG19 delivered an accuracy of 93.33% for the four-class classification. These findings underscore the potential of deep learning models in enhancing breast density classification, with implications for breast cancer risk assessment and early detection.
Keywords: Mammography, breast tissue density, convolutional neural networks
DOI: 10.3233/JIFS-219378
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-9, 2024
Authors: Zheng, Z. | Gao, J.B. | Weng, Z.
Article Type: Research Article
Abstract: Body size parameters of cattle are important indices reflecting their growth, development, and health condition. Traditional manual contact measurement involves a large workload, is difficult to perform, and is prone to problems such as disturbing the normal habits of the cattle. In this paper, we address this problem by proposing a contactless body size measurement method for cattle based on machine vision. Firstly, the cattle are confined to a fixed space using a position-limiting device, and images of the body are taken from three directions (top, left, and right) using multiple cameras. Secondly, the images are segmented using a fuzzy clustering algorithm improved with neighborhood-adaptive local spatial information, and processed to extract the contour images of the top and side views. The key body measurement points were extracted using interval division and curvature calculation for the side-view images, and using skeleton extraction and pruning for the top-view images, realizing measurements of the body height (BH), rump height (RH), body slanting length (BSL), and abdominal circumference (AC) parameters of the cattle. The correlation between body size and weight data obtained by contactless methods was investigated and modeled using one-factor linear regression, one-factor nonlinear regression, multivariate stepwise regression, RBF network fitting, BP neural network fitting, support vector machine, and particle swarm optimization-based support vector machine methods, respectively. Body size parameters were collected from 137 cattle, and the results showed that the maximum errors between the measured and actual values of BH, RH, BSL, and AC were 5.0%, 4.4%, 3.6%, and 5.5%, respectively. The correlations of BH, RH, BSL, and AC with weight obtained by non-contact methods all exceeded 0.75.
The BH parameter can be selected for single-factor growth monitoring. Multiple body measurements reflect the growth status of cattle more comprehensively, with RH, BSL, and AC being important detection parameters; the multi-factor nonlinear model captures the growth characteristics of cattle more comprehensively. The contactless measurement method proposed in this paper can effectively improve work efficiency and reduce the stress reaction of cattle. It is a long-term, effective monitoring method and is of great significance in promoting accurate, welfare-oriented cattle rearing.
Keywords: Image processing, body size measurement, fuzzy clustering, non-contact measurement, cattle weight estimation
DOI: 10.3233/JIFS-238016
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
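The one-factor linear regression used to relate a single body measurement to weight is ordinary least squares with one predictor. The sketch below is generic; the abdominal-circumference and weight values are invented, not the paper's 137 measured cattle.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b for one predictor.
    Returns the slope a and intercept b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope: covariance of x and y divided by variance of x.
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Hypothetical abdominal circumference (cm) vs weight (kg)
ac = [150.0, 160.0, 170.0, 180.0, 190.0]
wt = [300.0, 340.0, 385.0, 420.0, 465.0]
a, b = linear_fit(ac, wt)
print(round(a, 3), round(b, 1))
print(round(a * 175.0 + b, 1))   # estimated weight for AC = 175 cm
```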
Authors: Vidhya, S.S. | Mathi, Senthilkumar | Anantha Narayanan, V. | Neelakanta Iyer, Ganesh
Article Type: Research Article
Abstract: The Internet of Things relies on low-power and lossy networks created by interconnecting many wireless devices with limited resources. An IPv6 routing protocol for low-power and lossy networks has become common practice for these applications. Even though this protocol addresses the challenges of low-power networks, many issues concerning quality of service and energy consumption remain open to the research community. The protocol relies on a destination-oriented directed acyclic graph, and root selection depends on constraints and metrics associated with an objective function (OF). Conventional OFs select parents based on a single metric, such as the expected transmission count or the number of hops to travel. The current paper proposes an enhancement to the OF metric, aiming to decrease node energy consumption and enhance quality of service. This improvement is achieved by combining factors including the received signal strength indicator, node distance, power, link quality indicator, and expected transmission count to select reliable communication links. The minimum power needed for reliable communication is predicted from the received signal strength indicator, node distance, receiver power, and link quality indicator using a nonlinear support vector machine. The OF value of a candidate node is computed from the power level and expected transmission count, combined using the Takagi-Sugeno fuzzy model. The proposed OF is implemented in the Cooja simulator and compared against the minimum rank with hysteresis OF and OF zero. A considerable improvement in the packet delivery ratio and a 37.5% reduction in energy consumption are obtained.
Keywords: Classification, fuzzification, power prediction, received signal strength indicator, transmission power, link quality indicator, low power networks, TSK fuzzy model
DOI: 10.3233/JIFS-219420
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-11, 2024
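A Takagi-Sugeno combination of two link metrics, as used for the objective function above, can be sketched as a zero-order TSK system: fuzzy rules fire with strengths derived from membership functions, and the output is the firing-strength-weighted average of the rule consequents. The membership shapes and consequent values below are invented for illustration, not taken from the paper.

```python
def tsk_objective(power, etx):
    """Zero-order Takagi-Sugeno-Kang combination of two link metrics.
    Two illustrative rules ("link is good" / "link is poor") fire with
    strengths from simple membership functions; the crisp output is the
    weighted average of constant rule consequents."""
    # Membership of "low power needed" (power assumed normalized to [0, 1])
    low_power = max(0.0, 1.0 - power)
    # Membership of "low ETX" (ETX of 1 is ideal; assume values up to 5)
    low_etx = max(0.0, (5.0 - etx) / 4.0)
    w_good = low_power * low_etx          # rule 1: link is good
    w_poor = 1.0 - w_good                 # rule 2: link is poor
    good_score, poor_score = 1.0, 0.0     # constant (zero-order) consequents
    return (w_good * good_score + w_poor * poor_score) / (w_good + w_poor)

print(tsk_objective(power=0.2, etx=1.0))   # strong link: high score
print(tsk_objective(power=0.9, etx=4.0))   # weak link: low score
```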
Authors: Mathi, Senthilkumar | Ramalingam, Venkadeshan | Sree Keerthi, Angara Venkata | Abhirup, Kothamasu Ganga | Sreejith, K. | Dharuman, Lavanya
Article Type: Research Article
Abstract: Long-term evolution in wireless broadband communication aims to provide secure communication for users and a high data rate for a fourth-generation network. Even though the fourth-generation network provides security, some loopholes lead to several attacks on it. A denial-of-service attack occurs when the user communicates with a rogue base station, which ensures that the user remains attached to the network assigned by the rogue node. A location leak attack occurs when packets are sniffed to find a user's location via its temporary mobile subscriber identity. Preventing rogue base station and location leak attacks helps the system achieve secure communication between the participating entities. Earlier works in long-term evolution mobility management do not address the prevention of attacks such as denial-of-service, rogue base stations, and location leaks, and they suffer from high computational costs while providing security features. Hence, the present paper addresses the vulnerability to these attacks. It also investigates how these attacks occur and expose communication in the fourth-generation network. To mitigate these vulnerabilities, the paper proposes a novel authentication scheme. The proposed scheme is simulated using Network Simulator 3, and its security analysis is shown using AVISPA, a security tool. Numerical analysis demonstrates that the proposed scheme significantly reduces the communication overhead and computational costs associated with the fourth-generation long-term evolution authentication mechanism.
Keywords: Authentication, long-term evolution, denial-of-service, attack, location leak, confidentiality
DOI: 10.3233/JIFS-219406
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-10, 2024
Authors: Zheng, Lina | Wang, Yini | Wang, Sichun
Article Type: Research Article
Abstract: Due to the relatively high cost of labeling data, only a fraction of the available data is typically labeled in machine learning. Some existing research has handled attribute selection for partially labeled data by using the importance of an attribute subset or an uncertainty measure (UM). Nevertheless, such work overlooked the missing rate of labels and the choice of the UM with optimal performance. This study uses a discernibility relation and the missing rate of labels to build UMs for partially labeled data and applies them to attribute selection. To begin with, a decision information system for partially labeled data (pl-DIS) can be used to induce two equivalent decision information systems (DISs): one DIS is constructed for labeled data (l-DIS) and, separately, another for unlabeled data (ul-DIS). Subsequently, a discernibility relation and the percentage of missing labels are introduced. Afterwards, four importance measures for attribute subsets are identified by taking into account the discernibility relation and the missing rate of labels. Their weighted sum, with weights determined by the label missing rates of the two DISs, is then calculated. These four importance measures may be seen as four UMs. In addition, numerical simulations and statistical analyses are carried out to showcase the effectiveness of the four UMs. In the end, as an application, the UM with optimal performance is used for attribute selection on partially labeled data and the corresponding algorithm is proposed. The experimental outcomes demonstrate the excellence of the proposed algorithm.
Keywords: Partially labeled data, pl-DIS, uncertainty measure, attribute selection, the missing rate of labels, discernibility relation
DOI: 10.3233/JIFS-240581
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
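The discernibility relation over a decision information system is the complement of the classical indiscernibility relation: two objects are indiscernible under an attribute subset when they agree on all of its attributes. The partition induced by that relation can be sketched as below; the toy decision table is invented, and this is a generic rough-set illustration rather than the paper's construction.

```python
from collections import defaultdict

def indiscernibility_classes(objects, attrs):
    """Partition a universe of objects into equivalence classes of the
    indiscernibility relation induced by a set of attributes: objects
    agreeing on all chosen attributes fall into the same class."""
    groups = defaultdict(list)
    for name, row in objects.items():
        key = tuple(row[a] for a in attrs)   # attribute-value signature
        groups[key].append(name)
    return sorted(sorted(g) for g in groups.values())

# Hypothetical information table: object -> {attribute: value}
table = {
    "x1": {"a": 1, "b": 0},
    "x2": {"a": 1, "b": 0},
    "x3": {"a": 0, "b": 1},
    "x4": {"a": 1, "b": 1},
}
print(indiscernibility_classes(table, ["a", "b"]))
```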
Authors: Rao, Vishisht Srihari | Vinay, P. | Uma, D.
Article Type: Research Article
Abstract: A hazy image is characterized by atmospheric conditions that reduce the image’s clarity and contrast, thereby making it less visible. This degradation in image quality can hinder advanced computer vision tasks such as object detection and identifying open spaces, which must perform with high accuracy in important real-world applications such as security surveillance and autonomous driving. In the recent past, the use of deep learning in image processing tasks has shown remarkable improvements in performance; in particular, Convolutional Neural Networks (CNNs) perform better than any other type of neural network on image-related tasks. In this paper, we propose the addition of Channel Attention and Pixel Attention layers to four state-of-the-art CNNs used for image dehazing, namely GMAN, U-Net, 123-CEDH and DMPHN. We show that the addition of these layers yields a non-trivial improvement in the quality of the dehazed images, which we demonstrate qualitatively with examples and quantitatively with PSNR and SSIM scores of 28.63 and 0.959, respectively. Through the experiments, we show that adding the mentioned attention layers to the GMAN architecture yields the best results.
Keywords: Dehazing, deep neural network, convolutional neural network, attention
DOI: 10.3233/JIFS-219391
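The channel-attention idea described in the abstract above can be illustrated with a minimal squeeze-and-excite style sketch in pure Python. The weight matrices `w1`/`w2` are hypothetical placeholders, and the papers' exact layer designs may differ: each channel is globally average-pooled, passed through a tiny two-layer gating network, and the resulting sigmoid gate rescales that channel.

```python
import math

def channel_attention(feature_map, w1, w2):
    """Channel attention sketch: pool each channel, pass the pooled vector
    through a small two-layer MLP (ReLU then sigmoid), and rescale channels.
    feature_map: list of C channels, each an HxW list of lists of floats."""
    # Squeeze: global average pool per channel
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
              for ch in feature_map]
    # Excite: hidden layer with ReLU, then per-channel sigmoid gates
    hidden = [max(0.0, sum(w * p for w, p in zip(ws, pooled))) for ws in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(ws, hidden))))
             for ws in w2]
    # Scale: reweight each channel by its gate
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_map, gates)]
```

Pixel attention follows the same pattern but produces one gate per spatial location instead of one per channel.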
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Agrawalla, Bikash | Shukla, Alok Kumar | Tripathi, Diwakar | Singh, Koushlendra Kumar | Ramachandra Reddy, B.
Article Type: Research Article
Abstract: Software fault prediction, which aims to find and fix probable flaws before they appear in real-world settings, is an essential component of software quality assurance. This article provides a thorough analysis of the use of feature ranking algorithms for successful software fault prediction. Feature ranking approaches are essential for choosing and prioritising the software metrics or qualities most important to fault prediction models. The proposed work focuses on applying an ensemble feature ranking algorithm to a specific software fault dataset, addressing the challenge posed by the dataset’s high dimensionality. In this extensive study, we examined the effectiveness of multiple machine learning classifiers on six different software projects (jedit, ivy, prop, xerces, tomcat, and poi), utilising feature selection strategies. To evaluate classifier performance under two scenarios, one with the top 10 features and another with the top 15 features, our study sought to determine the most relevant features for each project. SVM consistently performed well across the six datasets, achieving noteworthy results such as 98.74% accuracy on “jedit” (top 10 features) and 91.88% on “tomcat” (top 10 features). Random Forest achieved 89.20% accuracy with the top 15 features on “ivy.” In contrast, NB repeatedly recorded the lowest accuracy rates, such as 51.58% on “poi” and 50.45% on “xerces” (top 15 features). These findings highlight SVM and RF as the top performers, whereas NB was consistently the least successful classifier. The findings suggest that the choice of feature ranking algorithm has a substantial impact on the predictive accuracy and effectiveness of fault prediction models. The research also analyses the trade-offs between computing complexity and prediction accuracy when using various ranking schemes.
Keywords: Software fault prediction, ensemble techniques, feature ranking, random forests, support vector machine
DOI: 10.3233/JIFS-219431
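One simple way to combine several per-metric feature rankings into an ensemble ranking is mean-rank aggregation; the sketch below assumes that scheme for illustration (the article's exact aggregation rule is not specified in the abstract):

```python
def ensemble_rank(rankings):
    """Combine several feature rankings by mean rank.
    Each ranking is a list of feature indices, best feature first.
    Returns feature indices sorted by average position (ties keep index order)."""
    n = len(rankings[0])
    mean_rank = [sum(r.index(f) for r in rankings) / len(rankings)
                 for f in range(n)]
    return sorted(range(n), key=lambda f: mean_rank[f])
```

The top-10 or top-15 feature subsets used in the study would then simply be the first 10 or 15 entries of the combined ranking.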
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Su, Xue | Chen, Lijun
Article Type: Research Article
Abstract: Incomplete real-valued data often misses some labels due to the high cost of labeling data. This paper investigates partially labeled incomplete real-valued data and considers its application in semi-supervised attribute reduction. A partially labeled incomplete real-valued decision information system (p-IRVDIS) comprises two decision information systems (DISs): a labeled incomplete real-valued DIS (l-IRVDIS) and an unlabeled incomplete real-valued DIS (u-IRVDIS). The degree of importance of an attribute subset in a p-IRVDIS is defined using an indistinguishable relation and conditional information entropy. It is the weighted sum of the importance over the l-IRVDIS and the u-IRVDIS, using the missing rate of labels to measure p-IRVDIS uncertainty. Based on the degree of importance, an adaptive semi-supervised attribute reduction algorithm in a p-IRVDIS is proposed. This algorithm can automatically adapt to various missing rates of labels. The experimental results on 8 datasets reveal that the proposed algorithm performs statistically better than some state-of-the-art algorithms.
Keywords: p-IRVDIS, the degree of importance, semi-supervised attribute reduction, indiscernibility relation, conditional information entropy
DOI: 10.3233/JIFS-239559
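The weighting idea, where the labeled and unlabeled parts each contribute an importance score combined through the label missing rate, can be sketched as follows. This is a hypothetical linear form chosen for illustration; the paper's exact weights may differ:

```python
def weighted_importance(imp_labeled, imp_unlabeled, missing_rate):
    """Combine the importance of an attribute subset over the labeled and
    unlabeled parts. Assumed form: with label missing rate m in [0, 1],
    the labeled part gets weight (1 - m) and the unlabeled part weight m."""
    assert 0.0 <= missing_rate <= 1.0
    return (1.0 - missing_rate) * imp_labeled + missing_rate * imp_unlabeled
```

As the missing rate grows, the measure automatically leans more on the unlabeled part, which matches the "adaptive" behaviour the abstract describes.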
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
Authors: Tahir Kidwai, Umar | Akhtar, Nadeem | Nadeem, Mohammad | Alroobaea, Roobaea Salim
Article Type: Research Article
Abstract: In recent years, the surge in online content has necessitated the development of intelligent recommender systems capable of offering personalized suggestions to users. However, these systems often encapsulate users within a “filter bubble”, limiting their exposure to a narrow range of content. This study introduces a novel approach to address this issue by integrating a novel diversity module into a knowledge graph-based explainable recommender system. Utilizing the MovieLens 1M dataset, this research pioneers a more nuanced and transparent user experience, thereby enhancing user trust and broadening the spectrum of recommendations. Looking ahead, we aim to further refine this system by incorporating an explicit feedback loop and leveraging Natural Language Processing (NLP) techniques to provide users with insightful explanations of recommendations, including a comprehensive analysis of filter bubbles. This initiative marks a significant stride towards creating a more inclusive and informed recommendation landscape, promising users not only a wider array of content but also a deeper understanding of the recommendation mechanisms at play.
Keywords: Recommender system, explainable recommendations, filter bubble, knowledge graph, diversity
DOI: 10.3233/JIFS-219416
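Diversity of a recommendation list is commonly quantified as intra-list diversity: the average pairwise dissimilarity of the recommended items. The sketch below is a generic construction of that metric (the study's actual diversity module is not detailed in the abstract):

```python
def intra_list_diversity(items, sim):
    """Average pairwise dissimilarity (1 - similarity) over a list of
    recommended items. `sim` maps an item pair to a similarity in [0, 1].
    Higher values indicate a more diverse list, i.e. a weaker filter bubble."""
    pairs = [(i, j) for i in range(len(items)) for j in range(i + 1, len(items))]
    return sum(1.0 - sim(items[i], items[j]) for i, j in pairs) / len(pairs)
```

A diversity module can then re-rank candidates to trade off predicted relevance against this score.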
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Li, Xin | Hao, Miao | Ru, Changhai | Wang, Yong | Zhu, Junhui
Article Type: Research Article
Abstract: With the development of science and technology, people place ever higher requirements on robots. Robots are increasingly applied in industrial production and in daily life, so they must be better able to perceive and process the external environment; hence the emergence of the visual servo system. Pose estimation is a major problem in current vision systems, with great application value in positioning and navigation, target tracking and recognition, virtual reality, and motion estimation. This paper therefore presents research on robot arm pose estimation and control based on machine vision. The paper first analyzed machine vision technology and then carried out experiments comparing the accuracy and stability of two methods for robot arm pose estimation. The experimental results showed that with Kalman’s centralized data fusion method, at a noise level of 1 pixel, the maximum error of the X-axis angle was only 0.55 and the average error was 0.02. With Kalman’s distributed data fusion method, the average error of X-axis displacement was 0.06 and the maximum was 17.66. Kalman’s centralized data fusion method was better in terms of both accuracy and stability. However, in general, both methods produced very good results and could accurately control the position and posture of the manipulator.
Keywords: Position and attitude estimation of manipulator, machine vision, kalman filter, world coordinate system
DOI: 10.3233/JIFS-237904
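The Kalman filtering at the heart of both compared fusion methods can be illustrated with a minimal one-dimensional filter. This is a generic textbook sketch (constant-state model, scalar noise parameters), not the paper's multi-sensor formulation:

```python
def kalman_1d(measurements, q=1e-3, r=1.0, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter: fuse a stream of noisy measurements of a
    roughly constant quantity. q is process noise, r is measurement noise."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                # predict: state assumed constant, variance grows
        k = p / (p + r)       # Kalman gain: trust measurement vs. prediction
        x += k * (z - x)      # update state with the measurement residual
        p *= (1.0 - k)        # posterior variance shrinks after the update
        estimates.append(x)
    return estimates
```

Centralized fusion would feed all sensors' measurements into one such filter, while distributed fusion runs a filter per sensor and merges the estimates.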
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Wang, Wei | Xu, Dehao | Lv, Jing | Rong, Jian | He, Donggang | Li, Shuangshuang
Article Type: Research Article
Abstract: The water quality factors in intensive marine Stichopus japonicus aquaculture change with the seasons, so water temperature, salinity, pH value, and nitrite were selected as auxiliary variables for measuring ammonia nitrogen concentration. The FCM (Fuzzy C-Means) algorithm was adopted to classify them. Based on the EM (Expectation Maximization) algorithm, fuzzy sub-models of ammonia nitrogen concentration were constructed around each operating point, and finally the fuzzy sub-models were combined according to the posterior distribution of the characteristics of the sampling data. The ammonia nitrogen concentration prediction model was tested and verified on data collected at Xinyulong Marine Biological Seed Technology Co., Ltd in Dalian, China.
Keywords: Water quality, stichopus japonicus, expectation maximization, multi-model
DOI: 10.3233/JIFS-239032
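The FCM step that assigns each water-quality sample a fuzzy membership in every operating-point cluster follows the standard update u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)). The sketch below is generic FCM with a hypothetical fuzzifier m = 2, not the paper's full multi-model pipeline:

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy C-Means membership update: for each point, return a row of
    memberships over the cluster centers that sums to 1."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    U = []
    for x in points:
        # Clamp distances away from zero so a point sitting on a center
        # does not divide by zero
        d = [max(dist(x, c), 1e-12) for c in centers]
        row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                         for j in range(len(centers)))
               for i in range(len(centers))]
        U.append(row)
    return U
```

The EM-built sub-models can then be blended using these memberships (or the posterior weights the paper derives) to produce one combined prediction.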
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Shuangyuan, Li | Qichang, Li | Mengfan, Li | Yanchang, Lv
Article Type: Research Article
Abstract: With the development of information technology, the number and variety of cyber attacks continue to increase, making network security issues increasingly important. Intrusion detection has become a vital means of dealing with cyber threats. Current intrusion detection methods predominantly rely on machine learning, which suffers from limited detection capability and the requirement for extensive feature engineering. Additionally, current intrusion detection datasets face the challenge of data imbalance. To address these challenges, this paper proposes a novel solution that leverages Generative Adversarial Networks (GANs) to balance the dataset and introduces an attention mechanism into the generator to efficiently extract key feature information; the mechanism can effectively rank the key information in the data and quickly capture important features. Subsequently, a combination of 1D Convolutional Neural Networks (1DCNN) and Bidirectional Gated Recurrent Units (BiGRU) is employed to construct a classification model capable of extracting both spatial and temporal features. Furthermore, Particle Swarm Optimization (PSO) is utilized to optimize the input weights and hidden biases of the model, further improving its accuracy and robustness. Finally, the model is trained and applied to network intrusion detection. To demonstrate the applicability of the model, experiments were conducted using the NSL-KDD and UNSW-NB15 datasets. The final results showed that the proposed model outperformed other models, achieving accuracies of 99.15% and 97.33% on the respective datasets. This indicates that the model improves the efficiency of network intrusion detection and better ensures the effectiveness of network security.
Keywords: Intrusion detection, GAN, 1DCNN, BiGRU, PSO
DOI: 10.3233/JIFS-236285
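The PSO step used to tune the model's input weights and hidden biases can be sketched generically as follows. The inertia and attraction coefficients here are common textbook values, not the paper's settings, and the objective is an arbitrary function to minimize:

```python
import random

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Minimal particle swarm optimization: each particle tracks its own best
    position, the swarm tracks a global best, and velocities blend inertia
    with attraction toward both bests. Returns (best position, best value)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive and social coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `f` would evaluate validation loss of the 1DCNN-BiGRU model under a candidate weight/bias vector.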
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Liu, Xia | Zhang, Xianyong | Chen, Jiaxin | Chen, Benwei
Article Type: Research Article
Abstract: Attribute reduction is an important method in data analysis and machine learning, and it usually relies on algebraic and informational measures. However, few existing informational measures have considered the relative information of decision class cardinality, and the fused application of algebraic and informational measures is also limited, especially in attribute reduction for interval-valued data. For interval-valued decision systems, this paper presents a coverage-credibility-based condition entropy and an improved rough decision entropy, and further establishes corresponding attribute reduction algorithms for optimization and applicability. Firstly, the concepts of interval credibility, coverage and coverage-credibility are proposed, and an improved condition entropy is defined by virtue of the integrated coverage-credibility. Secondly, the fused rough decision entropy is constructed by fusing the improved condition entropy and the roughness degree. By introducing the coverage-credibility, the proposed uncertainty measurements enhance the relative information of decision classes. In addition, the nonmonotonicity of the improved condition entropy and the rough decision entropy with respect to attribute subsets and thresholds is validated by theoretical proofs and experimental counterexamples. Then, the two rough decision entropies drive monotonic and nonmonotonic attribute reductions, and the corresponding reduction algorithms are designed as heuristic searches. Finally, data experiments not only verify the effectiveness and improvements of the proposed uncertainty measurements, but also illustrate the optimization of the reduction algorithms through better classification accuracy than four comparative algorithms.
Keywords: Rough sets, Attribute reduction, Interval-valued decision systems, Algebraic measures and informational measures, Coverage-credibility-based rough decision entropy
DOI: 10.3233/JIFS-239544
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
Authors: Li, Zexin | Li, Qiulin | Li, Zepeng | Huang, Lixia | Pu, Song | Luo, Zunhao
Article Type: Research Article
Abstract: The tourist attraction recommendation (TAR) problem has gained attention due to its potential to enhance tourist services. Existing studies focus on meeting tourists’ individual needs but overlook the tour operator’s interests as the TAR service provider. The TAR problem is made more challenging by the high variability of customer demand, which is difficult to predict accurately beforehand. This paper examines TAR in response to random changes in tourist demand, aiming to minimize transportation costs, cooperation expenses between tour operators and attractions, ticket booking fees, and promotion costs, where the ambiguity set is defined by means, mean absolute deviations, and the support set. Firstly, a distributionally robust model is proposed to identify suitable attractions for cooperation, along with determining the associated costs of ticket booking, promotion, and tourist transportation, while imposing a chance constraint on the service level. Subsequently, the model is reformulated into a tractable mixed integer linear programming model using duality theory. Numerical experiments illustrate that the proposed model outperforms both the stochastic programming model and the deterministic model in terms of risk level in out-of-sample tests. In particular, considering uncertainty and distributional ambiguity makes the model more accurate and credible.
Keywords: Attraction recommendation, distributionally robust optimization, demand uncertainty
DOI: 10.3233/JIFS-238169
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Tian, Wen | Zhang, Yining | Fang, Qin | Liu, Weidong
Article Type: Research Article
Abstract: To resolve the imbalance between traffic demand and airspace capacity in high-altitude air route networks, reduce unnecessary delay costs, and improve route operation efficiency, this paper studies the multi-objective resource allocation problem of the air route network for the CTOP program. Taking the affected flights in the congested area of air routes as the research object, and taking into account the constraints of actual flight operation, FCA time slot resource availability limits, FCA capacity limits, etc., a multi-objective optimization model for air route network resource allocation is established, aiming to minimize the total delay time of flights and maximize the fairness of airlines, and an improved NSGA-II algorithm is designed to solve the model. Based on actual operational data for air routes in East China, the Pareto optimal solution set is obtained; compared with the traditional RBS algorithm, the average delay time is reduced by 5.49% and the average fairness loss is reduced by 66.76%. The results show that the proposed multi-objective optimization model and the improved NSGA-II algorithm perform better, taking into account the fairness of each airline while reducing the total delay cost, realizing the allocation of optimal flight trajectories and time slot resources, and providing a reference scheme for air traffic control resource scheduling.
Keywords: Air traffic flow management, resource allocation, collaborative trajectory options program (CTOP), multi-objective optimization, genetic algorithm
DOI: 10.3233/JIFS-233588
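The core of NSGA-II is ranking candidate allocations into Pareto fronts over the competing objectives (here, total delay and fairness loss, both minimized). A compact, non-optimized non-dominated sort looks like this; production NSGA-II uses a faster bookkeeping scheme plus crowding distance:

```python
def non_dominated_sort(objs):
    """Assign each solution a Pareto front index (0 = non-dominated).
    objs: list of objective vectors, all objectives to be minimized."""
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and strictly better somewhere
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    n = len(objs)
    rank = [None] * n
    remaining = set(range(n))
    front = 0
    while remaining:
        current = {i for i in remaining
                   if not any(dominates(objs[j], objs[i])
                              for j in remaining if j != i)}
        for i in current:
            rank[i] = front
        remaining -= current
        front += 1
    return rank
```

The improved NSGA-II in the paper would apply such ranking within its selection step, then pick among front-0 solutions according to the decision maker's delay/fairness preference.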
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Velusamy, Saravanan | Murugan, Pallikonda Rajasekaran | Vishnuvarthanan, G. | Thiyagarajan, Arunprasath | Ramaraj, Kottaimalai | Kamalakkannan, Vidyavathi
Article Type: Research Article
Abstract: Because Electrocardiogram (ECG) signals are challenging to replicate yet easy to acquire, ECG-based identification has become a new direction in biometric recognition research. Classic feature extraction techniques require hand-crafted or feature-specific processing, and the methods used for selecting and integrating features are time-consuming. The main objective of this study is to develop a deep learning approach that learns the digital characteristics of ECG data, thereby saving many signal pre-processing steps. This research proposes a novel technique for X-wave recognition in ECG signals using a max-min threshold technique, together with classification of the ECG signal. The signal is first processed for noise removal and normalization, and the processed signal is then used to recognize the X-wave in the ECG signal. From the recognized X-wave, the ECG signal is classified using an Improved Support Vector Machine (ISVM). The QRS complex is detected using a Stacked Auto-Encoder with Neural Networks (SAENN). The study used raw ECG signals and entropy-based features evaluated from the extracted QRS complexes. Evaluations are based on classifying heart disorders into two, five, and twenty classes. The experimental findings showed that the suggested model attained a high classification accuracy of 97%, precision of 89%, recall of 90%, and F1-score of 88%.
Keywords: Electrocardiogram, X-wave recognition, QRS complex, cross-validation, entropy-based features, classification
DOI: 10.3233/JIFS-241456
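An entropy-based feature of the kind computed over extracted QRS windows can be sketched as histogram Shannon entropy. This is a generic construction for illustration; the paper's exact entropy measures are not specified in the abstract, and the bin count here is an arbitrary choice:

```python
import math

def shannon_entropy(signal, bins=8):
    """Histogram the samples of a signal window into equal-width bins and
    return the Shannon entropy (in bits) of the bin distribution.
    A flat/constant window gives 0; a spread-out window gives higher entropy."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0      # avoid zero width for constant input
    counts = [0] * bins
    for v in signal:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(signal)
    return -sum(c / n * math.log2(c / n) for c in counts if c)
```

Such scalar features, computed per detected QRS complex, can then feed a classifier like the ISVM described above.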
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-11, 2024
Authors: Gong, Zengtai | Zhang, Yuanyuan
Article Type: Research Article
Abstract: In this paper, we focus on generalized fuzzy complex numbers and propose a straightforward matrix method to solve the dual rectangular fuzzy complex matrix equations C·Z̃ + L̃ = R·Z̃ + W̃, in which C and R are crisp complex matrices and Z̃, L̃ and W̃ are fuzzy complex number matrices. The existing methods for solving fuzzy complex matrix equations separately calculate the extended solution and the corresponding parameters of the real and imaginary parts, whereby the algebraic solution of the equations is obtained. By means of interval arithmetic and the embedding approach, the n × n dual rectangular fuzzy complex linear systems can be converted into 2n × 2n fuzzy linear systems, which are in turn equivalent to 4n × 4n real linear systems. By directly solving the 4n × 4n real linear systems, the algebraic solutions can be obtained. The general dual rectangular fuzzy complex matrix equations and dual rectangular fuzzy complex linear systems are investigated using generalized inverses of matrices. Finally, some examples are given to illustrate the effectiveness of the method.
Keywords: Fuzzy number, fuzzy complex number, rectangular fuzzy complex number, dual rectangular fuzzy complex matrix equations
DOI: 10.3233/JIFS-239305
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-21, 2024
Authors: Aguilar-Canto, Fernando | Luján-García, Juan Eduardo | Espinosa-Juárez, Alberto | Calvo, Hiram
Article Type: Research Article
Abstract: Inferring phylogenetic trees in human populations is a challenging task that has traditionally relied on genetic, linguistic, and geographic data. In this study, we explore the application of Deep Learning and facial embeddings for phylogenetic tree inference based solely on facial features. We use pre-trained ConvNets as image encoders to extract facial embeddings and apply hierarchical clustering algorithms to construct phylogenetic trees. Our methodology differs from previous approaches in that it does not rely on preconstructed phylogenetic trees, allowing for an independent assessment of the potential of facial embeddings to capture relationships between populations. We have evaluated our method with a dataset of 30 ethnic classes, obtained by web scraping and manual curation. Our results indicate that facial embeddings can capture phenotypic similarities between closely related populations; however, problems arise in cases of convergent evolution, leading to misclassifications of certain ethnic groups. We compare the performance of different models and algorithms, finding that the model with a ResNet50 backbone and the face recognition module yields the best overall results. Our results show the limitations of using only facial features to accurately infer a phylogenetic tree and highlight the need to integrate additional sources of information to improve the robustness of population classification.
Keywords: Convolutional neural networks, deep learning, hierarchical clustering, phylogenetic tree
DOI: 10.3233/JIFS-219343
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-9, 2024
Authors: Li, Yuangang | Gao, Xinrui | Ni, Hongcheng | Song, Yingjie | Deng, Wu
Article Type: Research Article
Abstract: In this paper, an adaptive differential evolution algorithm with multiple strategies, namely ESADE, is proposed to address premature convergence and high time complexity in complex optimization problems. In ESADE, the population is divided into several sub-populations after the individuals are sorted by fitness value. Different mutation strategies are then proposed for the different sub-populations to balance global exploration and local optimization. Next, a new self-adaptive strategy is designed to adjust parameters and avoid falling into local optima once the convergence accuracy has reached its maximum value. A complex multi-objective airport gate allocation optimization model is then constructed, with objectives of maximizing the flight allocation rate, the near-gate allocation rate, and the passenger rate at near gates, and is decomposed into several single-objective optimization models. Finally, ESADE is applied to solve the airport gate allocation optimization model. The experimental results show that, compared with current common heuristic optimization algorithms, the proposed ESADE algorithm can effectively solve the complex airport gate allocation problem and achieve ideal gate allocation results.
Keywords: Differential evolution, multi-strategy, self-adaptive strategy, gate allocation, optimization
DOI: 10.3233/JIFS-238217
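The classical DE/rand/1/bin scheme that ESADE builds on (sub-population splitting and self-adaptation omitted) can be sketched as follows; `F` and `CR` here are common default values, not the paper's adapted parameters:

```python
import random

def de_minimize(f, dim, pop_size=20, gens=100, F=0.5, CR=0.9,
                lo=-5.0, hi=5.0, seed=1):
    """Bare-bones DE/rand/1/bin: mutate with a scaled difference of two
    random individuals added to a third, crossover binomially, and keep
    the trial vector if it is no worse. Returns (best vector, best value)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)   # guarantees at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:             # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

ESADE's additions, per the abstract, are fitness-sorted sub-populations with per-group mutation strategies and a self-adaptive rule for F and CR.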
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Sowndeswari, S. | Kavitha, E. | Krishnamoorthy, Raja
Article Type: Research Article
Abstract: The development of tiny sensing nodes capable of efficient wireless communication in Wireless Sensor Networks (WSNs) can be attributed to rapid advancements in processors and radio technology. Data transmission in WSNs occurs through multi-hop routing, which relies on the cooperation of nodes. This collaboration between nodes has rendered these networks susceptible to various attacks. It is imperative to employ a security scheme that evaluates the dependability of nodes in distinguishing malicious nodes from non-malicious ones. In recent years, growing significance has been placed on security-based routing protocols with energy constraints as valuable mechanisms for enhancing the security and performance of WSNs. A novel solution called the Deep Learning-based Hybrid Energy Efficient and Security System (DL-HE2S2) is introduced to address these challenges. The research workflow encompasses several essential stages: the deployment of nodes, the creation of clusters, the selection of cluster heads, the detection of malevolent nodes within each cluster, and the determination of optimal intra- and inter-cluster paths using the routing algorithm for efficient packet transmission. The algorithm is designed to achieve energy efficiency and enhance network security while also accounting for various performance metrics, including a mean network lifetime of 187.244 hours, a throughput of 59.88 kilobits per second, an end-to-end latency of 11.939 milliseconds, a packet loss of 14.9%, a packet delivery ratio of 99.194%, network security of 92.026%, and energy usage of 19.424 J. This research examines the algorithm’s scalability and efficiency across various network sizes using Network Simulator NS-2. DL-HE2S2 offers valuable insights that can be applied to practical implementations in multiple applications.
Keywords: Wireless sensor networks, energy efficiency, secured routing, cluster
DOI: 10.3233/JIFS-235322
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
Authors: Xu, Liwen | Chen, Jiali
Article Type: Research Article
Abstract: Node classification in graph learning faces significant challenges due to imbalanced data, particularly for under-represented samples from minority classes. To address this issue, existing methods often rely on synthetic minority over-sampling techniques, which introduce additional complexity during model training. In light of these challenges, we introduce GraphECC, an innovative approach that addresses numerical anomalies in large-scale datasets by supplanting the traditional CE loss function with an Enhanced Complementary Classifier (ECC) loss function, a novel modification of the CCE loss. This alteration ensures computational stability and mitigates potential numerical anomalies by incorporating a slight offset in the denominator during the computation of the complementary probability distribution. In this paper, we present a novel training paradigm, the Enhanced Complementary Classifier (ECC), which offers “imbalance defense for free” without the need for extra procedures to improve node classification accuracy. The ECC approach optimizes model probabilities for the ground-truth class, akin to the cross-entropy method. Additionally, it effectively neutralizes probabilities associated with incorrect classes through a “guided” term, achieving a balanced trade-off between the two aspects. Experimental results demonstrate that our proposed method not only enhances model robustness but also surpasses the widely used cross-entropy training objective. Moreover, we demonstrate the versatility of our method by seamlessly integrating it with various well-known adversarial training techniques, resulting in significant gains in robustness. Notably, our approach represents a breakthrough, as it enhances model robustness without compromising performance, distinguishing it from previous attempts. The code for GraphECC can be accessed at https://github.com/12chen20/GraphECC.
Keywords: Imbalanced node classification, trade-off optimization, enhanced complementary classifier (ECC), graph learning, minority classes
DOI: 10.3233/JIFS-239663
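The "offset in the denominator" idea can be illustrated with a complement-entropy-style loss sketch. This is a generic construction in the spirit the abstract describes, with a hypothetical `eps`; the paper's exact ECC formula may differ. The wrong-class probabilities are renormalised by 1 − p_target, and `eps` keeps that denominator away from zero when the model is fully confident:

```python
import math

def ecc_loss(logits, target, eps=1e-7):
    """Cross-entropy on the true class plus a 'guided' complement term that
    nudges the wrong-class probability mass toward uniform (neutralised)."""
    # Softmax with the usual max-shift for numerical stability
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Cross-entropy term on the ground-truth class
    ce = -math.log(probs[target] + eps)
    # Complement distribution over the wrong classes; the eps offset in the
    # denominator is the stabilisation trick described in the abstract
    guided = 0.0
    for j, p in enumerate(probs):
        if j != target:
            q = p / (1.0 - probs[target] + eps)
            guided += q * math.log(q + eps)
    return ce + guided
```

A confidently correct prediction yields a lower loss than a confidently wrong one, while the guided term alone is minimised when the residual mass is spread evenly over the incorrect classes.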
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Ma, Nana | Wang, Lili | Long, Yuting
Article Type: Research Article
Abstract: Throughout history, music has been a medium for cultural communication and artistic expression, embodying the ideologies and experiences of various nations and societies. Music culture communication is crucial for encouraging cultural diversity and understanding and for developing social cohesion and community building. Music teaching management is the process of setting up, arranging, and executing music education programs so that they successfully teach students the essential skills and information necessary to become proficient musicians. Users’ exact preferences for various areas of interest cannot be determined, and traditional music recommendations are not sufficiently accurate about users’ choices. A recommender system estimates or anticipates people’s preferences and offers appropriate recommendations. The sparsity problem emerges when insufficient data is available for recommendation, and limited coverage is one of the key drawbacks of social labeling. Cold start issues can also be difficult, since new music learners might not have provided sufficient details about their musical tastes. Hence, the Hybridized Fuzzy logic-based Content and Collaborative Music Recommendation (HFC2MR) system is proposed to create personalized music teaching plans that are effective and engaging for each student, based on their music preferences and learning outcomes. Enhanced Fuzzy C-Means clustering is used in the collaborative recommendations to group users by shared musical tastes and to provide each user with more individualized, accurate music recommendations based on the listening habits and preferences of other users in the same cluster. Subsequently, the recommender system is assessed using parameters such as accuracy, precision, F1-score, and recall ratio, with optimal cluster selection. The coverage ratio is used to compare experimental data based on the skill capacity covered in the assessment of music teaching. The RMSE metric is used to evaluate the accuracy of students’ performance based on music attributes related to teaching goals.
Keywords: Music teaching management, fuzzy logic, recommender system, clustering and similarity
DOI: 10.3233/JIFS-232422
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Zhou, Yue | Chen, Qiwei
Article Type: Research Article
Abstract: Studying the evolution of karst rocky desertification (KRD) in control areas of diverse geomorphologic types and its correlation with land use provides valuable insights for identifying priority areas and implementing effective treatment measures. Employing Remote Sensing (RS) and GIS, this research quantitatively examines the evolution of KRD and its relationship with land use in the karst mountain and gorge areas of Guizhou Province over the period 2010 to 2020. The findings reveal continuous improvement in KRD across the study areas, albeit with noticeable regional disparities. Notably, the karst mountain region exhibited significantly higher change areas and rates of KRD, non-KRD, …light KRD, and moderate KRD compared to the gorge area, underscoring better desertification control in the former region. A discernible correlation emerges between different karst geomorphologic types, the distribution and changes in land use types, and the evolution of KRD. Land use change emerges as a pivotal factor influencing the improvement of KRD in these areas. Changes in land use patterns corresponded with a decrease in KRD in dry land, other woodland, grassland, and bare land across both regions. However, the response of KRD to land use patterns varied across control areas with different geomorphologic environments, resulting in geographical differentiation in KRD evolution. Key land use conversions, notably from shrubland to forestland and dry land to garden land in the gorge, and shrubland to forestland in the mountain, contributed significantly to KRD dynamics in these regions. Notably, in the gorge area, KRD primarily occurred in garden land, other woodland, dry land, and grassland. In contrast, in the mountain area, KRD was prevalent in shrubland, dry land, and grassland, highlighting distinct responses and contributions to its evolution. The study observes substantial land use change in KRD-improved areas, particularly in the gorge region. 
Notably, the responsiveness of KRD to woodland conversions (shrubland, forestland, other woodland) varied across different geomorphologic environments. The dynamics of rocky desertification occurrence (RDO) and the occurrence structure of KRD in various land use types exhibited significant differences between the two regions. The gorge area demonstrated generally higher RDO, with a relatively stable and simpler occurrence structure of KRD compared to the more dynamic and varied structure observed in the mountain area. The sequencing of KRD occurrence in both areas displayed stability in specific land use types, with varying intensities noted between them.
Keywords: Karst, rocky desertification, land use, evolution, geomorphology
DOI: 10.3233/JIFS-241536
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
Authors: Qin, Hao | Zou, Yanli | Yu, Guoliang | Liu, Huipeng | Tan, Yufei
Article Type: Research Article
Abstract: In the process of mapping outdoor undulating and flat roads, existing LiDAR SLAM systems often encounter issues such as map distortion and ghosting. These problems arise from the low vertical resolution of multi-line LiDAR, which easily leads to odometry height drift during mapping. To address this challenge, this study proposes a novel LiDAR SLAM system named SOHD-LOAM, designed specifically to suppress odometry height drift. The system encompasses several critical components, including data preprocessing, front-end LiDAR odometry, back-end LiDAR mapping, loop detection, and graph optimization. SOHD-LOAM leverages the road gradient limitation algorithm and the height …smoothing algorithm as its core, while also integrating the Kalman filter, loop detection, and graph optimization techniques. To evaluate the performance of SOHD-LOAM, comprehensive experiments are conducted using the KITTI datasets and real-world scenes. The experimental results demonstrate that SOHD-LOAM achieves superior accuracy and robustness in global odometry compared to the state-of-the-art LEGO-LOAM. Specifically, the height errors on sequences 00 and 05 were found to be 40.62% and 61.92% lower, respectively, than those of LEGO-LOAM. Additionally, the maps generated by SOHD-LOAM exhibit no distortion or ghosting, thereby significantly enhancing map quality.
Keywords: Autonomous driving, SLAM, odometry height drift, road gradient limitation, height smoothing, loop detection
DOI: 10.3233/JIFS-235708
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
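The road-gradient-limitation and height-smoothing ideas named in the SOHD-LOAM abstract can be illustrated with a minimal sketch. Everything here is hypothetical — the function name, the gradient threshold, and the exponential smoothing factor stand in for the paper's actual algorithms, which are not specified in the abstract:

```python
import math

def limit_height_drift(poses, max_grade=0.15, alpha=0.3):
    """Suppress odometry height drift (illustrative sketch, not SOHD-LOAM itself).

    poses: list of (x, y, z) odometry positions. The per-step height change is
    clamped to what a road of at most `max_grade` slope allows over the
    traveled horizontal distance, then exponentially smoothed.
    """
    out = [poses[0]]
    z_s = poses[0][2]                       # smoothed height state
    for (x0, y0, _), (x1, y1, z1) in zip(poses, poses[1:]):
        horiz = math.hypot(x1 - x0, y1 - y0)
        z_prev = out[-1][2]
        dz = z1 - z_prev
        bound = max_grade * horiz           # road gradient limitation
        dz = max(-bound, min(bound, dz))
        z_s = (1 - alpha) * z_s + alpha * (z_prev + dz)  # height smoothing
        out.append((x1, y1, z_s))
    return out
```

Fed a flat trajectory whose raw z drifts upward, the clamped-and-smoothed heights stay near zero instead of accumulating the drift.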
Authors: Wei, YuHan | Kim, Young-Ju
Article Type: Research Article
Keywords: Camel herd algorithm (CHA), camel-bat swarm optimization (CBSO), cultural and creative product (CCP) Design, graphic design
DOI: 10.3233/JIFS-236320
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Lalitha, S. | Sridevi, N. | Deekshitha, Devarasetty | Gupta, Deepa | Alotaibi, Yousef A. | Zakariah, Mohammed
Article Type: Research Article
Abstract: Speech Emotion Recognition (SER) has advanced considerably during the past 20 years. To date, various SER systems have been developed for monolingual, multilingual and cross-corpus contexts. However, in a country like India, where numerous languages are spoken and humans often converse in more than one language, a dedicated SER system for the mixed-lingual scenario is crucial, and establishing one is the focus of this work. A self-recorded database that includes speech emotion samples in 11 diverse Indian languages has been developed. In parallel, a mixed-lingual database is formed from three popular standard databases, Berlin, Baum and SAVEE, …to represent a mixed-lingual environment for a western background. A detailed investigation of the GeMAPS (Geneva Minimalistic Acoustic Parameter Set) feature set for mixed-lingual SER is performed. A distinct set of MFCC (Mel Frequency Cepstral Coefficients) coefficients derived from sine- and cosine-based filter banks enriches the GeMAPS feature set and proves robust for mixed-lingual emotion recognition. Various Machine Learning (ML) and Deep Learning (DL) algorithms have been applied for emotion recognition. The experimental results demonstrate that GeMAPS features classified with ML are quite robust for recognizing all the emotions across the mixed-lingual database of western languages. However, with the diverse recording conditions and languages of the Indian self-recorded database, the enriched GeMAPS features classified using DL prove significant for mixed-lingual emotion recognition.
Keywords: Emotion, GeMAPS, mixed-lingual, sine, cosine filter bank
DOI: 10.3233/JIFS-219390
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
Authors: Bisht, Akhilesh | Gupta, Deepa
Article Type: Research Article
Abstract: Neural Machine Translation (NMT) for low resource languages is a challenging task due to the unavailability of large parallel corpora. The efficacy of Transformer-based NMT models largely depends on the scale of the parallel corpus and the configuration of hyperparameters used during model training. This study aims to delve into and elucidate the impact of hyperparameters on the performance of NMT models for low resource languages. To accomplish this, a series of experiments is conducted using an open-source Hindi-Kangri corpus to train both supervised and semi-supervised NMT models. Throughout the experimentation process, a significant number of discrepancies were identified within the …dataset, necessitating manual correction. The best translation performance, evaluated with respect to the metrics BLEU (0–1), SacreBLEU (0–100), Chrf (0–100), Chrf+ (0–100), Chrf++ (0–100) and TER (%), is (0.15, 14.98, 41.43, 41.49, 38.77, 68.20) for the Hindi to Kangri direction, and (0.283, 28.17, 49.71, 50.64, 48.63, 51.25) for the Kangri to Hindi direction.
Keywords: Neural machine translation, low resource language, low resource MT, transformers, semi-supervised MT, Kangri, natural language processing
DOI: 10.3233/JIFS-219384
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Momena, Alaa Fouad | Gazi, Kamal Hossain | Mukherjee, Asesh Kumar | Salahshour, Soheil | Ghosh, Arijit | Mondal, Sankar Prasad
Article Type: Research Article
Abstract: With the rise of the Internet of Everything (IoE), the number of smart gadgets is increasing rapidly, producing huge volumes of data, which has led to issues with traditional cloud computing models such as inadequate security, slow response times, poor privacy, and bandwidth overload. Conventional cloud computing is no longer adequate for supporting the diversified needs of users and the extraordinary volume of data processing, so edge computing technologies have emerged. This study considers edge computing in an educational institute in a scientific way. Multi criteria decision making (MCDM) is one of the most suitable decision making processes proposed …to choose optimal alternatives by considering multiple conflicting criteria. The entropy weighting method is used to evaluate factor weights. Weighted Aggregated Sum Product Assessment (WASPAS) and Combined Compromise Solution (CoCoSo) based MCDM methodologies examine the ranking of alternatives for this study. Multiple decision makers (DMs) give opinions with a Pentagonal Fuzzy Soft Set (PFSS) to express the uncertainty and fuzziness of the data set. The set operations and arithmetic operations of PFSS are discussed in detail. A new de-fuzzification method for PFSS is also proposed in this study. The criteria weights are calculated and the alternatives prioritized based on the source data. Lastly, sensitivity analysis and comparative analysis are conducted to check the stability of the result.
Keywords: Edge computing, Academic institute, PFSS, Entropy, WASPAS, CoCoSo
DOI: 10.3233/JIFS-239887
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
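The entropy weighting step mentioned in the abstract can be sketched generically. This is the textbook entropy weight method for benefit-type criteria, not necessarily the authors' exact formulation (their inputs come from pentagonal fuzzy soft sets after de-fuzzification):

```python
import math

def entropy_weights(matrix):
    """Entropy weighting for MCDM (generic sketch; benefit-type criteria assumed).

    matrix[i][j]: positive score of alternative i on criterion j.
    Returns one weight per criterion; criteria whose scores vary more across
    alternatives (lower entropy) receive larger weights.
    """
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]                      # normalized column
        e = -k * sum(v * math.log(v) for v in p if v > 0)  # entropy of criterion j
        divergences.append(1.0 - e)                        # degree of divergence
    s = sum(divergences)
    return [d / s for d in divergences]
```

A criterion on which every alternative scores identically carries no information and gets (near-)zero weight, while a discriminating criterion absorbs the rest.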
Authors: Jaiseeli, C. | Raajan, N.R.
Article Type: Research Article
Abstract: Medical and satellite image analysis require incredibly high resolution. Super-resolution combines several low-resolution images of the same scene to generate a high-resolution image. Super-resolution employing deep learning techniques still suffers from an illumination issue. This paper proposes a novel CGIHE-VDSR algorithm that integrates the Very Deep Super Resolution (VDSR) Network with Color Global Image Histogram Equalization (CGIHE) to improve image resolution. In the proposed method, the low-resolution image is first histogram equalized using the CGIHE algorithm. Then, the VDSR network is applied to the histogram-equalized image for super-resolution. The comparison of real-time data with the benchmark images is …done using the proposed algorithm on the MATLAB platform. The PSNR and SSIM metrics demonstrate that the super-resolution image obtained using the proposed method is significantly better than those of existing methods.
Keywords: Histogram equalization, super-resolution, CNN, subsample image, VDSR, residual
DOI: 10.3233/JIFS-219392
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
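The histogram-equalization pre-processing step that CGIHE-VDSR applies before super-resolution can be sketched for the simple grayscale case. The paper's CGIHE operates globally on color images in MATLAB; this generic version only shows the classical cumulative-distribution transform it builds on:

```python
def equalize_histogram(img, levels=256):
    """Global histogram equalization for a grayscale image (classical sketch;
    the paper's CGIHE extends the idea to color images).

    img: 2D list of ints in [0, levels). Returns the equalized image, with
    intensities remapped so the cumulative distribution becomes roughly linear.
    """
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution, mapped back onto [0, levels - 1]
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(acc)
    cdf_min = next(c for c in cdf if c > 0)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]
```

A low-contrast image whose pixels cluster in a narrow band (e.g. values 50 and 51) is stretched across the full dynamic range, which is exactly the illumination correction the abstract motivates.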
Authors: Javed, Hira | Sufyan Beg, M.M. | Akhtar, Nadeem | Alroobaea, Roobaea
Article Type: Research Article
Abstract: Vlogs, recordings, news, and sports coverage are huge sources of multimodal information that is not limited to text but extends to audio, images and videos. Applications such as summary generation, image/video captioning, multimodal sentiment analysis, and cross-modal retrieval require Computer Vision along with Natural Language Processing techniques to extract relevant information. Information from different modalities must be leveraged in order to extract quality content. Hence, reducing the gap between different modalities is of utmost importance. Image-to-text conversion is an emerging field and employs encoder-decoder architectures. Deep CNNs extract the features of images, and sequence …to sequence models are used to generate text descriptions. This paper is a contribution towards the growing body of research in multimodal information retrieval. In order to generate textual descriptions of images, we have performed 5 experiments using the benchmark Flickr8k dataset. In these experiments we have utilized different architectures: a simple sequence to sequence model, an attention mechanism, and a transformer-based architecture, to name a few. The results have been evaluated using the BLEU score. Results show that the best descriptions are attained by making use of the transformer architecture. We have also compared our results with the pretrained visual model vit-gpt2, which incorporates a vision transformer.
Keywords: Multimodal, captioning, summarization, etc
DOI: 10.3233/JIFS-219394
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Kostiuk, Yevhen | Tonja, Atnafu Lambebo | Sidorov, Grigori | Kolesnikova, Olga
Article Type: Research Article
Abstract: In this paper, we investigate the issue of hate speech by presenting a novel task of translating hate speech into non-hate speech text while preserving its meaning. As a case study, we use Spanish texts. We provide a dataset and several baselines as a starting point for further research in the task. We evaluated our baseline results using multiple metrics, including BLEU scores. We used a cross-validation approach and an average of the metrics per fold for evaluation. We achieved a 0.236 sentenceBLEU score on four folds. This study aims to contribute to developing more effective methods for reducing the …spread of hate speech in online communities.
Keywords: Hate speech, translation, Spanish
DOI: 10.3233/JIFS-219348
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: George, Neetha | Ramachandran, Sivakumar | Jiji, C.V.
Article Type: Research Article
Abstract: The macula is the part of the retina responsible for sharp and clear vision. Macular edema is caused by the accumulation of intraretinal fluid (IRF) in the macula, which is further distinguished by the compromised integrity of the blood-retinal barrier, particularly evident in the retinal vasculature. This results in swelling that may lead to vision impairment and is the dominant sign of several ocular diseases, including age-related macular degeneration and diabetic retinopathy. Quantitative analysis of the fluid regions in macular edema helps in ascertaining the severity of, as well as the response to treatment of, these diseases. Optical coherence tomography (OCT) is a …major tool used by ophthalmologists for visualizing edema. The prevalent practice for diagnosing and treating macular edema involves measuring Central Retinal Thickness (CRT). Segmenting the IRF in OCT images offers the potential for a more accurate quantification of macular edema. This paper proposes a novel method combining a convolutional neural network (CNN) and an active contour model for segmenting the IRF to ascertain the severity of macular edema. The IRF region is initially segmented using an encoder-decoder architecture. Contour evolution is then performed on this segmented image to demarcate the IRF boundaries. The advantage of the method is that it does not require precisely labeled images for training the CNN. A comparison of the experimental results with models employing a CNN alone and with other state-of-the-art methods demonstrates the superior performance and consistency of the proposed method.
Keywords: edema segmentation, convolutional neural network, active contour model
DOI: 10.3233/JIFS-219401
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-9, 2024
Authors: Wu, Donghui | Wang, Jinfeng | Zhao, Wanwan | Geng, Xin | Liu, Guozhi | Qiu, Sen
Article Type: Research Article
Abstract: Gesture recognition based on wearable sensors has received extensive attention in recent years. This paper proposes a gesture recognition model (CGR_ATT) based on a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU) fused with an attention mechanism to improve the recognition accuracy of wearable sensor data. First, the CNN serves as a feature extractor, learning features automatically from sensor data by performing multiple layers of convolution and pooling operations and capturing the spatial features of gestures. Furthermore, a temporal modeling unit, the GRU, is introduced to capture the temporal dynamics in gesture sequences. By controlling the information flow through gate mechanisms, it effectively handles the temporal relationships in …sensor data. Finally, an attention mechanism is introduced to assign different weights to the hidden states of the GRU. By calculating the attention weights for each time period, the model automatically selects the key time periods related to gesture movements. The GR-dataset proposed in this paper involves 910 sets of training parameters. The model achieves an ultimate accuracy of 97.57%. Compared with CLA-net, CLT-net, CGR, GRU, LSTM and CNN, the experimental results demonstrate that the proposed method has superior accuracy.
Keywords: Wearable gesture recognition system, CGR_ATT model, deep learning, wearable devices
DOI: 10.3233/JIFS-240427
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
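The attention step described in the CGR_ATT abstract (weighting per-timestep GRU hidden states, then pooling them into one context vector) can be sketched as follows. The fixed scoring vector `w` here is a stand-in for the learned attention parameters:

```python
import math

def attention_pool(hidden, w):
    """Dot-product attention over per-timestep hidden states (illustrative;
    in CGR_ATT the weights are learned, here `w` is a fixed scoring vector).

    hidden: T x D list of hidden-state vectors; w: length-D scoring vector.
    Returns (context vector, attention weights summing to 1).
    """
    scores = [sum(hi * wi for hi, wi in zip(h, w)) for h in hidden]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]        # numerically stable softmax
    total = sum(exps)
    alpha = [e / total for e in exps]
    d = len(hidden[0])
    context = [sum(a * h[k] for a, h in zip(alpha, hidden)) for k in range(d)]
    return context, alpha
```

Timesteps whose hidden states align with the scoring vector dominate the context vector — the mechanism the abstract uses to "select key time periods related to gesture movements."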
Authors: Visvanathan, P. | Durai Raj Vincent, P.M.
Article Type: Research Article
Abstract: A stroke is a sudden loss of blood circulation in certain parts of the brain that results in a loss of neurological function. To save a patient from stroke, an immediate diagnosis and treatment plan must be implemented. Artificial intelligence-based machine learning algorithms play a major role in the prediction. To predict whether a person is likely to have a stroke, stroke healthcare records must be accessed, and these are very sensitive. Data shared for machine learning training pose security risks and raise concerns about privacy. To overcome this issue, a Genetic Algorithm and Federated Learning (GA-FL)-based hybridization approach is proposed to …predict the risk of stroke in a person. Federated Learning was developed by Google and can provide security to the data during the training process, because every client participating in the training needs to exchange only the training parameters without sharing the data. In addition to the security features, a genetic algorithm is used to optimize the parameters required to train a model using the perceptron neural network model. The experimental results show that our proposed research model (GA-FL) provides security and predicts the risk of stroke more accurately than the other existing algorithms.
Keywords: Federated learning, genetic algorithm, stroke risk, perceptron neural network
DOI: 10.3233/JIFS-236354
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
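The privacy argument in the GA-FL abstract rests on federated averaging: clients exchange only model parameters, never their raw health records. A minimal FedAvg round might look like the sketch below (generic; the paper's GA-FL additionally tunes hyperparameters with a genetic algorithm, which is not shown):

```python
def fed_avg(client_weights, client_sizes):
    """One round of federated averaging (generic FedAvg sketch).

    Only model parameters leave the clients; the server weights each client's
    parameter vector by its local sample count and averages.
    client_weights: per-client parameter vectors (equal length);
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[k] * n for w, n in zip(client_weights, client_sizes)) / total
        for k in range(dim)
    ]
```

A client holding three times as much data pulls the global model three times as hard toward its local parameters, while its patient records never leave the device.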
Authors: Hu, Junhua | Zhou, Yingling | Li, Huiyu | Liang, Pei
Article Type: Research Article
Abstract: To enhance infectious disease interval prediction, an improved model is proposed by integrating neighborhood fuzzy information granulation (NNIG) and a spatial-temporal graph neural network (STGNN). The NNIG model can efficiently extract the most representative features from the time-series data and identify the supporting upper and lower bounds. It transfers time-series data from the numerical level to the granular level and feeds the processed data into the STGNN for interval prediction. Finally, experiments are conducted for evaluation based on COVID-19 data. The results demonstrate that the NNIG outperforms baseline models. Further, it proves beneficial in offering a valuable approach for …policy-making.
Keywords: Time series, fuzzy information granulation, interval prediction, spatial-temporal graph neural network
DOI: 10.3233/JIFS-236766
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Hossain, AKM B. | Salam, Md. Sah Bin Hj. | Alam, Muhammad S. | Hossain, AKM Bellal
Article Type: Research Article
Abstract: Semantic segmentation is crucial for the treatment and prevention of brain cancers. Several neural network-based strategies have rapidly been presented by research groups to enhance brain tumor segmentation. The tumor's uneven form necessitates the use of neural networks for its detection, and improved patient outcomes may be achieved with precise segmentation of brain tumors. Brain tumors can range widely in size, form, and position, making diagnosis difficult. Thus, this work offers a Multi-level U-Net (MU-Net) approach that analyzes brain tumor data augmentation for improved segmentation. A significant amount of data augmentation is employed to successfully train the recommended …system, removing the problem of a lack of data when using MR images for the diagnosis of multi-grade brain cancers. Here, we present the "Multi-Level Pyramidal Pooling (MLPP)" component, in which a new pyramidal pool is employed to capture contextual data for augmentation. The "High-Grade Glioma" (HGG) datasets from Kaggle and BraTS2021 were used to assess the proposed MU-Net. Overall Tumor (OT), Enhancing Core (EC), and Tumor Core (TC) were the three main designations to be segmented. The Dice score was used to compare the results empirically. The suggested MU-Net fared better than most existing methods. Researchers in the fields of bioinformatics and medicine might greatly benefit from the high-performance MU-Net.
Keywords: Brain tumor, Data Augmentation (DA), Multi-level U-Net (MU-Net), Multi-Level Pyramidal Pooling (MLPP), Adaptive Curvelet Transform (ACT), wavelet threshold
DOI: 10.3233/JIFS-232782
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Wu, Jie | Hou, Mengshu
Article Type: Research Article
Abstract: Table-based fact verification (TFV) is a binary classification task that requires understanding and reasoning about both tables and text. This task poses many challenges, such as table parsing, text comprehension, and numerical reasoning. However, existing methods tend to depend solely on pre-trained models for tables, treating all types of reasoning equally and disregarding the importance of identifying logic types in the inference process. In this regard, we propose MoETFV, an efficient and explanatory approach to solving TFV based on a Mixture-of-Experts (MoE) framework. This approach can detect the underlying logic types of statements and leverage multiple independent experts to …emulate diverse logical reasoning. It consists of one shared expert for general semantic understanding and several specific experts with distinct responsibilities for different logical inferences. Moreover, the practical applications of the MoE method in TFV are thoroughly investigated. The model does not necessitate any table pre-trained models and aligns closely with human cognitive processes in addressing such issues. Experimental results demonstrate the innovation and feasibility of the proposed approach.
Keywords: Tabular data, fact verification, mixture-of-experts, logical reasoning, natural language processing
DOI: 10.3233/JIFS-238142
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
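The mixture-of-experts routing described in the MoETFV abstract can be sketched abstractly: a gate scores the input per expert, softmax-normalizes the scores, and mixes the expert outputs. The toy expert functions and gating vectors here are placeholders; MoETFV's experts are neural modules, and it keeps one always-on shared expert alongside the specialized ones:

```python
import math

def moe_output(x, experts, gate_w):
    """Soft mixture-of-experts combination (generic sketch, not MoETFV itself).

    x: input feature vector; experts: list of functions mapping x -> score;
    gate_w: one gating weight vector per expert, scored against x by dot
    product and softmax-normalized into mixing coefficients.
    """
    logits = [sum(xi * wi for xi, wi in zip(x, w)) for w in gate_w]
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]      # stable softmax
    total = sum(exps)
    gates = [e / total for e in exps]
    # weighted sum of expert outputs
    return sum(g * f(x) for g, f in zip(gates, experts))
```

With a confident gate, the output collapses to the single expert matching the detected "logic type"; with an uncertain gate, experts blend smoothly.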
Authors: Chen, Longkai | Huang, Jingjing
Article Type: Research Article
Abstract: Urban traffic accidents pose a significant threat to public safety because of their frequent occurrence and potential for severe injuries and fatalities. Hence, an effective analysis of accident patterns is crucial for designing accident prevention strategies. Recent advancements in data analytics have provided opportunities to improve the analysis of urban traffic accident patterns. However, existing works face several challenges in adapting to the complex dynamics and heterogeneity of accident data. To overcome these challenges, we propose an innovative solution combining K-means clustering and a Support Vector Machine to precisely predict traffic accident patterns. By leveraging the efficiencies of …the clustering technique and machine learning, this work intends to identify the intricate patterns within the traffic database. Initially, a traffic accident database is collected and fed into the system. The collected database is pre-processed to improve and standardize the raw data. Cluster analysis is then employed to identify distinct patterns within the dataset and group similar accidents into clusters. This clustering enables the system to recognize common accident scenarios and identify recent accident trends. Subsequently, a Support Vector Machine is deployed to classify accidents into distinct categories through intensive training on the identified clusters. The combination enables the system to understand the complex relationships among diverse accident variables, making it an effective framework for real-time pattern recognition. The proposed strategy is implemented in Python and validated using a publicly available traffic accident database. The experimental results show that the proposed method achieves 99.65% accuracy, 99.53% precision, 99.62% recall, and a 99.57% f-measure.
Finally, a comparison with existing techniques shows that the developed strategy offers improved accuracy, precision, recall, and f-measure.
Keywords: Support vector machine, traffic accident pattern recognition, cluster analysis, machine learning
DOI: 10.3233/JIFS-241018
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
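The cluster-then-classify pipeline in the abstract above can be illustrated with a bare-bones version: Lloyd's k-means with deterministic initialization, followed by assignment of new points to the nearest centroid. The paper trains an SVM on the identified clusters; this sketch substitutes nearest-centroid assignment for brevity, so it is the shape of the pipeline, not the paper's method:

```python
import math

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means (sketch of the clustering stage).

    points: list of coordinate tuples. The first k points seed the centroids
    deterministically, which is fine for a demo but not for production use.
    """
    centroids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        centroids = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids

def assign(p, centroids):
    """Classify a point by its nearest cluster centroid (stand-in for the SVM)."""
    return min(range(len(centroids)), key=lambda c: math.dist(p, centroids[c]))
```

Two well-separated accident "scenarios" in feature space end up as two centroids, and unseen points are routed to the matching scenario.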
Authors: Liu, Fei
Article Type: Research Article
Abstract: In China, aesthetic education at the college level is essential for students' quality because it improves their understanding of art, helps them progress in their professional career development, and helps them comprehend more fully the attractiveness of creative works. As a result, institutions need to prioritize aesthetic education and endeavor to nurture students' feelings progressively and improve their aesthetic abilities at different levels. Artificial intelligence (AI) is used in this project to create a novel, interdisciplinary teaching technique that will maximize students' artistic and intellectual potential and help them make more and better art. In this research, the …Osprey Optimization Method improves the interdisciplinary teaching technique for aesthetic education based on a light exclusive gradient-boosting mechanism (OOM-LEGBM). The exploration-exploitation dynamics of the OOM are incorporated into the LEGBM, providing students with a tangible and relatable way to understand complex problem-solving processes. This research develops an enhanced quality framework for college aesthetic education based on a multi-model data fusion system, addressing the implications and necessity of aesthetic education. The influence of college aesthetic education on students' creative capacity and artistic literacy was investigated to better inform instructional activities that develop students' aesthetic skills. The experimental findings suggest that the proposed approach achieved an improved accuracy of 99.90%, higher precision of 99.88%, and greater recall of 99.91%. Moreover, it obtained a minimum Root Mean Square Error (RMSE) of 0.26% and a lower Mean Absolute Error (MAE) of 0.34%, showing that the suggested model greatly improved preference learning accuracy while keeping overall accuracy at the same level.
Building innovation capacity in college aesthetic education can help students become more self-aware, improve their study habits, and become more visually literate and well-rounded.
Keywords: Interdisciplinary teaching, aesthetic education, curriculum, multimodal data fusion, artificial intelligence, and big data
DOI: 10.3233/JIFS-240723
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Zhou, Yancong | Xu, Chenheng | Chen, Yongqiang | Li, Shanshan | Guo, Zhen
Article Type: Research Article
Abstract: Due to the complexity of the products of the ethanol coupling reaction, the C4 olefin yield tends to be low, and finding the optimal ethanol reaction conditions requires repeated manual experiments. In this paper, a novel learning framework based on the least squares support vector machine and the tree-structured Parzen estimator is proposed to solve the optimization problem of C4 olefin production conditions, and the Shapley value is introduced to improve the interpretability of the modeling method. The experimental results show that the proposed learning framework can obtain the combination of ethanol reaction conditions that maximizes the C4 olefin yield; it is nearly 17.30% …higher than the current highest yield of 4472.81% obtained from manual experiments.
Keywords: C4 olefin production, complex problem optimization, model interpretability, LSSVM, SHAP, TPE
DOI: 10.3233/JIFS-235144
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Muthu Thiruvengadam, P. | Gnanavadivel, J.
Article Type: Research Article
Abstract: Power solutions have become indispensable for all devices in recent years, with appropriate power conversion circuitry and control methods to ensure good dynamic response and improved stability, reliability and efficiency. The main intent of this article is to present the design of an interval type-2 fuzzy logic controller (IT2FLC) based interleaved SEPIC power factor correction (PFC) converter. This work also involves the careful design of a robust controller with enhanced precision and good power quality (PQ) performance at the AC mains. In addition, the development of the IT2FLC-based power solution improves the overall power conversion with stabilized output in …the perspective of its quick rise time, low overshoot and fast settling time in comparison to other traditional controllers. Further, the uncertainties and issues associated with conventional proportional integral (PI) and fuzzy logic controllers (FLCs) are handled effectively by the proposed IT2FLC controller. Moreover, the preferred converter is modeled with internal parasitics, and its performance is evaluated and compared with a conventional Ziegler-Nichols (ZN) tuned PI controller and an FLC on the MATLAB/Simulink platform. Finally, an experimental test bench of the 250 W, 48 V power circuitry is set up, and the test outcomes confirm the excellent transient behavior and PQ performance of the modeled power solution.
Keywords: Power quality, interval type-2 fuzzy logic controller, total harmonic distortion, power factor correction, discontinuous conduction mode and continuous conduction mode
DOI: 10.3233/JIFS-230325
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Belal, Mohamad Mulham | Sundaram, Divya Meena
Article Type: Research Article
Abstract: Visualization-based malware detection is getting more and more attention for detecting sophisticated malware that traditional antivirus software may miss. The approach involves creating a visual representation of the memory or portable executable files (PEs). However, most current visualization-based malware classification models focus on convolutional neural networks rather than vision transformers (ViT), even though ViT offers higher performance and captures the spatial representation of malware. Therefore, more research should be performed on malware classification using vision transformers. This paper proposes a multi-variants vision transformer-based malware image classification model using multi-criteria decision-making. The proposed method employs multi-variants transformer encoders to show different …visual representation embedding sets of one malware image. The proposed architecture contains five steps: (1) patch extraction and embedding, (2) positional encoding, (3) multi-variants transformer encoders, (4) classification, and (5) decision-making. The variants of the transformer encoders are transfer learning-based models, i.e., originally trained on the ImageNet dataset. Moreover, the proposed malware classifier employs MEREC-VIKOR, a hybrid standard evaluation approach that combines multiple inconsistent performance metrics. The performance of the transformer encoder variants is assessed both on individual malware families and across the entire set of malware families within two datasets, the MalImg and Microsoft BIG datasets, achieving overall accuracies of 97.64 and 98.92, respectively. Although the proposed method achieves high performance, the metrics exhibit inconsistency across some malware families. The results of the standard evaluation metrics Q, R, and U show that TE3 outperforms the TE1, TE2, and TE4 variants, achieving minimal values equal to 0.
Finally, the proposed architecture demonstrates performance comparable to state-of-the-art approaches that use CNNs.
Keywords: Vision transformer, MCDM, VIKOR, MEREC, image malware classifier
DOI: 10.3233/JIFS-235154
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-21, 2024
Authors: Wang, R | Yu, F.S | Zhao, L.Y
Article Type: Research Article
Abstract: This paper demonstrates a fuzzy decentralized dynamic surface control (DSC) scheme for switched large-scale interconnected nonlinear systems under arbitrary switching, which contain a non-strict feedback form and unknown input saturation uncertainties. An auxiliary design system is established to handle input saturation. Uncertainties of the non-strict feedback form are learned by fuzzy logic system (FLS) approximators, and the DSC method is designed to overcome the "explosion of complexity" inherent in the repeated differentiation of virtual controllers in the backstepping approach. It is shown that, based on a common Lyapunov function (CLF) design and analysis scheme, all the closed-loop system signals are uniformly ultimately bounded (UUB); simulation results are provided …to demonstrate the effectiveness of the proposed strategy.
Keywords: DSC scheme, large-scale switched nonlinear systems (LSSNs), input saturation, non-strict feedback (NSF) form
DOI: 10.3233/JIFS-238024
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Hassan, Shabbir
Article Type: Research Article
Abstract: The CPU scheduling technique influences the performance and efficiency of operating systems. The round-robin scheduling algorithm is ideal for time-shared systems, but it is not optimal for real-time operating systems since it yields more context switching, longer waiting time, and high turnaround time. The performance of the algorithm is predominantly influenced by the designated time quantum; however, determining a suitable time quantum is extremely challenging. This paper presents a CPU scheduling algorithm that provides a better tradeoff between waiting time, turnaround time, response time, and number of context switches by using a hypothesis-based quanta generation approach. It combines the CPU burst requirements of actual processes with some noisy data and plots them against the presumed CPU quanta to get quanta densities, so that a polynomial regression model can fit the data points with the highest adjusted R-squared. Then, applying inferential statistics, the required quantum is obtained. The scheduling is dynamic in nature because it generates the next CPU quantum with reference to the quanta used in the previous cycle and the remaining CPU burst requirements of the processes, and it is also adaptive in nature because, at each cycle, it uses ‘d’ (5, 5, 4, 3, 2) degrees of freedom to calculate the Jarque-Bera statistic to accept or reject the hypothesis. The algorithm is implemented in ‘R’, and its performance has been evaluated on a sample size of five processes with some noisy data; it outperforms conventional RR and significantly reduces the performance parameters mentioned above. Implementing this algorithm in a time-sharing or distributed environment will undoubtedly improve system performance and will help to avoid issues like thrashing and starvation while incorporating aging and CPU affinity. Since the proposed algorithm is work-conserving, it can also be implemented in network packet switching, statistical multiplexing, and real-time systems.
Keywords: Adaptive scheduling, context switching, CPU burst, Jarque-Bera, kernel density estimation, kurtosis, quanta, thrashing
DOI: 10.3233/JIFS-238624
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
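The abstract's hypothesis-driven quantum generation (polynomial regression plus a Jarque-Bera test) is not reproduced here, but the dynamic-quantum idea can be sketched with a much simpler stand-in rule: recompute the quantum at the start of each round-robin cycle from the remaining bursts (here, their median). The quantum rule and process data below are illustrative assumptions, not the paper's algorithm.

```python
from statistics import median

def dynamic_rr(bursts):
    """Round-robin where each cycle's quantum is the median remaining burst.

    All processes are assumed to arrive at t = 0.
    Returns (waiting_times, turnaround_times) per process.
    """
    remaining = list(bursts)
    completion = [0] * len(bursts)
    t = 0
    while any(r > 0 for r in remaining):
        # stand-in for the paper's regression/hypothesis-based quantum
        q = max(1, round(median(r for r in remaining if r > 0)))
        for i, r in enumerate(remaining):
            if r <= 0:
                continue
            run = min(q, r)          # run up to one quantum
            t += run
            remaining[i] -= run
            if remaining[i] == 0:
                completion[i] = t    # process i finishes at time t
    waiting = [completion[i] - bursts[i] for i in range(len(bursts))]
    return waiting, completion       # turnaround == completion when arrival = 0

waiting, turnaround = dynamic_rr([5, 3, 8])
```

With bursts (5, 3, 8) the first cycle uses quantum 5, finishing the first two processes immediately, and the remainder completes in one more cycle, illustrating how a burst-aware quantum cuts context switches relative to a small fixed quantum.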
Authors: Alqaissi, Eman | Alotaibi, Fahd | Ramzan, Muhammad Sher | Algarni, Abdulmohsen
Article Type: Research Article
Abstract: The influenza virus can spread easily, causing significant public health concern. Despite the existence of different techniques for rapid detection and prevention of influenza, their efficiency varies significantly. Additionally, there is currently a lack of a comprehensive, interoperable, and reusable real-time model for detecting influenza infection and predicting relationships within the field of influenza analysis. This study proposes a comprehensive, real-time model for rapid and early influenza detection using symptoms. Further, new relationships in the influenza field were discovered. Multiple data sources were used for the influenza knowledge graph (KG). Throughout this study, various graph algorithms were utilized to extract significant node and relationship features, and multiple influenza detection machine learning (ML) models were compared. Node classification and link prediction methods were employed on a multi-layer perceptron (MLP) model. Furthermore, the hyperparameters of the model were automatically tuned. The proposed MLP model demonstrated the lowest rate of loss and the highest specificity, accuracy, recall, precision, and F1-score compared to state-of-the-art ML models. Moreover, the Matthews correlation coefficient was promising. This study shows that graph data science can improve MLP model detection and assist in discovering hidden connections in the influenza KG.
Keywords: Influenza detection, knowledge graph, graph multi-layer perceptron model, graph algorithms, automatic tuning, real-time analysis
DOI: 10.3233/JIFS-233381
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-22, 2024
Authors: Chen, Sian | Zuo, Yajuan | Wang, Rui
Article Type: Research Article
Abstract: Traditional rule-based and statistical methods have limitations when dealing with complex language structures and semantics. In neural network machine translation algorithms, the objective function is usually to improve n-gram accuracy. However, this does not guarantee a more natural and accurate translation. To overcome these challenges, this paper proposes an optimization algorithm for English natural translation processing based on neural networks, which combines Generative Adversarial Network (GAN) and Transformer models. In the GAN, the generative model uses the Transformer model to generate false samples, while the discriminative model uses a binary classifier based on convolutional neural networks and attention mechanisms to distinguish between true and false samples. During the training process, reinforcement learning algorithms are added to evaluate and adjust the generated sentences, and the parameters of the generative model are updated. The classification results of the discriminative model are used together with the Bilingual Evaluation Understudy (BLEU) objective function to evaluate false samples, and the results are fed back to the generative model to guide parameter updates and optimization. Extensive experiments were conducted on a standard English-Chinese machine translation dataset to evaluate our method. Compared with a benchmark model that only uses supervised learning methods, our neural network-based optimization algorithm for English natural translation processing achieves significant improvements in translation quality. According to statistical comparison, compared with the Transformer model (BLEU = 33.63 and AP = 90%) and the deep learning model based on long short-term memory (BLEU = 30.26 and AP = 83%), the proposed GAN and Transformer framework exhibits better performance in BLEU (34.35) and accuracy (AP = 95%).
Keywords: Artificial neural network, English translation, GAN, generator, discriminator, transformer model, Adam optimization algorithm, reinforcement learning method
DOI: 10.3233/JIFS-237181
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Kannan, Jeevitha | Jayakumar, Vimala | Pethaperumal, Mahalakshmi | Shanmugam, Nithya Sri
Article Type: Research Article
Abstract: Every day, the globe becomes more contemporary and industrialized. As a result, the number of vehicles and engines is growing. However, the energy sources utilized in these engines are scarce and dwindling over time. This circumstance prompts the search for alternative fuels. As civilization develops, transportation becomes a need for daily living. The largest issue is the diminishing supply of fossil fuels and the expanding population. As a result, everyone needs alternative energy sources for their automobiles. Therefore, in this investigation, we identify the best substitute for petrol. We offer a similarity measure (SM) for a hybrid structure of a Linear Diophantine Multi-Fuzzy Soft Set (LDMFSS) with the goal of resolving this issue. Because the range of grade values has been expanded, decision-makers now have greater freedom in selecting their grades. An exemplary case study is illustrated that shows the appropriateness of our recommended approach. A comparative analysis is provided to show that the outcomes of the proposed method are more achievable and beneficial than those of the existing methodologies. Additionally, its applicability and attainability are evaluated by comparing its structure to those of the already used procedures.
Keywords: Linear diophantine multi-fuzzy soft set, similarity measures, fossil fuels, alternative fuel, fuel specifications
DOI: 10.3233/JIFS-219415
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
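The LDMFSS similarity measure itself is not defined in the abstract, so as a hedged stand-in the sketch below uses a generic distance-based similarity over membership-grade matrices to show how candidate fuels could be ranked against an ideal profile. The fuel names, criteria, and every grade are invented for illustration.

```python
def similarity(a, b):
    """1 minus the mean absolute difference between two grade matrices in [0, 1]."""
    flat = [(x, y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return 1 - sum(abs(x - y) for x, y in flat) / len(flat)

# Hypothetical membership grades on (emissions, availability, cost) criteria
ideal = [[1.0, 1.0, 1.0]]
fuels = {
    "CNG":      [[0.8, 0.7, 0.6]],
    "ethanol":  [[0.7, 0.8, 0.7]],
    "hydrogen": [[0.9, 0.4, 0.3]],
}
# the alternative most similar to the ideal profile wins
best = max(fuels, key=lambda k: similarity(fuels[k], ideal))
```

Ranking by similarity to an ideal alternative is the same decision pattern the abstract describes, just with a deliberately simple measure in place of the paper's LDMFSS-specific one.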
Authors: Bai, Hao | Wang, Wubin | Tang, Hao | Li, Xin | Zhao, Yinting | Lv, Dongqin
Article Type: Research Article
Abstract: This study utilized several coupled approaches to create powerful algorithms for forecasting the compressive strength (Cs) of concretes that include metakaolin (MK) and fly ash (FA). For this purpose, three methods were considered, namely random forests (RF), the categorical boosting model (CB), and extreme gradient boosting (XGB), using the seven most influential input variables. The concrete components were divided by the binder value (B) to obtain non-dimensional input variables. Herein, the cutting-edge Tasmanian Devil Optimization (TDO) algorithm was linked with RF, XGB, and CB to determine the optimal values of the hyperparameters (named TD-CB, TD-RF, and TD-XG). It is worth mentioning that developing the mentioned TDO-optimized algorithms to estimate the mechanical properties of concrete containing several important admixtures can be recognized as this study’s contribution to practical applications. The findings indicate that the algorithms possess a notable capacity to precisely forecast the Cs of concrete that includes MK and FA, with R² greater than roughly 0.97. The lowest value of the comprehensive OBJ index belonged to TD-CB at 1.5762, followed by TD-XG at 1.9943 and then TD-RF at 2.3317, an almost 70% reduction. The sensitivity analysis demonstrated that the prediction of Cs is highly influenced by all input parameters, with sensitivities higher than 0.8659, but the highest influence comes from MK/B at 0.9548.
Keywords: Modified concrete, metakaolin, fly ash, unary and binary mix, estimation, categorical boosting
DOI: 10.3233/JIFS-242189
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Atef, Shimaa | El-Seidy, Essam | Abd El-Salam, Salsabeel M.
Article Type: Research Article
Abstract: Relatedness is necessary and causal in the development of social life. Interlayer relatedness is a measure of how one player’s decisions affect the decisions of other players in the game. The relatedness can be positive or negative. We determine how effective each strategy is under specific conditions, and how the correlation between players affects their payoffs. In this paper, we analytically study the strategies that enforce linear payoff relationships in the Iterated Prisoner’s Dilemma (IPD) game considering a relatedness factor. As a result, we first reveal that the payoffs of two players and three players can be represented in the form of determinants, as shown by Press and Dyson, even with this factor.
Keywords: Equalizer, iterated prisoner’s dilemma (IPD), relatedness, two-player, three-player, zero-determinant strategies (ZD)
DOI: 10.3233/JIFS-239406
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
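For a memory-one iterated game, long-run payoffs can be computed from the stationary distribution of the joint-outcome Markov chain; Press and Dyson showed the same quantity has a determinant form. The sketch below omits the relatedness factor the paper studies and, for simplicity, indexes both strategies by the states (CC, CD, DC, DD) from player 1's view (Press and Dyson swap CD/DC for player 2), so it is an illustrative simplification rather than the paper's formulation.

```python
def ipd_payoff(p, q, payoffs=(3, 0, 5, 1), iters=5000):
    """Long-run IPD payoff of player 1 for memory-one strategies p and q.

    p[i], q[i]: probability of cooperating after joint outcome i, with
    outcomes ordered (CC, CD, DC, DD). payoffs = (R, S, T, P).
    Returns (stationary distribution, expected payoff of player 1).
    """
    R, S, T, P = payoffs
    # 4x4 transition matrix of the joint-outcome Markov chain
    M = []
    for i in range(4):
        a, b = p[i], q[i]          # cooperation probabilities this round
        M.append([a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)])
    # stationary distribution via power iteration (chain must be ergodic,
    # which holds for the noisy strategies used here)
    pi = [0.25] * 4
    for _ in range(iters):
        pi = [sum(pi[i] * M[i][j] for i in range(4)) for j in range(4)]
    return pi, pi[0] * R + pi[1] * S + pi[2] * T + pi[3] * P

# two nearly-always-defect players: payoff should approach P = 1
pi, value = ipd_payoff([0.01] * 4, [0.01] * 4)
```

Because both strategies here cooperate with only 1% probability, almost all stationary mass sits on mutual defection and the long-run payoff lands just above the punishment payoff P = 1.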
Authors: Zhong, Qiao | Zou, Fang | Zhong, Ling
Article Type: Research Article
Abstract: Traditional fuzzy decision-making methods still have certain limitations in practical applications, such as the possibility that the sum of attribute memberships and non-memberships exceeds 1. Additionally, because the attributes in real decision-making processes are often not mutually independent but rather exhibit a certain degree of correlation, traditional fuzzy decision-making methods may not fully capture and express this complexity. To overcome these limitations, this paper proposes a new multi-attribute decision-making method addressing the problem of integrating information with correlated attributes in the generalized spherical fuzzy environment. Initially, by combining the generalized spherical fuzzy set with the Heronian averaging operator, the paper introduces the generalized spherical fuzzy weighted Heronian averaging operator and thoroughly discusses some valuable properties of both operators, providing corresponding proofs. Furthermore, the paper proposes a multi-attribute decision-making method using the generalized spherical fuzzy weighted Heronian averaging operator, enriching not only the theoretical framework of multi-attribute decision-making methods but also offering more possibilities for practical applications. Finally, the application of this method in the field of commercial bank lending decision-making is further explored to enhance the accuracy and efficiency of credit decisions, reduce risks, and promote the healthy development of the banking industry.
Keywords: Heronian mean operator, generalised spherical fuzzy Heronian operator, multi-attribute decision-making
DOI: 10.3233/JIFS-241113
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
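The Heronian mean aggregates arguments pairwise, which is why it can capture the attribute correlations the abstract highlights. For crisp numbers in [0, 1], the generalized Heronian mean with parameters p and q is a useful reference point before extending the operator to generalized spherical fuzzy values; the fuzzy-valued operator itself is not reproduced here.

```python
def heronian_mean(values, p=1, q=1):
    """Generalized Heronian mean HM^{p,q} of crisp values in [0, 1].

    HM^{p,q}(a_1..a_n) = ( 2/(n(n+1)) * sum_{i<=j} a_i^p * a_j^q )^{1/(p+q)}
    The pairwise products a_i^p * a_j^q are what model attribute interaction.
    """
    n = len(values)
    total = sum(values[i] ** p * values[j] ** q
                for i in range(n) for j in range(i, n))
    return (2 * total / (n * (n + 1))) ** (1 / (p + q))
```

Two sanity properties make it a well-behaved aggregation operator: it is idempotent (all-equal inputs return that value) and bounded between the minimum and maximum input.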
Authors: Pethaperumal, Mahalakshmi | Jayakumar, Vimala | Edalatpanah, Seyyed Ahmed | Mohideen, Ashma Banu Kather | Annamalai, Surya
Article Type: Research Article
Abstract: The global healthcare systems have encountered unparalleled difficulties due to the COVID-19 pandemic, underscoring the crucial significance of effective management within healthcare supply chains. This research contributes to the field of healthcare supply chain management by presenting a robust MADM methodology called lattice ordered (Lq*) q-rung orthopair multi-fuzzy soft set (Lq* q-ROMFS) MADM for supplier evaluation and ranking amidst the challenges posed by the COVID-19 pandemic. Taking inspiration from the multi-fuzzy soft set and the q-rung orthopair fuzzy set, the present research article proposes a novel framework known as the Lq* q-rung orthopair multi-fuzzy soft set (Lq* q-ROMFSS), which incorporates lattice ordering in the q-rung orthopair multi-fuzzy soft set. The effectiveness of the proposed model is confirmed through successful experimentation on various important operations, including union, intersection, complement, and restricted union and intersection. Moreover, De Morgan’s laws are verified for the Lq* q-ROMFSS specifically for the operations mentioned above. To highlight the significance of the proposed Lq* q-ROMFSS, a multi-attribute decision-making (MADM) problem is presented, showcasing its application in the domain of healthcare supply chain management. Furthermore, a comparative analysis is conducted to elucidate the advantages of this model in comparison to existing models.
Keywords: Lattice ordered multi-fuzzy soft set, q-rung orthopair multi-fuzzy soft set, Lq* q-rung orthopair multi-fuzzy soft set, supplier selection, multi-attribute decision-making
DOI: 10.3233/JIFS-219411
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Annamalai, Surya | Jayakumar, Vimala
Article Type: Research Article
Abstract: The hypersoft set (HSS) theory was created by extending soft set (SS) theory. The q-rung linear Diophantine fuzzy set (q-RLDFS) is a major development in fuzzy set (FS) theory. By fusing the q-RLDFS with the HSS, the concept of the q-rung linear Diophantine fuzzy hypersoft set (q-RLDFHSS) is presented in this study. This study also discusses the concepts of the lattice ordered q-RLDFHSS (LOq-RLDFHSS) and the LOq-RLDFHS matrix (LOq-RLDFHSM), as well as some standard operations on LOq-RLDFHSMs. A medical diagnosis methodology based on the LOq-RLDFHSM is proposed to effectively evaluate multi-sub-attributed medical diagnosis problems, along with a diagnosis problem involving patients with comorbidities. Further, a comparative analysis and discussion between the proposed and current theories are given in this study.
Keywords: q-rung linear Diophantine fuzzy set (q-RLDFS), hypersoft set (HSS), lattice, medical diagnosis
DOI: 10.3233/JIFS-219414
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Amrutha Raj, V. | Malu, G.
Article Type: Research Article
Abstract: Deep learning has gained popularity across several industries, including object recognition and classification. In the case of Convolutional Neural Networks (CNNs), the first layers extract the most noticeable elements, such as shape and margin. As the model progresses, it learns to extract more complex features such as texture and color; conversely, skeleton features encompass significant locations (joints) that do not naturally align with the grid-like architecture intended for these networks. This study emphasizes the importance of structural features in enhancing the performance of deep learning models. It introduces the Gesture Analysis Module Network (GAMNet), which computes abstract structural values within the architecture for feature extraction, prioritization, and classification. These values go through a rigorous evaluation process along with the cutting-edge deep learning model, CNN, and result in intermediate representations, leading to better performance in gesture analysis. An automated dance gesture identification system can address the challenges of recognizing hand movements in unpredictable lighting, varied backgrounds, noise, and changing camera angles. Despite these challenges, GAMNet performed remarkably well, surpassing renowned models like VGGNet, ResNet, EfficientNet, and CNN, achieving a classification accuracy of 96.80%, even in challenging image circumstances. This paper highlights how GAMNet can revolutionize the world of classical Indian dance, opening up new opportunities for research and development in this field.
Keywords: Data augmentation, deep architecture, gesture recognition, structural features, skeleton, convolutional neural network
DOI: 10.3233/JIFS-219395
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
Authors: Asthana, Amit | Dwivedi, Sanjay K.
Article Type: Research Article
Abstract: Understanding machine translation (MT) quality is becoming more and more important as MT usage continues to rise in the translation industry. The acceptance of MT outputs based on their performance, and ultimately how acceptable the translators actually are, has received relatively little attention so far. MT plays a vital role in CLIR systems, and their retrieval efficiency is directly proportional to the translation accuracy of the queries. The varied meanings of words, sentences carrying multiple interpretations, and differing grammatical structures across languages contribute to the complexity of the MT task. The lack of structural constraints and the presence of ambiguity further compound the complications, especially in the case of web queries. The objective of this work is to assess the accuracy of free online translators in translating Hindi web queries. The accuracy of the translators has been evaluated on various metrics, i.e., BLEU, NIST, METEOR, hLEPOR, chrF, and GLEU. Our findings indicate that the translation accuracy for longer queries is higher than for shorter ones. Overall, Google Translate’s performance has been found to be the best, while Systran performs the worst, with a 42.06% performance difference between the two. The present work intends to help researchers further evaluate and analyze MT systems, especially in the context of web query translation, ultimately leading to improved translation quality and retrieval accuracy in CLIR.
Keywords: Machine translation, evaluation metrics, Hindi web query
DOI: 10.3233/JIFS-235532
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-10, 2024
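BLEU, the first metric listed above, scores a candidate against a reference by modified n-gram precision with a brevity penalty. The sketch below is a minimal sentence-level version (uniform weights, up to bigrams, single reference); real evaluations use corpus-level BLEU with smoothing, as the standard toolkits implement.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def simple_bleu(candidate, reference, max_n=2):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    (n = 1..max_n, uniform weights) times a brevity penalty."""
    c, r = candidate.split(), reference.split()
    log_p = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(c, n), ngrams(r, n)
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = max(1, sum(cand.values()))
        if overlap == 0:
            return 0.0
        log_p += math.log(overlap / total) / max_n
    # penalize candidates shorter than the reference
    bp = 1.0 if len(c) > len(r) else math.exp(1 - len(r) / max(1, len(c)))
    return bp * math.exp(log_p)
```

A perfect match scores 1.0, while a correct but truncated candidate is pulled below 1.0 purely by the brevity penalty, which is exactly the behavior that makes BLEU sensitive to the query-length effect the abstract reports.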
Authors: Rong, Mansong | Wei, Yuan | Xiao, Zhijun | Peng, Hongchong | Schröder, Kai-Uwe
Article Type: Research Article
Abstract: In order to improve the identification accuracy of bearing fault diagnosis and overcome the training difficulties and poor generalization ability of fault diagnosis models under small-sample conditions, this work constructs an LSTM-GAN model by combining a long short-term memory network (LSTM) with a generative adversarial network (GAN). Firstly, the LSTM is used to build the generator of the generative adversarial network model, and the feature extraction capability of the LSTM is adopted to improve the quality of the generated samples. Then, the convolutional neural network (CNN) is improved to enhance its classification ability, and the improved CNN is used to classify faults. Finally, the CNN and a convolutional autoencoder (CAE) are used to diagnose bearing faults under different working conditions to enhance the diagnostic effect of the model under those conditions. The results show that LSTM-GAN can capture the feature information in the original data well, and the generated samples can improve the accuracy of bearing fault diagnosis under small-sample conditions. The diagnostic model still has high accuracy under different working conditions, which provides support for the research and application of bearing fault diagnosis.
Keywords: Fault diagnosis, data enhancement, variable working conditions, deep learning
DOI: 10.3233/JIFS-240105
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Zhang, Hongling | Zhang, Hongzhi
Article Type: Research Article
Abstract: The qualities of the materials employed to manufacture concrete are significantly impacted by high temperatures, which results in a noticeable decrease in the material’s strength characteristics. Concrete must be worked very hard and allowed to reach the required compressive strength (fc). Nevertheless, a preliminary estimation of the desired outcome may be made with an outstanding degree of reliability by using supervised machine learning algorithms. The study combined the Dingo Optimization Algorithm (DOA), Coot bird optimization (COA), and Artificial rabbit optimization (ARO) with random forests (RF) evaluation to determine the fc of concrete at high temperatures. The abbreviations used for the combined methods are RFD, RFC, and RFA, respectively. Remarkably, removing the temperature (T) parameter from the input set leads to a remarkable 1100% improvement in the effectiveness index (PI) and normalized root mean squared error (NRMSE), while causing a significant fall in the coefficient of determination (R²). The findings suggest that RFD, RFC, and RFA all have substantial promise in properly forecasting the fc of concrete at high temperatures. More precisely, the RFD algorithm demonstrated exceptional precision, with R² values of 0.9885 and 0.9873 during the training and testing stages, respectively. Through a comparison of the error percentages for RFD, RFC, and RFA in error-based measurements, it becomes evident that RFD exhibits an error rate that is about 50% smaller than that of RFC and RFA. This prediction is crucial for various industries and applications where concrete structures are subjected to elevated temperatures, such as in fire resistance assessments for buildings, tunnels, bridges, and other infrastructure. By accurately forecasting the compressive strength of concrete under these conditions, engineers and designers can make informed decisions regarding the material’s suitability and performance in high-temperature environments, leading to enhanced safety, durability, and cost-effectiveness of structures.
Keywords: Concrete, elevated temperature, strength, random forests, Dingo optimization algorithm, sensitivity analysis
DOI: 10.3233/JIFS-240513
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: John, Manu | Mathew, Terry Jacob | Bindu, V.R.
Article Type: Research Article
Abstract: Content-Based Image Retrieval (CBIR) is a technique that involves retrieving similar images from a large database by analysing the content features of the query image. The heavy usage of digital platforms and devices has in a way promoted CBIR and its allied technologies in computer vision and artificial intelligence. The process entails comparing the representative features of the query image with those of the images in the dataset to rank them for retrieval. Past research was centered around handcrafted feature descriptors based on traditional visual features, but with the advent of deep learning the traditional manual method of feature engineering gave way to automatic feature extraction. In this study, a cascaded network is utilised for CBIR. In the first stage, the model employs multi-modal features from variational autoencoders and super-pixelated image characteristics to narrow down the search space. In the subsequent stage, an end-to-end deep learning network known as a Convolutional Siamese Neural Network (CSNN) is used. The concept of pseudo-labeling is incorporated to categorise images according to their affinity and similarity with the query image. Using this pseudo-supervised learning approach, the network evaluates the similarity between a query image and available image samples. The Siamese network assigns a similarity score to each target image, and those that surpass a predefined threshold are ranked and retrieved. The suggested CBIR system undergoes testing on a widely recognized public dataset, the Oxford dataset, and its performance is measured against cutting-edge image retrieval methods. The findings reveal substantial enhancements in retrieval performance in terms of several standard benchmarks, such as average precision, average error rate, and average false positive rate, providing strong support for utilising images from interconnected devices.
Keywords: CBIR, siamese neural networks, deep learning, computer vision, clustering
DOI: 10.3233/JIFS-219396
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Kather Mohideen, Ashma Banu | Jayakumar, Vimala | Pethaperumal, Mahalakshmi | Kannan, Jeevitha
Article Type: Research Article
Abstract: As the globe enters a new era, web applications will become indispensable to managing business. Businesses can easily grow, become simpler, and accomplish their objectives much faster by employing web applications. Creating a web application in cloud computing allows for the more affordable leveraging of cloud-based services. This makes it easier to avoid setting up and maintaining several servers. To get around cloud computing’s built-in restrictions, such as scalability, security, and bandwidth limitations, the future smart world of cloud computing will be coupled with LiFi connectivity. Beyond creating the web application, it is important to promote it among the network of users as quickly and effectively as possible. This manuscript proposes a strategy to address these challenges. There are two primary components to this MCDM technique. The first step is to model the problem as a graph and weigh the edges by employing the Hamacher aggregation operator. The second step involves using a fresh iteration of Kruskal’s technique in conjunction with this approach to discover a minimum spanning tree as a resolution. This manuscript adds to the literature by solving real-world minimum spanning tree problems by combining existing algorithms with MCDM techniques. The technique is demonstrated by marketing a web application (created via a cloud service) in a future smart world using LiFi technology.
Keywords: Cloud computing, LiFi technology, Kruskal’s technique, minimum spanning tree, Hamacher aggregation operator
DOI: 10.3233/JIFS-219423
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
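The two steps described above, fusing per-edge criteria with a Hamacher operator and then running Kruskal's algorithm, can be sketched as follows. The Hamacher product t-norm and the toy three-node graph are illustrative assumptions; the paper's exact aggregation operator and its modified Kruskal variant are not reproduced.

```python
def hamacher(a, b):
    """Hamacher product t-norm T(a, b) = ab / (a + b - ab) for a, b in [0, 1]."""
    return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)

def kruskal(n, edges):
    """Kruskal's MST over nodes 0..n-1; edges are (u, v, weight) triples.

    Returns (total weight, list of chosen (u, v) edges), using union-find
    with path halving to reject cycle-forming edges.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0.0
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            mst.append((u, v))
            total += w
    return total, mst

# hypothetical graph: each edge carries two criteria scores, fused by Hamacher
edges = [(0, 1, hamacher(0.2, 0.3)),
         (1, 2, hamacher(0.5, 0.5)),
         (0, 2, hamacher(0.9, 0.8))]
total, mst = kruskal(3, edges)
```

Fusing the criteria first keeps Kruskal's algorithm untouched: the MCDM step only decides what "edge weight" means, and the graph step then minimizes it as usual.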
Authors: Jenefa, A. | Taurshia, Antony | Edward Naveen, V. | Kuriakose, Bessy M. | Thiyagu, T.M.
Article Type: Research Article
Abstract: In the realm of digital imaging, enhancing low-resolution images to high-definition quality is a pivotal challenge, particularly crucial for applications in medical imaging, security, and remote sensing. Traditional methods, primarily relying on basic interpolation techniques, often result in images that lack detail and fidelity. GANSharp introduces an innovative GAN-based framework that substantially improves the generator network, incorporating adversarial and perceptual loss functions for enhanced image reconstruction. The core issue addressed is the loss of critical information during down-sampling processes. To counteract this, we proposed a GAN-based method leveraging deep learning algorithms, trained using sets of both low- and high-resolution images. Our approach, which focuses on expanding the generator network’s size and depth and integrating adversarial and perceptual loss, was thoroughly evaluated on various benchmark datasets. The experimental results showed remarkable outcomes. On the Set5 dataset, our method achieved a PSNR of 34.18 dB and an SSIM of 0.956. Comparatively, on the Set14 dataset, it yielded a PSNR of 31.16 dB and an SSIM of 0.920, and on the B100 dataset, it achieved a PSNR of 30.51 dB and an SSIM of 0.912. These results were superior or comparable to those of existing advanced algorithms, demonstrating the proposed method’s potential in generating high-quality, high-resolution images. Our research underscores the potency of GANs in image super-resolution, making it a promising tool for applications spanning medical diagnostics, security systems, and remote sensing. Future exploration could extend to the utilization of alternative loss functions and novel training techniques, aiming to further refine the efficacy of GAN-based image restoration algorithms.
Keywords: Adversarial network training, enhanced image generation, image refinement, advanced neural architecture, improved resolution, quality assessment metrics, structural similarity evaluation
DOI: 10.3233/JIFS-238597
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
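PSNR, one of the two quality metrics reported above, is derived directly from the mean squared error against the signal's peak value. A minimal sketch for flat 8-bit pixel arrays is shown below (SSIM, the other metric, requires windowed luminance/contrast/structure statistics and is omitted here):

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel arrays.

    PSNR = 10 * log10(max_val^2 / MSE); identical images give infinity.
    """
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)
```

Because the scale is logarithmic, the roughly 3 dB gap between the Set5 and Set14 scores in the abstract corresponds to about a halving of the mean squared reconstruction error.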
Authors: Wang, Tianxing | Huang, Bing
Article Type: Research Article
Abstract: This paper makes a significant contribution to the field of conflict analysis by introducing a novel Interval-Valued Intuitionistic Fuzzy Three-Way Conflict Analysis (IVIFTWCA) method, which is anchored in cumulative prospect theory. The method’s key innovation lies in its use of interval-valued intuitionistic fuzzy numbers to represent an agent’s stance, addressing the psychological dimensions and risk tendencies of decision-makers that have been largely overlooked in previous studies. The IVIFTWCA method categorizes conflict situations into affirmative, impartial, and adverse coalitions, leveraging the evaluation of the closeness function and predefined thresholds. It incorporates a reference point, value functions, and cumulative weight functions to assess risk preferences, leading to the formulation of precise decision rules and thresholds. The method’s efficacy and applicability are demonstrated through detailed examples and comparative analysis, and its exceptional performance is confirmed through a series of experiments, offering a robust framework for real-world decision-making in conflict situations.
Keywords: Three-way decision, conflict analysis, interval-valued intuitionistic fuzzy sets, cumulative prospect theory
DOI: 10.3233/JIFS-238873
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Pethaperumal, Mahalakshmi | Jayakumar, Vimala | Kannan, Jeevitha | Shanmugam, Nithya Sri
Article Type: Research Article
Abstract: The global challenges associated with urbanization and escalating waste production have been magnified in recent times, particularly in the context of the COVID-19 pandemic. In response to these challenges, municipal authorities, especially in developing nations, are confronted with the imperative task of discerning the most suitable healthcare waste (HCW) disposal methods. These methods are crucial for the effective management of medical waste, both during and after the COVID-19 era. This study introduces a novel similarity measure designed for lattice ordered q-rung orthopair multi-fuzzy soft sets (Lq* q-ROMnFSSs) and explores some of their essential characteristics. Currently, no established methods are available for gauging the similarity of Lq* q-ROMnFSSs. Therefore, this paper takes a pioneering step by presenting similarity measures tailored for Lq* q-ROMnFSSs. Moreover, we propose an evaluation methodology that leverages lattice ordered q-rung orthopair multi-fuzzy soft information to determine the optimal healthcare waste (HCW) disposal approach. This approach seeks to enhance decision-making within the realm of waste management, facilitating more informed and effective choices in handling healthcare waste.
Keywords: Multi-fuzzy soft set, Lq* q-rung orthopair multi-fuzzy soft set, Lq* q-ROMnFS matrix, Lq* q-ROMnFS similarity measures, healthcare waste disposal technique
DOI: 10.3233/JIFS-219412
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
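The paper's Lq* q-ROMnFSS similarity measures are considerably more general, but the underlying idea of a distance-based similarity between fuzzy evaluations can be sketched with plain intuitionistic-fuzzy-style (membership, non-membership) pairs. The specific measure and the data below are illustrative assumptions, not the authors' construction:

```python
def ifs_similarity(A, B):
    """A simple distance-based similarity for two intuitionistic-fuzzy-style
    vectors, where each element is a (membership, non-membership) pair.
    Returns a value in [0, 1]; 1 means the vectors are identical.
    (Illustrative measure only; the paper's Lq* q-ROMnFSS measures differ.)"""
    assert len(A) == len(B)
    dist = sum(
        (abs(ma - mb) + abs(na - nb)) / 2
        for (ma, na), (mb, nb) in zip(A, B)
    ) / len(A)
    return 1.0 - dist

# Hypothetical evaluations of a disposal alternative vs. an ideal profile.
alt = [(0.7, 0.2), (0.5, 0.4)]
ideal = [(0.9, 0.1), (0.6, 0.3)]
s = ifs_similarity(alt, ideal)
```

A decision procedure of this kind would rank alternatives by their similarity to the ideal profile and select the highest-scoring one.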
Authors: Thirugnanasammandamoorthi, Puviyarasi | Kumar, Harsh | Ghosh, Debabrata | Dhasarathan, Chandramohan | Dewangan, Ram Kishan
Article Type: Research Article
Abstract: Sentiment analysis is a method of analyzing emotions in text using natural language processing techniques. It draws on data from various sources to identify a user's attitude across different aspects, and it is widely used for extracting opinions and recognizing sentiments, which helps business organizations understand users' needs. This paper proposes a simple but compelling sentiment analysis method that computes a combined score based on positive and negative words; tweets are then categorized as Neutral, Negative, or Positive according to the score. Sentiment analysis and opinion mining have grown significantly in the last decade, with studies in this domain attempting to determine people's feelings, opinions, and emotions about something or someone. The main objective of this analysis is to determine the sentiment of a review using a machine learning model and then compare the result with a manual review of the data, allowing researchers to represent and analyze opinions objectively across different domains. A hybrid method that combines a supervised machine learning algorithm with natural language processing techniques is suggested for review analysis. This project aims to find the best model for predicting the sentiment of tweets about airlines. While surveying candidate methods and variables, we found that methods such as naïve Bayes and random forest had not been fully explored. The proposed system provides an effective and more feasible method for sentiment analysis using machine learning, MultinomialNB, linear regression, and regular expressions.
Keywords: Sentiment analysis, machine learning, regular expression, multinomialNB, public sentiments, social media analysis
DOI: 10.3233/JIFS-219417
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
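The word-score step described in the abstract above (sum positive and negative word hits, then threshold into Positive/Negative/Neutral) can be sketched roughly as follows; the lexicons and the zero threshold are illustrative assumptions, not the authors' actual word lists:

```python
# Minimal sketch of lexicon-based tweet scoring (illustrative lexicons,
# not the authors' actual resources).
POSITIVE = {"good", "great", "love", "excellent", "helpful"}
NEGATIVE = {"bad", "delayed", "lost", "terrible", "rude"}

def score_tweet(text: str) -> int:
    """Combined score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def label_tweet(text: str) -> str:
    """Categorize a tweet as Positive, Negative, or Neutral by its score."""
    s = score_tweet(text)
    if s > 0:
        return "Positive"
    if s < 0:
        return "Negative"
    return "Neutral"
```

In the hybrid pipeline the paper describes, labels like these could serve as training targets for a supervised classifier such as MultinomialNB, with regular expressions handling tweet cleanup beforehand.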
Authors: Shivkumar, S. | Amudha, J. | Nippun Kumaar, A.A.
Article Type: Research Article
Abstract: Navigation of a mobile robot in an unknown environment, while ensuring the safety of the robot and its surroundings, is of utmost importance. Traditional methods, such as path-planning algorithms, simultaneous localization and mapping, computer vision, and fuzzy techniques, have been employed to address this challenge. However, to achieve better generalization and self-improvement capabilities, reinforcement learning has gained significant attention. Concerns about privacy when sharing data are also rising in various domains. In this study, a deep reinforcement learning strategy is applied to move a mobile robot from its initial position to a destination; specifically, the Deep Q-Learning algorithm is used for this purpose. The strategy is trained with a federated learning approach to overcome privacy issues and to lay a foundation for further analysis of distributed learning. The application scenario considered in this work involves the navigation of a mobile robot to a charging point within a greenhouse environment. The results indicate that both the traditional deep reinforcement learning and federated deep reinforcement learning frameworks achieve a 100% success rate. However, federated deep reinforcement learning could be the better alternative, since it overcomes the privacy issue along with the other advantages discussed in this paper.
Keywords: Federated deep reinforcement learning, navigation, path-planning, mobile robot, robotics
DOI: 10.3233/JIFS-219428
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
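The federated training described above keeps each robot's experience data local and shares only model parameters. A minimal FedAvg-style aggregation can be sketched as below; FedAvg is the standard scheme, but the paper's exact aggregation protocol may differ, and the toy parameter vectors are assumptions:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging (FedAvg-style): combine per-client Q-network
    parameter vectors into a global model, weighting each client by its
    local sample count, so raw experience data never leaves a client."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(dim)
    ]

# Two clients with toy 2-parameter Q-networks; client_b holds 3x the data,
# so its parameters dominate the weighted average.
client_a = [1.0, 2.0]
client_b = [3.0, 4.0]
global_params = fed_avg([client_a, client_b], client_sizes=[1, 3])
```

In a full system, the server would broadcast `global_params` back to the clients for the next round of local Deep Q-Learning.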
Authors: Wu, Meiqin | Ma, Linyuan | Fan, Jianping
Article Type: Research Article
Abstract: This article proposes an expert-driven consensus and decision-making model that comprehensively considers expert behavior in multi-criteria decision-making (MCDM) scenarios. Under the premise that experts are willing to adjust their viewpoints, the framework strives to reach group consensus to the greatest degree feasible. To capture experts' uncertainty during the evaluation process, this article employs the rejection degree in picture fuzzy sets (PFS) to signify their level of ignorance when delivering evaluation opinions. Because of the diversity of expert views, reaching a group consensus is difficult in reality; therefore, this article additionally presents a strategy for adjusting the weights of experts who do not reach consensus. This approach upholds data integrity and guarantees the precision of the ultimate decision. Finally, the article confirms the efficiency of the model through a case study on selecting the optimal carbon reduction alternative for Chinese power plants.
Keywords: Picture fuzzy sets (PFS), weight of experts, behavior-driven, Multi-criteria decision-making (MCDM), Consensus reaching process (CRP)
DOI: 10.3233/JIFS-238151
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
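The rejection (refusal) degree the abstract relies on follows directly from the standard picture fuzzy set definition, in which positive, neutral, and negative memberships sum to at most 1; the example values are illustrative:

```python
def pfs_refusal_degree(mu, eta, nu):
    """Picture fuzzy set element: positive (mu), neutral (eta), and negative
    (nu) membership degrees with mu + eta + nu <= 1. The remainder is the
    refusal (rejection) degree, which can model an expert's ignorance
    when delivering an evaluation."""
    assert 0 <= mu and 0 <= eta and 0 <= nu and mu + eta + nu <= 1
    return 1.0 - (mu + eta + nu)

# An expert who is 50% positive, 20% neutral, 20% negative about an
# alternative leaves a 10% refusal degree (unexpressed ignorance).
r = pfs_refusal_degree(0.5, 0.2, 0.2)
```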
Authors: Liang, Hailin | Qu, Shaojian | Dai, Zhenhua
Article Type: Research Article
Abstract: In group decision-making (GDM), decision-makers (DMs) who feel treated unfairly may take uncooperative measures that disrupt the consensus-reaching process (CRP). Moreover, it is difficult for the moderator to objectively determine each DM's unit consensus cost and weight in the CRP. Hence, this paper proposes data-driven robust maximum fairness consensus models (RMFCMs) to address these issues. First, the paper uses robust optimization to construct multiple uncertainty sets describing the uncertainty in the DMs' unit adjustment costs and proposes the RMFCMs. Subsequently, based on the DMs' historical data, the DMs' weights in the CRP are determined by a data-driven method based on kernel density estimation (KDE). Finally, the paper applies the proposed models to the carbon emission reduction negotiation process between governments and enterprises, and the experimental results verify the rationality and robustness of the proposed consensus model.
Keywords: Fairness, uncertain environment, consensus model, data-driven method
DOI: 10.3233/JIFS-237153
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-19, 2024
Authors: Akbas, Ayhan | Buyrukoglu, Gonca | Buyrukoglu, Selim
Article Type: Research Article
Abstract: Wireless Sensor Networks (WSNs) have garnered significant attention from both the academic and industrial communities. However, the limited battery capacity of WSN nodes imposes restrictions on energy dissipation, which has compelled researchers to seek ways to save and minimize energy consumption. This paper presents a hybrid optimization model to minimize energy dissipation in WSNs. Employing linear programming and a combination of XGBoost and Random Forest algorithms, it effectively predicts internode distances and network lifetime. The results demonstrate significant energy savings in WSN deployments, outperforming traditional methods. This approach contributes to the field by offering a practical, energy-efficient strategy for WSN configuration planning, highlighting the model's applicability in real-world scenarios where energy conservation is critical.
Keywords: Wireless sensor networks, energy minimization, linear programming, optimization model, XGBoost, random forest
DOI: 10.3233/JIFS-234798
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Wei, Jingya | Ju, Yongfeng
Article Type: Research Article
Abstract: Because of equipment error, environmental interference, and data transmission delay in vehicle flow detection, the accuracy and real-time performance of vehicle perception and traffic flow data are affected to some extent, resulting in poor traffic signal control. Therefore, a data-driven traffic signal adaptive control algorithm is designed that integrates vehicle perception and traffic flow data. To model urban traffic, the discrete and continuous distributions of traffic are obtained. Within this environment, the DV-hop localization algorithm is improved to sense vehicle positions. Traffic flow data is then predicted based on phase space reconstruction of the traffic flow time series and the vehicle location information. Driven by the traffic data, vehicles are divided into three categories (small, medium, and large), each assigned an impact weight; these weights determine the final allocation of green time. The experimental results show that the proposed algorithm predicts traffic flow intensity effectively, with predictions closely matching the actual traffic flow intensity; vehicle arrival rates are higher, vehicle delays are shorter, and vehicles stop fewer times on average.
Keywords: Vehicle perception, positioning algorithm, traffic flow prediction, data-driven, traffic signal adaptive control
DOI: 10.3233/JIFS-235654
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
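The weighted green-time allocation described above can be sketched as a proportional split of the cycle's green time across approaches by weighted vehicle demand; the class weights, approach names, and counts below are illustrative assumptions, not the values derived in the paper:

```python
# Sketch of green-time allocation weighted by vehicle class (illustrative
# weights; the paper derives its own impact-weight values).
CLASS_WEIGHTS = {"small": 1.0, "medium": 1.5, "large": 2.5}

def allocate_green_time(counts_by_approach, cycle_green=60.0):
    """Split the total green time across approaches in proportion to their
    weighted vehicle demand (count x class weight)."""
    demand = {
        approach: sum(CLASS_WEIGHTS[cls] * n for cls, n in counts.items())
        for approach, counts in counts_by_approach.items()
    }
    total = sum(demand.values())
    return {a: cycle_green * d / total for a, d in demand.items()}

split = allocate_green_time({
    "north": {"small": 10, "medium": 4, "large": 2},  # weighted demand 21
    "south": {"small": 5, "medium": 2, "large": 0},   # weighted demand 8
})
```

The busier, heavier-traffic approach receives proportionally more of the cycle's green time.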
Authors: Xing, Zhenguo | Wu, Xiao | Li, Jiangjiang
Article Type: Research Article
Abstract: Purpose: To address the limitation that label-propagation-based overlapping community discovery algorithms for complex networks require pre-input parameters in real networks, along with the problem of label redundancy. Method: A node degree increment-based proximal policy optimization method for community discovery in online social networks is proposed (named NDI-PPO). Process: By applying the cohesion idea and introducing the concept of modularity increment, maximal social network communities are constructed bottom-up according to community division criteria. Because the policy gradient algorithm is sensitive to the number of iteration steps, an improved PPO is adopted to improve the efficiency of feature extraction. In label updating, the maximum clique serves as the core unit: the labels and weights of nodes adjacent to the maximum clique are updated from the center to the periphery using intimacy, while the weights of nodes not adjacent to the maximum clique are updated by means of the maximum weight. In the post-processing stage, an adaptive threshold method removes noise from the node labels, which effectively overcomes the limitation of having to pre-specify the number of overlapping communities in a real network. Result: The simulation results show that the proposed community discovery algorithm NDI-PPO outperforms other advanced algorithms, greatly reduces time complexity, and is suitable for community discovery in large social networks.
Keywords: Community discovery, node degree increment, proximal policy optimization, online social networks
DOI: 10.3233/JIFS-236587
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-9, 2024
Authors: Jayswal, Hardik S. | Chaudhari, Jitendra | Patel, Atul | Makwana, Ashwin | Patel, Ritesh | Dubey, Nilesh | Ghajjar, Srushti | Sharma, Shital
Article Type: Research Article
Abstract: A nation's progress is directly linked to the effective functioning of its agricultural sector. The detection and classification of plant disease is an essential component of the agricultural industry, as plant diseases may result in substantial financial losses due to decreased crop production. According to the Food and Agriculture Organization of the United Nations, plant diseases reduce global crop yields by an estimated 10-16% annually. Farmers traditionally rely on visual inspection with the naked eye as the primary method for detecting plant diseases, which involves a meticulous examination of crops to identify any visible signs of disease. However, manual disease detection can lead to delayed identification, resulting in significant crop losses. Various methods coupled with machine learning classifiers have demonstrated effectiveness in scenarios involving manual feature extraction and limited datasets. To handle larger datasets, however, deep learning models such as Inception V4, ResNet-152, EfficientNet-B5, and DenseNet-201 were studied and implemented. Among these models, DenseNet-201 exhibited superior performance and accuracy compared with the previous methodology. Additionally, a fine-tuned deep learning model called SympDense was developed, which surpassed the other deep learning models in terms of accuracy.
Keywords: Plant diseases, classification, deep learning, SympDense
DOI: 10.3233/JIFS-239531
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Yuan, Chao | Zhao, Ziqi
Article Type: Research Article
Abstract: With the acceleration of urbanization, the concept of the smart city is gradually rising. Wireless sensor networks are an important technical support for smart cities, and their application in environmental monitoring and water resources management has a profound impact on economic growth. Water is one of the resources human beings depend on most; with the growth of the world population and rapid economic development, water resource crises are recurrent, and water pollution, water shortage, and water waste coexist. How to build a sound water resource economic policy is currently a worldwide problem. At present, the formulation of water resources policies is often based on experience or on the knowledge of decision makers; owing to the dynamic nature of water resources utilization and decision makers' incomplete information, policy failures often occur, which affect economic growth. Against this background, this paper uses a system dynamics model to study the mechanism by which water resources management policies affect economic growth, taking Gansu, Tianjin, and Zhejiang as qualitatively representative arid, transitional, and water-rich areas, respectively. The results show that, under the same coupling of water resources policies, different regions exhibit different eco-economic effects, and coupled water resources policies perform better than any single water resources management policy.
Keywords: Smart city, environmental monitoring, water resources management, economic growth
DOI: 10.3233/JIFS-242195
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Keswani, Vinay H. | Peshwe, Paritosh
Article Type: Research Article
Abstract: This paper presents the design of a novel multiparametric model aimed at improving sub-field scheduling performance for lithographic processes. The proposed model incorporates parameters such as sub-field locations, conflict analysis, critical dimensions, delay, current, voltage, dose, and depth of current to optimize scheduling operations. To achieve this, both Genetic Algorithm (GA) and Q-learning algorithms are utilized to optimize scheduling performance in real-time lithographic processes. The need for this work stems from the increasing demand for high-precision lithographic processes, which require efficient scheduling operations to achieve optimal results. The proposed model has been tested on real-time lithographic processes, with the results evaluated in terms of critical dimensions, scheduling performance, and scheduling efficiency: the model reduces critical dimensions by 8.5%, improves scheduling performance by 10.5%, and increases scheduling efficiency by 8.3%. These results demonstrate the efficacy of the proposed multiparametric GA and Q-learning model in improving sub-field scheduling performance in lithographic processes.
Keywords: Efficient, multiparametric, sub-field scheduling, GA, Q-Learning, optimizations
DOI: 10.3233/JIFS-233784
Citation: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
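The Q-learning component above rests on the standard temporal-difference update rule. A minimal tabular sketch follows; the "slot" states, actions, and reward are placeholder assumptions, not the paper's actual scheduling parameter space:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy scheduling example: two sub-field "slots" (hypothetical states) and
# two hypothetical actions per slot.
Q = {s: {a: 0.0 for a in ("expose_now", "defer")} for s in ("slot0", "slot1")}
q_update(Q, "slot0", "expose_now", reward=1.0, next_state="slot1")
```

In a GA/Q-learning hybrid of the kind the abstract describes, updates like this would refine action values online while the GA searches the wider schedule space.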