Impact Factor 2024: 0.6
The field of intelligent decision technologies is interdisciplinary, bridging computer science and its development of artificial intelligence, information systems and its development of decision support systems, and engineering and its development of intelligent systems. IDT seeks to attract research that is focused on applied technology while exploring its development at a fundamental level. The journal encourages an interchange of research on intelligent systems and intelligent technologies that enhance or improve decision making in industry, government and academia. IDT publishes research on all aspects of intelligent decision technologies, from fundamental development to applied systems. The journal is concerned with the theory, design, development, implementation, testing and evaluation of intelligent decision systems that have the potential to support decision making in management, international business, finance, accounting, marketing, healthcare, military applications, production, networks, traffic management, crisis response, human interfaces and other applied fields.
The target audience is researchers in computer science, business, commerce, health science, management, engineering and information systems who develop and apply intelligent decision technologies. Authors are invited to submit original unpublished work that is not under consideration for publication elsewhere.
Authors: Lin, Chunhua
Article Type: Research Article
Abstract: Deep learning (DL) underpins many applications of artificial intelligence (AI), and cloud services are the main channel through which modern computing capability is delivered, so DL functions offered as cloud services have attracted great attention. AI is gradually playing an important role across many areas of life, and the demand from governments at all levels for AI computing capacity is also growing. AI evaluation processes are often based on complex algorithms that use or generate large amounts of data, placing high demands on the processing and storage capacity of the devices involved; because current data processing and storage technologies lag behind these demands, this has become an obstacle to the further development of AI cloud services. This paper therefore studied the requirements and objectives of a cloud service system under AI by analyzing the operating characteristics, service modes and current state of DL, derived design principles from those requirements, and finally designed and implemented a cloud service system, thereby improving its algorithm scheduling quality. The data processing, resource allocation and security management capabilities of the AI cloud service system were superior to those of the original cloud service system, exceeding it by 7.3%, 6.7% and 8.9% respectively. In conclusion, DL plays an important role in the construction of AI cloud service systems.
Keywords: Cloud service mode, deep learning, artificial intelligence, cloud service system construction
DOI: 10.3233/IDT-230150
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-12, 2023
Authors: Wu, Zongfu | Hou, Fazhong
Article Type: Research Article
Abstract: Due to the large scale and spatiotemporal dispersion of 3D (three-dimensional) point cloud data, current object recognition and semantic annotation methods still suffer from high computational complexity and slow processing, so that processing takes far longer than collection. This article studied the FPFH (Fast Point Feature Histograms) descriptor for local spatial features of point cloud data, achieving efficient extraction of those features, and investigated its robustness under different sample densities and noise environments. The time delay between laser emission and reception was used to measure distance; on this basis, the measured object was scanned continuously to obtain the distance between the object and the measurement point. Following existing three-dimensional coordinate conversion methods, a two-dimensional lattice was obtained after three-dimensional position conversion. Based on the basic requirements of point cloud data processing, a modular design was adopted, with core functional modules for point cloud input and output, visualization, filtering, key point extraction, feature extraction, registration, and data acquisition, enabling efficient and convenient human-computer interaction with point clouds. The laser image recognition system was used to screen potential objects, with a success rate of 85% and an accuracy rate of 82%. The laser image recognition system based on spatiotemporal data used in this article has high accuracy.
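The ranging step described in the abstract (distance from the delay between laser emission and reception) follows the standard time-of-flight relation d = c·Δt/2. A minimal sketch of that relation; the function name and delay value are illustrative, not taken from the paper:

```python
# Laser time-of-flight ranging sketch: one-way distance from the
# round-trip delay between the emitted and received pulse.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(delay_s: float) -> float:
    """Round-trip delay in seconds -> one-way distance in metres."""
    return C * delay_s / 2.0

# A 100 ns round-trip delay corresponds to roughly 15 m.
print(round(tof_distance(100e-9), 3))  # prints 14.99
```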
Keywords: Laser recognition, image system, spatiotemporal data, point cloud data
DOI: 10.3233/IDT-230161
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2023
Authors: Wei, Bo | Chen, Huanying | Huang, Zhaoji
Article Type: Research Article
Abstract: To address the low accuracy of evaluation results caused by throughput and transmission delay in traditional systems on 6G networks, this paper proposes a design for a network security processing system in the 5G/6gNG-DSS of an intelligent model computer. Guided by the principle of active defense, the server-side structure uses a ScanHome SH-800/400 embedded barcode/QR-code scanning module as the scanning engine, and an evaluation device is placed on the RISC chip PA-RISC microprocessor; once the system fails, it sends an early-warning signal. Control, data and cooperation interfaces support information exchange between subsystems, and pin 4 of the TL494 pulse-width modulator is used in the power supply design. The system software flow is designed with a top-down data management method; a mathematical model is built, and network entropy is introduced to weigh benefits and realize the system security evaluation. The experimental results show that the evaluation accuracy of the system reaches up to 98%, which can help ensure user information security. Conclusion: the active-defense network security problem is transformed into a dynamic analysis problem, which provides an effective decision-making scheme for managers. The system evaluation based on Packet Tracer software has high accuracy and supports important decisions in network security analysis.
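The abstract introduces network entropy to weigh benefits in the security evaluation but gives no formulas. One common way to realize this kind of entropy-based weighting is the entropy-weight method, sketched below under the assumption of a positive indicator matrix; the function and sample data are illustrative, not the paper's:

```python
import math

def entropy_weights(matrix):
    """Entropy-weight method: indicators whose values are more dispersed
    across samples carry more information and receive larger weights.
    matrix[i][j] = positive value of indicator j for sample i."""
    m, n = len(matrix), len(matrix[0])
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        # Normalized Shannon entropy of the column, in [0, 1].
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergence.append(1.0 - e)
    s = sum(divergence)
    return [d / s for d in divergence]

# Hypothetical security indicators for three network states; the second
# indicator varies more, so it should receive the larger weight.
weights = entropy_weights([
    [0.9, 0.2],
    [0.8, 0.9],
    [0.7, 0.4],
])
print(weights)
```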
Keywords: Active defense, network security assessment, packet tracer, scanning engine, software flow design, mathematical model
DOI: 10.3233/IDT-230143
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-16, 2023
Authors: Niu, Meng
Article Type: Research Article
Abstract: Electrical device automation in smart industries integrates machines, electronic circuits, and control systems for efficient operation. Automated controls reduce human intervention and manual operations through proportional-integral-derivative (PID) controllers. Considering these devices' operational and control-loop contributions, this article introduces an Override-Controlled Definitive Performance Scheme (OCDPS). The scheme confines machine operations within allocated time intervals to prevent loop failures. The control value for multiple electrical machines is estimated from operational load and time to prevent failures. Override cases use predictive learning that incorporates previous operational logs; based on the override prediction, the control value is adjusted independently for different devices to confine variation loops. The automation features are programmed before and after loop failures to cease further operational overrides. Predictive learning independently identifies possible overrides and machine failures to increase efficacy. The proposed method is compared with previously established models, including ILC, ASLP, and TD3, on uptime, errors, override time, productivity, and prediction accuracy; loops in operations and typical running times are two examples of the variables considered. The learning results are used to estimate efficiency by adjusting operating time and loop consistency through the control values. To avoid unscheduled downtime, discovered loop failures modify the control parameters of individual machine processes.
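The scheme above builds on PID controllers, which the abstract does not spell out. A minimal discrete PID loop driving a toy integrator plant; the gains, time step, and plant are illustrative and do not come from the paper:

```python
class PID:
    """Minimal discrete PID controller (illustrative gains, not the paper's)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=1.2, ki=0.5, kd=0.05, dt=0.1)
value = 0.0  # measured process variable
for _ in range(300):
    u = pid.step(setpoint=10.0, measured=value)
    value += 0.5 * u * pid.dt  # toy integrator plant responding to the control signal
print(round(value, 2))  # settles near the setpoint of 10.0
```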
Keywords: Control loop, electrical automation, override control, PID, predictive learning
DOI: 10.3233/IDT-230125
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-16, 2023
Authors: Xiao, Jian
Article Type: Research Article
Abstract: Machine learning algorithms are widely used in risk prediction and management systems for financial data. Early warning and control of financial risks are important to corporate investment decision-making: they can effectively reduce investment risk and help ensure a company's stable development. With the development of the Internet of Things, enterprise financial information is collected through various intelligent devices in the enterprise financial system, and big data provides high-quality services for the economy and society in this information era. However, financial data are large in volume, complex and variable, making analysis very difficult, and as machine learning algorithms are applied in depth their shortcomings are gradually exposed. To this end, this paper collects the financial data of a listed group from 2005 to 2020 and performs preprocessing and feature selection, including removal of missing values, outliers and unrelated items. The data are then divided into a training set, used for model training, and a testing set, used to evaluate model performance. Three data control models are built and compared: one based on a machine learning algorithm, one based on a deep learning network, and the model based on artificial intelligence and big data technology proposed in this paper. For risk event prediction, two indicators measure model performance: accuracy and mean squared error (MSE). Accuracy reflects predictive ability as the proportion of correctly predicted samples among all samples; MSE evaluates prediction error as the average of the squared deviations between predicted and true values.
The prediction results of the three methods are compared with the actual values, and their accuracy and mean squared error are computed and compared. The experimental results show that the model based on artificial intelligence and big data technology proposed in this paper has higher accuracy and smaller mean squared error than the other two models, achieving 90% accuracy in risk event prediction, which demonstrates a stronger ability to control financial data risk.
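The two metrics defined above can be sketched directly; the sample labels and values below are illustrative, not the paper's data:

```python
def accuracy(y_true, y_pred):
    """Share of samples whose predicted label matches the true label."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def mse(y_true, y_pred):
    """Mean of the squared deviations between predicted and true values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(accuracy([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))  # prints 0.8
print(mse([2.0, 3.0], [2.5, 2.5]))                 # prints 0.25
```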
Keywords: Machine learning algorithm, financial risk control, big data control, cyborg sensation, Internet of Things
DOI: 10.3233/IDT-230156
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2023
Authors: Zhao, Lihui
Article Type: Research Article
Abstract: Due to the lack of data security protection, a large amount of malicious information is leaked, drawing increasing attention to building information security (InfoSec). Construction information involves many participants, and the number of construction project files is huge, leading to a large volume of information. However, traditional network security software is mostly passive and offers little autonomy. This text therefore introduced a data sharing algorithm into building InfoSec management, proposing an Attribute-Based Encryption (ABE) algorithm based on data sharing that is computationally simple and strong in encrypting attributes. The algorithm was added to the building InfoSec management system (ISMS) designed in this text, which not only reduces the burden on relevant personnel but also offers flexible control and high security. The experimental results showed that with 10 users logged in, the stability and security of the system were 87% and 91% respectively; with 20 users, 89% and 92%; and with 80 users, 94% and 95%. The stability and security of the system thus reach a high level, which can ensure the secure and effective management of building information.
Keywords: Data sharing, attribute based encryption, construction information, security management system
DOI: 10.3233/IDT-230144
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2023
Authors: Chen, Huanying | Wei, Bo | Huang, Zhaoji
Article Type: Research Article
Abstract: In the age of big data, electronic data has developed rapidly and gradually replaced traditional paper documents; in daily life, all kinds of data are saved as electronic documents. Accordingly, the development of electronic depository systems has intensified. Electronic storage refers to recording actual events as electronic data through information technology to prove the time and content of those events, with very wide application scenarios such as electronic contracts, online transactions and intellectual property rights. However, because electronic data is vulnerable, existing electronic data depository systems carry security risks: their content is easy to tamper with and destroy, resulting in the loss of depository information. Because existing systems are also complex to operate, some users may reduce the authenticity of depository information through non-standard operation. To solve these problems, this paper designed an electronic data storage system based on cloud computing and blockchain technology. Data storage in this design is decentralized and its content cannot be tampered with, effectively ensuring the integrity and security of electronic information and better fitting the needs of electronic storage scenarios. The paper first introduced the development of electronic data depository systems and cloud computing, optimized the depository system through a cloud computing task scheduling model, and finally verified the feasibility of the system through experiments.
The data showed that the system's functional efficiency for the electronic data sampling-point storage function, upload of documents to be stored, download of stored documents, viewing of stored information, and file storage and certificate comparison verification reached 0.843, 0.821, 0.798, 0.862 and 0.812 respectively, against 0.619, 0.594, 0.618, 0.597 and 0.622 for the corresponding functions of a traditional electronic data depository system. This shows that the electronic data storage system based on cloud computing and blockchain can effectively manage electronic data and help relevant personnel verify it.
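The tamper-resistance claim above rests on blockchain-style hash chaining. A toy sketch of that integrity property only (not the paper's actual system; the record names are invented):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

def chain(records):
    """Toy tamper-evident log: each entry's hash covers its data and the
    previous entry's hash, so changing any record breaks verification."""
    prev, out = GENESIS, []
    for rec in records:
        payload = json.dumps({"prev": prev, "data": rec}, sort_keys=True)
        prev = hashlib.sha256(payload.encode()).hexdigest()
        out.append({"data": rec, "hash": prev})
    return out

def verify(entries):
    """Recompute every hash from the stored data and compare."""
    prev = GENESIS
    for e in entries:
        payload = json.dumps({"prev": prev, "data": e["data"]}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = chain(["contract-A", "contract-B"])
print(verify(log))          # prints True
log[0]["data"] = "contract-X"
print(verify(log))          # tampering is detected: prints False
```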
Keywords: Electronic data, deposit system, cloud computing, blockchain technology, task scheduling model
DOI: 10.3233/IDT-230152
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2023
Authors: Zhang, Zhenyuan
Article Type: Research Article
Abstract: With the rapid development of the economy, the demand for electric power is increasing, and the operational quality of the power system directly affects production and daily life. The electric energy provided by the power system is the foundation of social operation; continuously optimizing its functions improves the efficiency of social operation and creates economic benefits, thereby promoting social progress and quality of life. Within the power system, the power distribution network (PDN) is responsible for transmitting electricity to all parts of the country, and its transmission efficiency directly affects the operational efficiency of the whole system. PDN scheduling plays an important role in improving power supply reliability, optimizing resource allocation, reducing energy waste, and reducing environmental pollution, and is of great significance for social and economic development and environmental protection. However, inflexibility in power system scheduling leads to loss and waste of electric energy in PDN scheduling, so it is necessary to automate PDN operation and use automation technology to improve the operational efficiency and energy utilization of the power system. This article optimized the energy-saving management of PDN dispatching through electrical automation technology, proposing a distribution scheduling algorithm based on that technology. The algorithm enables real-time monitoring, analysis, and scheduling of PDNs, improving the efficiency and reliability of distribution systems and reducing energy consumption.
The experimental results showed that before applying the algorithm, the high-loss distribution-to-transformation ratios of power distribution stations in the first to fourth quarters were 21.93%, 22.95%, 23.61%, and 22.47% respectively; after applying it, they were 15.75%, 13.81%, 14.77%, and 13.12%. The algorithm thus reduces the high-loss distribution-to-transformation ratio of power distribution stations and their distribution losses, saving electric energy. These results indicate that electrical automation technology can play an excellent role in PDN scheduling, optimizing its energy-saving management and pointing to an advanced direction for its intelligent management.
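The quarterly figures above imply an average reduction that can be checked with simple arithmetic:

```python
# Quarterly high-loss distribution-to-transformation ratios (%) before and
# after the scheduling algorithm, as reported in the abstract.
before = [21.93, 22.95, 23.61, 22.47]
after = [15.75, 13.81, 14.77, 13.12]

drops = [b - a for b, a in zip(before, after)]
avg_drop = sum(drops) / len(drops)
print(round(avg_drop, 2))  # average reduction in percentage points
```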
Keywords: Electrical automation, power distribution network scheduling, energy saving management, power energy, distribution dispatching algorithm
DOI: 10.3233/IDT-230121
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2023
Authors: Li, Guozhang | Xing, Kongduo | Alfred, Rayner | Wang, Yetong
Article Type: Research Article
Abstract: With the passage of time, the importance of spatio-temporal data (STD) increases day by day, but its spatiotemporal characteristics pose huge challenges for data processing. Aiming at the problems of image information loss, limited compression ratio, slow compression speed and low compression efficiency, this article proposes a method based on image compression, focusing on aircraft trajectory data, meteorological data, and remote sensing image data as the main research objects; the results are intended to provide more accurate and effective data support for research in related fields. The deep-learning-based image compaction algorithm in this article consists of an encoder and a decoder, and was compared with the JPEG (Joint Photographic Experts Group) method. When compressing meteorological data, the proposed algorithm achieved a maximum compaction rate of 0.400, while the maximum compaction rate of the JPEG algorithm was only 0.322. When a set of aircraft trajectory data containing 100 data points is compressed at 2:1, the proposed algorithm requires 4.2 MB of storage, against 5.6 MB for the lossless compression algorithm, which needs 33.33% more space. The image compaction algorithm based on deep learning and data preprocessing can significantly improve the speed and quality of image compaction at the same compaction rate, and effectively compress spatial and temporal data.
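The storage comparison above can be checked directly: the 33.33% figure is the extra space the lossless baseline needs relative to the proposed method, which corresponds to a 25% saving relative to the baseline:

```python
# Storage figures reported in the abstract for the 2:1 trajectory test.
proposed_mb = 4.2   # deep-learning compaction
baseline_mb = 5.6   # lossless compression baseline

extra = (baseline_mb - proposed_mb) / proposed_mb * 100   # baseline overhead
saving = (baseline_mb - proposed_mb) / baseline_mb * 100  # saving vs. baseline
print(round(extra, 2), round(saving, 2))  # prints 33.33 25.0
```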
Keywords: STD processing, picture data compaction, high-performance image compaction algorithms, compaction rate, JPEG compaction algorithm
DOI: 10.3233/IDT-230234
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Zhang, Xuxia | Chen, Weijie | Wang, Jian | Fang, Rang
Article Type: Research Article
Abstract: With the rapid development of information technology and the popularization of the Internet, people enjoy the convenience and efficiency of new technologies while also suffering harm from cyber attacks. Beyond thwarting the attacks themselves, the high volume of complicated security event data can also increase the burden on policy makers. At present, network security (NS) threats mainly include viruses, trojans, and DoS (Denial-of-Service) attacks. For increasingly complex NS problems, traditional rule-based network monitoring struggles to predict unknown attack behavior, while environment-based, dynamic, integrated data fusion can integrate data from a macro perspective. Machine learning (ML) technology has developed rapidly in recent years; it can easily train, test and predict with existing third-party models, and uses ML algorithms to find associations in data rather than manually set rules. The support vector machine is a common ML method that can predict the security of the network well after training and testing. NS situation awareness reconstructs network attacks in real time and accurately in order to monitor the overall security status of the entire network. Situation awareness is a powerful network monitoring and security technology, but existing NS techniques have many problems: the state of the network cannot be accurately detected, and its patterns of change cannot be understood. To effectively predict network attacks, this paper adopted a technology based on ML and data analysis and constructed an NS situational awareness model. The results showed that the detection efficiency of the model based on ML and data analysis was 7.18% higher than that of the traditional NS state awareness model.
Keywords: Network security, machine learning algorithm, situation awareness, data analysis
DOI: 10.3233/IDT-230238
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-13, 2023
Authors: Chen, Haifeng
Article Type: Research Article
Abstract: With the rapid development of the economy, power supply has increased year by year, and many vulnerabilities and hidden dangers have emerged in power grid operation. The grid may be subject to malicious attacks such as hacking and power theft, which can lead to security risks such as grid system paralysis and information leakage. To ensure the quality of power supply, it is necessary to optimize the distribution of electricity and improve supply efficiency. This article pointed out the security issues of power Internet of Things (IoT) terminals and analyzed the design and implementation of a vulnerability mining system for power IoT terminals based on a fuzzy mathematical model simulation platform. A fuzzy mathematical model was used to quantitatively evaluate the security performance of power IoT terminals, providing an effective theoretical basis for vulnerability mining. Based on an analysis of vulnerability mining technology and the vulnerability attack process, vulnerability parameters were characterized through fuzzy mapping; using the collected vulnerability data and the online and device status of power IoT terminals, fuzzy logic inference was applied to identify and mine potential vulnerabilities. The aim is to improve the security of power IoT terminals and ensure the safe and stable operation of the power system. Testing the number of system vulnerabilities, vulnerability risk levels, and vulnerability mining time showed that the power IoT simulation platform based on fuzzy mathematical models has fewer terminal vulnerabilities.
The fuzzy mathematical model reduced the vulnerability risk level of the power IoT simulation platform and shortened vulnerability mining time by 0.48 seconds, improving mining speed. Fuzzy mathematical models can promote the development of the power industry and provide strong support for the security protection of power IoT terminals.
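The fuzzy-logic grading step above is not specified in detail in the abstract. A common realization uses triangular membership functions to map a crisp severity score to a risk level; the severity ranges and score values below are invented for illustration:

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def grade(score):
    """Assign a vulnerability risk level by maximum membership.
    Fuzzy sets over a hypothetical severity score in [0, 10]."""
    levels = {
        "low": triangular(score, -0.1, 0.0, 5.0),
        "medium": triangular(score, 2.0, 5.0, 8.0),
        "high": triangular(score, 5.0, 10.0, 10.1),
    }
    return max(levels, key=levels.get)

print(grade(6.2))  # prints medium
```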
Keywords: Power simulation platform, Internet of Things terminals, vulnerability mining systems, fuzzy mathematical model, fuzz test, fuzzy logic
DOI: 10.3233/IDT-230241
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-11, 2023
Authors: Ding, Haijuan | Zhao, Chengtao | Zhao, Debiao | Dong, Hairong
Article Type: Research Article
Abstract: An Electrical Discharge Machine (EDM) is a spark-erosion machine that shapes a workpiece through metal fabrication using controlled sparks. In the EDM process, material is removed between two electrodes by applying a high voltage across the workpiece through a dielectric liquid; the voltage between the electrodes is gradually increased until the dielectric breaks down and material is removed from the surface. EDM uses several parameters to regulate the material removal rate during this process, and its parameters are changed in every experiment to minimize the Average Tool Wear Rate (ATWR). Existing techniques use a neural model to optimize the EDM parameters, but traditional approaches fail at the fine-tuning that affects the Material Removal Rate (MRR) and ATWR. Therefore, meta-heuristic optimization techniques are incorporated into the neural model to enhance EDM parameter optimization. In this work, EDM experimental data were collected and processed by the learning process to create the training pattern. The test data were then investigated using a Backpropagation Neural Model (BPM) to propagate the neural parameters, and the BPM was integrated with the Butterfly Optimization Algorithm (BOA) to select global parameters in the search space. The analysis shows an 8.93% maximum prediction rate, a 0.023 minimum prediction rate and a 2.83% mean prediction rate across the different testing patterns, compared with other methods.
Keywords: Electrical Discharging Machine (EDM), Material Removal Rate (MRR), Unsupervised Pre-trained Neural Model (UPNM), Backpropagation Neural Model (BPM), Butterfly Optimization Algorithm (BOA)
DOI: 10.3233/IDT-230157
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2023
Authors: Wang, Jun
Article Type: Research Article
Abstract: From a practical standpoint, even under stable macroeconomic growth, regional economies inevitably differ and change under the influence of spatiotemporal factors, affecting many aspects of social production and life. To understand the spatiotemporal evolution of a regional economy and promote common regional development and coordinated economic development strategies, this article takes the Beijing-Tianjin-Hebei (BTH) region as an example. Combining spatial econometric models (SEM), it collects and processes economic development data for the BTH region from 2013 to 2022 and introduces a spatial weight matrix to perform high-performance computation and analysis of regional economic spatial correlation. On this basis, the article studies the spatiotemporal evolution characteristics of the BTH regional economy in depth through description and quantitative analysis of its influencing factors. The empirical results showed that the global Moran's I of the BTH region was positive from 2013 to 2022, with Z-values all greater than 1.96, indicating significant spatial correlation in the BTH regional economy. Economic development in the BTH region is unbalanced, but its balance has improved as the region continues to develop.
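The statistic reported above, global Moran's I, is computed from a spatial weight matrix as I = (n/W) Σᵢⱼ wᵢⱼ(xᵢ − x̄)(xⱼ − x̄) / Σᵢ(xᵢ − x̄)². A small sketch with hypothetical regional values, not the BTH data:

```python
def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation.
    weights[i][j]: spatial weight between regions i and j (weights[i][i] = 0)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(weights[i][j] for i in range(n) for j in range(n))
    return n / w_sum * num / den

# Four hypothetical regions on a line; neighbours share weight 1.
# Similar values cluster spatially, so I should be positive.
vals = [1.0, 2.0, 8.0, 9.0]
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(round(morans_i(vals, w), 3))  # prints 0.4
```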
Keywords: Regional economies, spatial econometrics, evolution of spatiotemporal data, BTH region
DOI: 10.3233/IDT-230169
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2023
Authors: Lei, Gang | Wu, Junyi | Gu, Keyang | Jiang, Fan | Li, Shibin | Jiang, Changgen
Article Type: Research Article
Abstract: In the era of rapid development of modern internet technology, network transmission techniques are continuously iterating and updating, and the Quick UDP Internet Connections (QUIC) protocol has emerged in response to these advancements. Owing to QUIC's strong compatibility and high transmission speed, its extended version, Multipath QUIC (MPQUIC), has gained popularity. MPQUIC can integrate various transmission scenarios, achieving parallel transmission with higher bandwidth. However, due to some security flaws in the protocol, MPQUIC is susceptible to attacks from anomalous network traffic. To address this issue, we propose an MPQUIC traffic anomaly detection model based on Empirical Mode Decomposition (EMD) and Long Short-Term Memory (LSTM) networks, which can decompose and denoise the data and learn its long-term dependencies. Simulation experiments obtain MPQUIC traffic data under normal and anomalous conditions for prediction, analysis, and evaluation. The results demonstrate that the proposed model exhibits satisfactory prediction performance when trained on both normal and anomalous traffic data, enabling anomaly detection. Moreover, the evaluation metrics indicate that the EMD-LSTM-based model achieves higher accuracy than various traditional single models.
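The detection idea above (flag traffic whose deviation from the model's prediction is large) can be sketched with a rolling-median forecaster standing in for the paper's EMD-LSTM predictor; the traffic series and threshold below are invented for illustration:

```python
import statistics

def detect_anomalies(series, window=3, threshold=10.0):
    """Flag points whose deviation from a rolling-median forecast exceeds
    a fixed threshold. The rolling median is a crude stand-in for the
    paper's EMD-LSTM predictor; the threshold is illustrative."""
    anomalies = []
    for i in range(window, len(series)):
        pred = statistics.median(series[i - window:i])
        if abs(series[i] - pred) > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical per-interval packet counts with one traffic burst.
traffic = [10, 11, 10, 12, 11, 10, 50, 11, 10, 12]
print(detect_anomalies(traffic))  # the burst at index 6 is flagged: [6]
```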
Keywords: Multipath QUIC, network traffic, anomalous detection model, empirical mode decomposition, long short-term memory
DOI: 10.3233/IDT-230261
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-22, 2024
Authors: Zou, Wenjing | Xu, Huan | Yang, Qiuyong | Dong, Can | Su, Wenwei
Article Type: Research Article
Abstract: Currently, the data management of power enterprises faces the need to analyze data from multiple sources. However, traditional multi-source data fabric systems suffer from low analysis efficiency and high error rates, which greatly inconveniences the data analysis of power enterprises. To improve the accuracy and efficiency of data analysis in data fabric systems, an intelligent system architecture is applied to the construction of multi-source data fabric systems. The main modules are data collection, data matching, data integration, and data analysis. This article uses a simulated annealing genetic algorithm to perform high-performance calculations on system timing data, thereby achieving data matching, and carries out data integration at the data, feature, and decision levels. An access survey method was used to analyze the data management problems currently faced by power companies, and a panel evaluation method was used to compare general multi-source data fabric systems with those based on the intelligent system architecture. The analysis showed that the operational convenience of the intelligent-architecture system could reach 60%–80%, a great improvement over general multi-source data fabric systems; its information sharing was also greatly improved, and the data processing efficiency of general systems was much lower than that of the intelligent-architecture system. However, the symmetry of data collection and matching in the intelligent-architecture system was slightly insufficient and still needed improvement. For more power companies to benefit from the intelligent-architecture multi-source data fabric system, the management of data collection and matching symmetry must be strengthened.
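The simulated annealing part of the hybrid algorithm named in this abstract follows a standard accept/reject scheme: worse moves are accepted with probability exp(-Δ/T) while the temperature T cools. The following is a generic sketch of that loop on a toy cost function, not the paper's matching algorithm; the cost and neighbor functions are placeholders.

```python
import math
import random

def simulated_anneal(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated annealing: always accept improving moves, accept
    worsening moves with probability exp(-delta / T), and cool T each step."""
    rng = random.Random(seed)
    x, t, best = x0, t0, x0
    for _ in range(steps):
        cand = neighbor(x, rng)
        delta = cost(cand) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if cost(x) < cost(best):
            best = x
        t *= cooling
    return best

# Toy matching cost: find the offset minimizing squared distance to 7
best = simulated_anneal(cost=lambda x: (x - 7) ** 2,
                        neighbor=lambda x, rng: x + rng.uniform(-1, 1),
                        x0=0.0)
```

In the paper's hybrid, this acceptance rule would be combined with genetic crossover/mutation operators over candidate data matchings.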
Keywords: Data fabric, intelligent system architecture, multi-source data, electricity companies, simulated annealing genetic algorithm
DOI: 10.3233/IDT-230240
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: Liu, Bing | Li, Xianzhong | Li, Zheng | He, Peidong
Article Type: Research Article
Abstract: With increasing Power Load (PL), the operation of the power system faces increasingly severe challenges. PL control is an important means of ensuring stable power system operation and power supply quality. However, traditional PL control methods have limitations and cannot meet the load control requirements of modern power systems, because with the development of modern industry and commerce the demand for electricity is steadily growing. This article constructed a PL control and management terminal operating system based on machine learning technology to achieve intelligent management of PL and thereby improve the operational efficiency and power supply quality of the power system. By reviewing the current research status of PL control technology, the article identified the design concept of such a system. Based on the operational and data characteristics of the power system, suitable machine learning algorithms were selected to process and analyze load data, and a prototype of the PL control and management terminal operating system was established to realize intelligent processing and analysis of load data, followed by experimental verification. The experimental results show that, in a comparative study of six sets of data at the tertiary level, the differences between the system's predictions and the real tertiary-level values were 0.079 kW, 0.005 kW, and 0.189 kW respectively. Therefore, the average difference between the predicted and measured values of the PL system is about 0.091 kW. This indicates that the system has high accuracy and real-time performance in predicting PL and can effectively improve the load control efficiency and power supply quality of the power system. The system constructed in this article provides new ideas and methods for the development of PL control technology. In the future, the algorithms can be further optimized and a more intelligent PL control and management terminal operating system constructed to cope with growing PL and an increasingly complex power system operating environment.
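The 0.091 kW figure in this abstract is simply the mean of the three reported per-group differences, which can be checked directly:

```python
# Reported per-group absolute differences between predicted and measured load (kW)
diffs_kw = [0.079, 0.005, 0.189]
mae_kw = sum(diffs_kw) / len(diffs_kw)
print(round(mae_kw, 3))  # → 0.091
```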
Keywords: Machine learning technology, power load, management terminal, operation system, power load forecasting model
DOI: 10.3233/IDT-230239
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Li, Yujie
Article Type: Research Article
Abstract: In response to insufficient data management and low construction quality in architectural interior design, high-performance computing technology was applied to optimize the overall effect of architectural interior design. This article proposed solutions to problems in building interior design and applied high-performance computing technology to building design materials, building operation data processing, and related tasks. Furthermore, combined with a multi-objective genetic algorithm, experimental tests were conducted on the optimization of interior space environment design. The results showed that under the algorithm proposed in this paper, the average health standard compliance of each scheme reached 92.17%, the average emotional requirement satisfaction reached 88.42%, the average interface detail attention reached 84.65%, and the average environmental comfort reached 92.87%. Under the genetic algorithm, the corresponding averages were 88.14%, 84.17%, 81.18%, and 90.30%. These data show that the proposed algorithm achieves good optimization results in the design of indoor space environments in buildings.
Keywords: Big data, high performance computing, architectural interior design, space environment design
DOI: 10.3233/IDT-230171
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Lichode, Rupatai | Karmore, Swapnili
Article Type: Research Article
Abstract: Incremental learning relies on the availability of ample training data for novel classes, a requirement that is often unfeasible in various application scenarios, particularly when the new classes are rare groups that are costly or difficult to obtain. The main focus of incremental learning is the difficult task of continuously learning to classify new classes in incoming data without erasing knowledge of old classes. This research develops a comparative analysis of optimization algorithms for training few-shot continual learning models to overcome catastrophic forgetting. The presented mechanism integrates two steps: pre-processing and classification. Images are initially pre-processed through contrast enhancement to improve their quality. The pre-processed outputs are then classified using Continually Evolved Classifiers, designed to address catastrophic forgetting. To further enhance performance, the Serial Exponential Sand Cat Swarm Optimization algorithm (SE-SCSO) is employed and compared against ten other algorithms: Grey Wolf Optimization (GWO), Moth Flame Optimization (MFO), the Cuckoo Search Optimization Algorithm (CSOA), the Elephant Search Algorithm (ESA), the Whale Optimization Algorithm (WOA), the Artificial Algae Algorithm (AAA), Cat Swarm Optimization (CSO), the Fish Swarm Algorithm (FSA), the Genetic Bee Colony (GBC) Algorithm, and Particle Swarm Optimization (PSO). In the experiments, SE-SCSO attained the best performance, with an accuracy of 89.6%, specificity of 86%, precision of 83%, recall of 92.3%, and F-measure of 87.4%.
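The F-measure reported here is the harmonic mean of precision and recall, and the reported 87.4% is consistent with the stated precision and recall:

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f_measure(0.83, 0.923), 3))  # → 0.874
```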
Keywords: Few shot, continual learning, classifier, optimization algorithm, learning
DOI: 10.3233/IDT-240543
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-21, 2024
Authors: Sonawane, Sandip | Patil, Nitin N.
Article Type: Research Article
Abstract: In the face of a growing global population, optimizing agricultural practices is crucial. One major challenge is weed infestation, which significantly reduces crop yields and increases production costs. This paper presents a novel system for weed-crop classification and image detection specifically designed for sesame fields. We leverage the capabilities of Convolutional Neural Networks (CNNs) by employing and comparing different modified YOLO-based object detection models, including YOLOv8, YOLO NAS, and the recently released Gold YOLO. Our investigation utilizes two datasets: a publicly available weed image collection and a custom dataset we meticulously created containing sesame plants and various weed species commonly found in sesame fields. The custom dataset boasts a significant size of 2148 images, enriching the training process. Our findings reveal that the YOLOv8 model surpasses both YOLO NAS and Gold YOLO in key evaluation metrics such as precision, recall, and mean average precision. This suggests that YOLOv8 has exceptional potential for real-time, on-field weed identification in sesame cultivation, promoting informed weed management strategies and ultimately contributing to improved agricultural yield.
Keywords: Object detection, sesame weed, image processing, classification, robotics weeding
DOI: 10.3233/IDT-240978
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Gao, Liping | Lin, Zhouyong | Lian, Dade | Lin, Yilong | Yang, Kaixuan | Li, Zhi | Li, Letian
Article Type: Research Article
Abstract: High-performance computing is a theory, method, and technology that uses supercomputers to achieve parallel computing, and it is widely applied across many fields. The problems of thermal deviation and overheating tube explosion on the high-temperature heating surfaces of coal-fired boilers in large power plants are becoming increasingly prominent, seriously threatening the safe operation of power plant units. To solve these problems, an intelligent monitoring method for the overheating status of coal-fired boilers in large power plants is proposed. The method analyzes the damage mechanism of the boiler's high-temperature heating surface and monitors in real time temperatures exceeding the heating surface limit; constructs gas-phase turbulent combustion and metal wall temperature models; and analyzes the energy consumption losses during boiler operation and calculates the boiler's heat transfer function, completing the intelligent monitoring of the overheating status of the boiler heating surface. The experimental results show that the rationality of the proposed method is optimal at a relative height of 100 m; the equivalent heat flux density of the boiler is improved, with better control of consumption losses; and the wind power consumption capacity of the heating system is improved.
Keywords: Large power station, coal-fired boiler, heating surface, overtemperature state, intelligent monitoring method
DOI: 10.3233/IDT-240502
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Zhao, Mingguan | Li, Meng | Dong, Xinsheng | Yang, Yang | Wang, Hongxia | Ni, Yunlong
Article Type: Research Article
Abstract: With the explosive growth of electricity consumption, users' demand for electricity is increasing. As a core component of power supply, the safe and stable operation of transmission lines plays an important role in the normal operation of the entire power system. However, traditional methods for monitoring transmission line operating status face challenges such as limited accuracy, lack of real-time feedback, and high operational costs. In this paper, the Firefly algorithm is used to monitor the operating status of transmission lines. Through synchronous testing against the traditional particle swarm optimization algorithm, it is found that the average accuracy of the Firefly algorithm in voltage and current measurement improves to 93.13% and 93.66% respectively, outperforming the traditional algorithm. The Firefly algorithm also shows high precision in monitoring various power equipment, with average monitoring accuracies of 95.62% and 93.06% respectively, proving its stronger performance in transmission line monitoring and its ability to meet more stringent monitoring requirements. Comparative experiments proved that the Firefly algorithm performs strongly in monitoring transmission line operating status and can more accurately identify transmission line faults, providing new ideas and methods for monitoring the safe operating status of transmission lines.
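The core update rule of the standard Firefly algorithm — each firefly moves toward brighter ones, with attractiveness decaying exponentially in squared distance, plus a small random perturbation — can be sketched briefly. This is the textbook rule, not the paper's monitoring system; the positions and brightness values below are hypothetical.

```python
import numpy as np

def firefly_step(positions, brightness, beta0=1.0, gamma=1.0, alpha=0.05, rng=None):
    """One firefly iteration: firefly i moves toward every brighter firefly j by
    beta0 * exp(-gamma * r^2) * (x_j - x_i), plus a small random step."""
    rng = rng or np.random.default_rng(0)
    new = positions.copy()
    for i in range(len(positions)):
        for j in range(len(positions)):
            if brightness[j] > brightness[i]:
                r2 = np.sum((positions[j] - positions[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                new[i] += (beta * (positions[j] - positions[i])
                           + alpha * rng.uniform(-0.5, 0.5, positions.shape[1]))
    return new

positions = np.array([[0.0, 0.0], [1.0, 0.0]])  # hypothetical 2-D candidate solutions
brightness = np.array([0.0, 1.0])               # second firefly is brighter
moved = firefly_step(positions, brightness)     # firefly 0 moves toward firefly 1
```

In the monitoring application, "brightness" would encode the fitness of a candidate parameter set for fault identification.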
Keywords: Transmission line, firefly algorithm, state detection, particle swarm algorithm, power system
DOI: 10.3233/IDT-240211
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Shete, Suwarna | Kumawat, R.K.
Article Type: Research Article
Abstract: State of charge (SOC) is a crucial parameter in battery management systems (BMS), indicating the remaining amount of charge in a battery. Accurate SOC assessment yields longer battery life and prevents catastrophic battery damage. Furthermore, a dependable and precise SOC estimate is crucial for effective EV operation. The lithium battery is a characteristically nonlinear system, and the Extended Kalman Filter (EKF) algorithm has proven a viable approach for SOC estimation. To improve the stability and accuracy of the predicted SOC, a new model called the Modified EKF is developed for SOC estimation based on the EKF and KF. Here, the statistical cumulative error is utilized to determine the SOC using the Extended Kalman Filter. Higher-order statistical characteristics including homogeneity, skewness, kurtosis, contrast, and entropy are taken into consideration to calculate the error involved in SOC estimation. The suggested scheme is simulated in MATLAB and its temporal efficacy is verified. The estimated Modified-EKF MAE is 0.2681%, the estimated SOC error RMSE is 0.34051%, and the estimated MSE is 0.11595%.
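To make the predict/update cycle behind Kalman-based SOC estimation concrete, here is a minimal scalar sketch: the prediction step is coulomb counting, and the update step blends in a (pseudo-)SOC measurement via the Kalman gain. This is a linear toy, not the paper's Modified EKF — a full EKF would additionally linearize a nonlinear open-circuit-voltage model — and all numeric values are hypothetical.

```python
def kf_soc_step(soc, p, current_a, dt_s, capacity_ah, soc_meas, q=1e-5, r=1e-2):
    """One predict/update cycle of a scalar Kalman filter for SOC.
    q: process noise variance, r: measurement noise variance."""
    # Predict: coulomb counting removes the discharged charge fraction
    soc_pred = soc - current_a * dt_s / (3600.0 * capacity_ah)
    p_pred = p + q
    # Update: Kalman gain weights the measurement against the prediction
    k = p_pred / (p_pred + r)
    soc_new = soc_pred + k * (soc_meas - soc_pred)
    p_new = (1 - k) * p_pred
    return soc_new, p_new

# Hypothetical step: 2 A discharge for 36 s from a 2 Ah cell at 80% SOC,
# with a noisy measurement suggesting 78%
soc_est, p_est = kf_soc_step(soc=0.8, p=1e-3, current_a=2.0, dt_s=36.0,
                             capacity_ah=2.0, soc_meas=0.78)
```

The correction pulls the coulomb-counting prediction (0.79) slightly toward the measurement, and the error covariance shrinks after the update.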
Keywords: Li-ion battery, battery management system, electric vehicles, Kalman Filter, SoC-Estimation
DOI: 10.3233/IDT-240735
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Kumar, Anupam | Ahmad, Faiyaz | Alam, Bashir
Article Type: Research Article
Abstract: Inspired by the fundamentals of biological evolution, bio-inspired algorithms are becoming increasingly popular for developing robust optimization techniques. Unlike gradient descent methods, these metaheuristic algorithms are computationally more efficient and excel at handling higher-order, multi-dimensional, and non-linear problems. OBJECTIVES: To survey hybrid bio-inspired algorithms in the domain of medical imaging and the challenges of hybrid bio-inspired feature selection techniques. METHOD: The primary research was conducted using three major indexing databases: Scopus, Web of Science, and Google Scholar. RESULT: The search yielded 198 articles; after removing 103 duplicates, 95 articles remained per the criteria, and 41 articles were finally selected for the study. CONCLUSION: We recommend further research on bio-inspired feature selection in diagnostic imaging and clustering. Additionally, there is a need to investigate Deep Learning hybrid models that integrate bio-inspired algorithms, combining the strengths of each model to enhance the overall hybrid.
Keywords: Bio-inspired optimization, feature selection, metaheuristics, literature review, hybrid algorithms
DOI: 10.3233/IDT-241023
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: Zhang, Zijian
Article Type: Research Article
Abstract: The intelligent city is a product of the deep integration of information technology, industrialization, and urbanization, and it contains a large number of intelligent mechanical products. Users widely evaluate the application characteristics of these products, and selecting mechanical products based on user evaluations has become a trend. Personalized mechanical product recommendation based on user evaluations is now widely used; however, because evaluation data are sparse, recommendation accuracy needs improvement. This paper analyzes the principle of matrix decomposition in depth to provide useful ideas for solving this problem. A bias-weight hybrid recommendation model of user preferences and rating-object characteristics is proposed, and the corresponding hybrid recommendation algorithm is designed. First, the sparse data matrix is supplemented with estimates obtained using the matrix decomposition principle. Second, according to the characteristics of users and ratings, initial positions are set based on the statistical distribution of high-performance computing data, and bias weights are set by incorporating each feature. Finally, the nonlinear learning ability of deep neural networks is used to enhance classification effectiveness. Practice has proved that the constructed model is reasonable, the designed algorithm converges quickly, recommendation accuracy improves by about 10%, and the model better alleviates the problem of sparse rating data. Its practical application is simple and convenient, giving it good application value.
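The first step described here — filling a sparse rating matrix with estimates from matrix decomposition — is commonly done with SGD-trained latent factors. The following is a generic sketch of that idea on a tiny hypothetical rating matrix (0 marks a missing rating), not the paper's bias-weight model.

```python
import numpy as np

def factorize(ratings, k=2, lr=0.01, reg=0.02, epochs=2000, seed=0):
    """SGD matrix factorization: approximate R ~= U @ V.T on observed entries
    (entries equal to 0 are treated as missing), with L2 regularization."""
    rng = np.random.default_rng(seed)
    n_users, n_items = ratings.shape
    u = rng.normal(0, 0.1, (n_users, k))
    v = rng.normal(0, 0.1, (n_items, k))
    observed = [(i, j) for i in range(n_users) for j in range(n_items)
                if ratings[i, j] > 0]
    for _ in range(epochs):
        for i, j in observed:
            err = ratings[i, j] - u[i] @ v[j]
            u[i] += lr * (err * v[j] - reg * u[i])
            v[j] += lr * (err * u[i] - reg * v[j])
    return u @ v.T  # dense matrix: observed entries fitted, missing ones estimated

# Hypothetical sparse user-item ratings on a 1-5 scale
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [0, 1, 5]], dtype=float)
pred = factorize(R)
```

The dense `pred` matrix supplies estimates for the zero (missing) cells, which is exactly the "supplement the sparse data matrix" step before the downstream bias weighting and neural classification.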
Keywords: Deep neural networks, high performance computing, sparsity, bias weight, intelligent city, fast setting, hybrid recommendation
DOI: 10.3233/IDT-240529
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-10, 2024
Authors: Chen, Junxiu | Xie, Weican
Article Type: Research Article
Abstract: For traffic management entities, the ability to forecast traffic patterns is crucial to their suite of advanced decision-making solutions. The inherent unpredictability of network traffic makes it challenging to develop a robust predictive model. For this reason, this work leverages a spatiotemporal graph transformer equipped with an array of specialized experts to ensure more reliable and agile outcomes. In this method, the Louvain algorithm, alongside a temporal segmentation approach, partitions the overarching spatial graph structure of the traffic network into a series of localized spatio-temporal subgraphs. Multiple expert models are then obtained by pre-training on each subgraph's data using a spatio-temporal synchronous graph transformer. Finally, the expert models are fused through fine-tuning to obtain the final predicted value, which ensures the reliability of the forecasts while reducing computational time, demonstrating superior predictive capability compared to other state-of-the-art models. Results from simulation experiments on real datasets from PeMS validate its enhanced performance metrics.
Keywords: Traffic flow prediction, intelligent decision technologies, louvain algorithm, expert models, fine-tuning
DOI: 10.3233/IDT-240690
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: Zhang, Lan | Luo, Yanling | Zhao, Yue
Article Type: Research Article
Abstract: The rapid development of mobile communication technology not only brings great convenience to users but also brings the risk of user data privacy leakage. Owing to the broadcast nature of mobile communication transmission, open wireless interfaces have become a security vulnerability for mobile devices such as mobile phones, which can easily be eavesdropped upon. This article studies a data privacy protection algorithm suitable for mobile communication in cellular networks, based on the anonymity mechanisms of blockchain. It first analyzes the overall framework and anonymity technology of blockchain data from the perspectives of privacy and queryability. Then, based on blockchain technology, consistency mechanisms, privacy control, and access rights management are designed. Finally, mobile communication data privacy protection algorithms are designed and implemented through data encryption and verification. Transaction latency under different task volumes and throughput at different nodes were analyzed to verify the reliability of several anonymity mechanisms for cellular mobile communication data privacy protection in blockchain. The experimental results showed that the reliability of CryptoNote and the Zerocoin and Zerocash protocol mechanisms across the specified nodes ranged between 92% and 99%. The article uses blockchain technology to distribute mobile communication data across nodes and achieve decentralized data transmission, thus protecting the privacy of wireless network communication data. The reliability analysis of blockchain-based anonymity mechanisms in mobile communication nodes shows that this method has definite research value.
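The "data encryption and verification" step of any blockchain-based scheme rests on one primitive worth showing concretely: each block's hash commits to its content and to the previous block's hash, so tampering anywhere breaks verification downstream. This is a generic hash-chain sketch with hypothetical record names, not the paper's protocol.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Chain a record to its predecessor by hashing its content plus prev hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify_chain(blocks):
    """Valid iff every stored hash matches a recomputation and links are intact."""
    for i, b in enumerate(blocks):
        payload = json.dumps({"data": b["data"], "prev": b["prev"]}, sort_keys=True)
        if b["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        if i > 0 and b["prev"] != blocks[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("encrypted-call-record", chain[0]["hash"]))
```

Verification passes on the intact chain but fails if any earlier block's data is altered, which is the integrity property the article's decentralized transmission relies on.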
Keywords: Cellular networks, mobile communications, data privacy, blockchain technology, privacy protection, byzantine algorithm
DOI: 10.3233/IDT-230233
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Kang, Guang | Guo, Zhe
Article Type: Research Article
Abstract: With the globalization of the economy, the challenges of financial risk management continue to grow, and current traditional algorithms are often limited by weak search capability and poor diversity maintenance, making financial risks difficult to predict and manage. Therefore, a multi-population multi-objective root system growth algorithm (RSGA) is proposed. The algorithm uses plant root tip position and growth state as heuristic information to guide the search process. It also introduces an adaptive search space to adjust parameters, multi-swarm strategies to enhance exploration ability, and multi-objective optimization to balance the weights among objectives. The experimental results showed that on the single-objective optimization function, the mean value of the RSGA model was 5.80E-20, the standard deviation 1.29E-19, the best position 2.90E-26, and the worst position 2.89E-19. On the bi-objective optimization function, the average IGD of the RSGA model was 2.28E-3; on the three-objective optimization function, its average IGD and HV were 1.05E-1 and 6.53E-1 respectively. In financial risk prediction, the best risk of the RSGA model for small-scale investment was 0.1961, the worst risk 0.2483, and the average risk 0.2236; for medium-scale investment the best risk was 0.3057, the worst 0.3387, and the average 0.3194; for large-scale investment the best risk was 0.191, the worst 1.8795, with a standard deviation of 0.3769. Under the MV portfolio, the maximum HV value of the RSGA model was 1.13E-1, the minimum 4.20E-1, the average 8.74E-1, and the standard deviation 5.46E-1. Under the RRC portfolio, the maximum HV was 1.49E-0, the minimum 3.63E-1, the average 8.17E-1, and the standard deviation 3.95E-1.
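The hypervolume (HV) indicator quoted throughout this abstract measures the objective-space region a Pareto front dominates relative to a reference point. For two minimization objectives it reduces to a sum of rectangle areas, sketched below on a hypothetical three-point front (this is the standard indicator, not the paper's evaluation code).

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front relative to reference
    point ref: total area dominated by the front and bounded by ref."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):  # sweep in ascending f1
        if f2 < prev_f2:          # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Hypothetical front {(1,3), (2,2), (3,1)} with reference point (4,4)
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))  # → 6.0
```

A larger HV means the front covers more of the trade-off space, which is why the abstract reports HV maxima, minima, and averages per portfolio.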
Keywords: Root system growth algorithm, adaptive search spaces, multi-swarm strategies, financial risk control, hypervolume indices
DOI: 10.3233/IDT-240687
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-20, 2024
Authors: Ozturk, Bilal A. | Namiq, Heba Emad | Rasool, Hussein Ali | Rane, Milind | Waghmare, Gayatri | Nangare, Akshata | Salem, Mahmoud Jamil
Article Type: Research Article
Abstract: Early detection and diagnosis are critical for effectively treating diabetic retinopathy (DR), a severe vision-threatening complication of diabetes. We introduce a technique that employs deep learning algorithms for the automatic identification of DR. The significance of the proposed model lies in its capacity to diagnose DR rapidly and accurately, enabling prompt medical intervention to prevent visual impairment. We implemented multiple pre-processing techniques, including top-hat filtering, median filtering, CLAHE, and Gaussian filtering. These techniques notably improved the accuracy of diabetic retinopathy detection, making a contribution to the field of medical image analysis. The performance evaluation conducted on the APTOS 2019 dataset yielded results for accuracy, sensitivity, and specificity, highlighting the efficiency of our technique in real-world applications of DR detection. For our experimentation we utilized the APTOS 2019 dataset, consisting of 1299 image files for DR training and 279 image files for DR testing.
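Of the pre-processing steps listed, median filtering is the simplest to show end to end: each pixel is replaced by the median of its neighborhood, suppressing salt-and-pepper noise before classification. This is a naive illustrative implementation on a hypothetical tiny image, not the paper's pipeline (which would typically use an optimized library routine).

```python
import numpy as np

def median_filter(img, k=3):
    """Naive k x k median filter; border pixels are left unchanged."""
    out = img.copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out

# Hypothetical 5x5 image with one salt-noise spike at the center
img = np.zeros((5, 5))
img[2, 2] = 255.0
smoothed = median_filter(img)  # the isolated spike is removed
```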
Keywords: Diabetic retinopathy, CNN, deep learning, VGG 16, inception V3
DOI: 10.3233/IDT-241037
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: Vyas, Vijaykumar | Raiyani, Ashwin
Article Type: Research Article
Abstract: This work presents a new approach, the Resource Allocation Weighted Random Walk (RA-WRW) algorithm, based on IOTA Distributed Ledger Technology (DLT), for optimizing transaction processing within the IOTA network. The objectives of improved execution time, better CPU usage, enhanced network efficiency, and better scalability are met while maintaining stringent security measures. The Python-based algorithm considers node resources and transaction weights when selecting the best tips. Authenticating the sender with private keys ensures data integrity, while verification procedures confirm the authenticity of the tips and the validity of transactions. Implementing this algorithm greatly improves the efficiency of IOTA network transaction processing. The experiment is run on a commonly used dataset available on Kaggle with some system-specific configurations, and shows significant improvement in execution time, CPU usage, network efficiency, and scalability. The selected tips are authentic and consistent, proving the efficacy of the algorithm. The proposed RA-WRW algorithm efficiently fuses resource allocation with weighted random walk strategies to improve the security, efficiency, and scalability of distributed ledger transactions, a substantial step toward better transaction processing across the IOTA network and toward real-world applications of this newer approach.
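The core of any weighted random walk tip selection is sampling proportionally to a per-tip score; the abstract says RA-WRW combines transaction weights with node resources. The sketch below shows one plausible form of that sampling step with hypothetical tip names and scores — it is not the RA-WRW implementation, whose exact scoring is defined in the paper.

```python
import random
from collections import Counter

def weighted_tip_choice(tips, weights, resources, rng):
    """Pick a tip with probability proportional to weight x resource score
    (roulette-wheel sampling over cumulative scores)."""
    scores = [w * r for w, r in zip(weights, resources)]
    pick = rng.uniform(0, sum(scores))
    acc = 0.0
    for tip, s in zip(tips, scores):
        acc += s
        if pick <= acc:
            return tip
    return tips[-1]

rng = random.Random(42)
tips = ["tip_a", "tip_b", "tip_c"]
weights = [1.0, 1.0, 8.0]     # hypothetical cumulative transaction weights
resources = [1.0, 1.0, 1.0]   # hypothetical node resource scores
counts = Counter(weighted_tip_choice(tips, weights, resources, rng)
                 for _ in range(1000))
```

Over many draws the heavily weighted tip dominates the selection counts, which is the bias toward well-supported, well-resourced tips the approach relies on.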
Keywords: Datasets, transaction weights, random walk, tip selection, private key
DOI: 10.3233/IDT-240981
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Zhang, Lin
Article Type: Research Article
Abstract: The recognition of sports actions is an important research subject that can help athletes improve their own level. To improve the accuracy of multi-modal action recognition, this study introduces a multi-head attention mechanism based on the Transformer module, fuses multi-modal data, and constructs a multi-stream structured object relationship inference network. Based on the PointNet++ network and five different data fusion frameworks, a motion recognition model integrating RGB data and 3D skeleton point clouds is constructed. The results showed that the Top-1 accuracy of the multi-stream structured object relationship inference network was 42.5% and 42.7% respectively, better than other algorithms. The accuracy of the multi-modal fusion model improved by 15.6% and 5.1% over the single-modal versions and by 5.4% and 2.6% over the dual-modal versions, demonstrating its superiority in action recognition and showing that fusing multi-modal data provides richer information and thus higher recognition accuracy. The accuracy of the action recognition model combining RGB data and 3D skeleton point clouds reached 84.3%, 87.5%, 90.2%, 90.6%, and 91.2% under different strategy combinations, effectively compensating for the missing information in 3D skeleton point clouds and significantly improving recognition accuracy. With a small amount of data, the Top-1 accuracy of the proposed multi-stream network remained superior to other algorithms, showing its advantage in complex action recognition tasks. This study can meet the needs of motion recognition in different scenarios and has reference value.
Keywords: Multi-modal data, action recognition, transformer, RGB data, 3D skeleton point cloud
DOI: 10.3233/IDT-230372
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Ma, Qing
Article Type: Research Article
Abstract: In response to the difficulty of integrating multimodal data and the insufficient generalization ability of models in traditional cross-modal knowledge transfer, this article uses the Transformer model to explore cross-modal knowledge transfer in the new generation learning space. First, the article analyzes how data and models are handled in cross-modal knowledge transfer and explores the application of Transformer models in the learning space. The model uses natural language processing to represent and extract textual features, Mel Frequency Cepstral Coefficients (MFCCs) to represent and extract audio features, and Faster R-CNN (Faster Region-based Convolutional Neural Network) to represent and extract image features. The article also discusses the implementation of the Transformer model's functionality. The experiments used data from four datasets, including Quora Question Pairs, to test the model's cross-modal knowledge transfer through intelligent question answering and task analysis. In single-type data testing, the accuracy and recall of the proposed model were better than the comparison models on all three data types, with the highest accuracy and recall on the test set reaching 91% and 93% respectively. In the most challenging multimodal intelligent question answering test, the speech-image question answering method achieved 89% accuracy on open questions, indicating good multimodal data fusion ability. In an analysis of six error-prone homework knowledge points on images with text annotations, the induction accuracy reached 85%, indicating strong generalization ability. The experimental results show that the Transformer model has good cross-modal knowledge transfer performance, providing a reference for subsequent research on cross-modal knowledge transfer in the new generation learning space.
Keywords: Cross-modal knowledge transfer, transformer model, new generation learning space, Mel frequency cepstral coefficients, faster R-CNN
DOI: 10.3233/IDT-240169
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Umale, Bhagyashree Ramesh | Sharma, Pooja
Article Type: Research Article
Abstract: The contemporary fog computing paradigm evolved from traditional cloud computing to provide storage and computation resources at the network's edge. Fog-enabled vehicular processing is anticipated to be a fundamental component that may expedite a variety of applications, notably crowd-sensing, when applied to vehicular networks. As a result, the confidentiality and safety of vehicles participating in the crowd-sensing platform are now recognized as critical challenges for smart policing and cyber defence. Furthermore, sophisticated access control is essential to meet the requirements of crowd-sensing data users. This work presents a novel secured fog-based protocol for vehicular crowd sensing utilizing an attribute-based encryption model. The proposed framework incorporates a two-layered fog architecture termed fog layer A and fog layer B. Fog layer A includes data owners (vehicles) and fog nodes, while fog layer B consists of fog nodes integrated with Road Side Units (RSUs). The Transport Triggered Architecture (TTA) governs the retrieval of regular or specialized data as per the request of data users. The data collection procedure varies based on the type of data being processed. Regular data is gathered from data owners and aggregated using the Multiple Kernel Induced in Kernel Least Mean Square (MKI-KLMS) technique, which employs kernel least mean square and hierarchical fractional bidirectional least mean square methods to handle redundancy in data aggregation. Subsequently, the aggregated data is encrypted using the Proposed Fusion Key for Encryption (PFKE) technique, which employs a fusion key for message encryption. Additionally, an enhanced Blowfish algorithm is applied as an extra layer of encryption on the already encrypted data. The encrypted data is then transmitted to the TTA. The decryption process utilizes the same key to retrieve the original message. The Modified Attribute-based Encryption Model (MAEM) scheme achieves the highest efficiency of 90.9476.
Keywords: Data encryption, decryption process, efficiency, protocol, redundancy, Blowfish algorithm
DOI: 10.3233/IDT-240439
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-23, 2024
Authors: Chen, Zhenna
Article Type: Research Article
Abstract: In high-performance computing (HPC) systems, energy and power considerations play an increasingly important role. This work aims to ensure the implementation of China's green and ecological concepts, respond to China's strategy of building an environment-friendly and resource-saving society, and actively promote the construction of sustainable development campuses. First, the theoretical basis of sustainable energy campus architecture is described. Next, a teaching feedback model under HPC is constructed. Finally, using evaluations of students' task completion processes and task works as measurement indexes, the corresponding data is collected, comprehensively evaluated, and analyzed to verify the effectiveness of the model. The analysis results indicate that most students are able to complete tasks within two hours; however, their proficiency with multimedia technology is inadequate, and they lack initiative in the learning process. There is a correlation between the overall evaluation of task performance and the students' level of understanding of the tasks. By implementing the teaching feedback model, students' learning enthusiasm and the quality of their work improved, providing effective educational support for promoting sustainable development on campus. Overall, students' awareness of using computers and other multimedia technologies is not strong, and their learning process is relatively closed, with insufficient enthusiasm and initiative. The model can achieve the acquisition, integration, and statistical analysis of teacher feedback information. This work hopes to use this learning and feedback mode in practical teaching to address specific problems in computer multimedia courses.
Keywords: High-performance computing (HPC), sustainable energy, multimedia technology, teaching feedback model, information gathering
DOI: 10.3233/IDT-240592
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: S, Haseena Beegum | R, Manju
Article Type: Research Article
Abstract: The electrocardiogram (ECG) signal plays an important role in monitoring and diagnosing patients who suffer from cardiovascular diseases. Numerous conventional techniques designed for cardiovascular disease classification face challenges regarding classification accuracy and struggle with automatic monitoring and classification. Therefore, this work aspires to design a robust approach that can precisely classify the ECG even in the presence of noise. Accordingly, this research introduces a heartbeat classification scheme utilizing an optimization-based deep learning method. Here, the optimization algorithm, called the Serial Exponential Hunger Games Search Algorithm (SExpHGS), is newly designed by integrating the serial exponential weighted moving average concept into the Hunger Games Search (HGS) approach to train the deep learning model. Initially, pre-processing is performed using a median filter, and subsequently, wave components are detected using a resolution wavelet-based scheme. Ultimately, the SExpHGS-based Deep Belief Network (SExpHGS-based DBN) recognizes the ECG conditions of individuals. The techniques are analyzed using the ECG Lead 2 dataset from PhysioNet, and the analysis is carried out based on the performance parameters of accuracy, specificity, and sensitivity, attaining values of 0.954, 0.965, and 0.938, respectively.
Keywords: Cardiovascular disease, ECG signal, pre-processing, deep learning, optimization algorithm
DOI: 10.3233/IDT-230680
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-21, 2024
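The pre-processing step in the abstract above uses a median filter. As a hedged illustration (the window size and edge handling are assumptions for this sketch, not details taken from the paper), a 1-D sliding-window median filter can be written in plain Python:

```python
def median_filter(signal, window=5):
    """Sliding-window median filter for a 1-D signal.

    Edge samples are handled by clamping the window to the signal
    boundaries (one common convention; the paper does not specify
    its edge handling). For even-sized windows the upper median
    of the sorted window is returned.
    """
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        win = sorted(signal[lo:hi])
        out.append(win[len(win) // 2])
    return out

# A single spike at index 3 is suppressed while the baseline is kept.
noisy = [0.0, 0.1, 0.0, 5.0, 0.1, 0.0, 0.1]
smoothed = median_filter(noisy, window=3)
```

Because the median is a rank statistic rather than an average, an isolated impulse never reaches the output, which is why median filtering is a common first stage for ECG artifact removal.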
Authors: Katkar, Vaishali Abhijeet | Goswami, Prerna | Yashwante, Meghna Ranjit
Article Type: Research Article
Abstract: When approaching a large-scale performance, the choice, size, and administration of an Energy Storage System (ESS) for an Electric Vehicle (EV) are crucial. As the peak-to-average power demand ratio is relatively large, particularly for an urban ride that is frequently marked by rapid deceleration and acceleration, the complementary characteristics of the battery and Ultra-Capacitor (UC) render this arrangement a viable Hybrid Energy Storage System (HESS) for EVs. The enhanced dynamic responsiveness, increased miles per charge, and extended battery life provided by the HESS increase the EV's effectiveness. The primary objective of the proposed study is to create a smart Energy Management Strategy (EMS) for EVs. A battery package and a properly sized UC together give the required high power along with energy density. This research proposes a CNN-based power management technique to aid in efficient EMS. Additionally, by adjusting the Convolutional Neural Network (CNN) classifier's weights through Improved Honey Badger Optimization (IHBO), an adaptive variant of the standard HBO algorithm, the performance of the classifier is improved. In MATLAB, the suggested CNN-based model for battery ESS energy management is simulated, and experimental assessment and analysis are done in terms of converter current and battery SoC. By contrasting the suggested approach against several standardized models, the performance analysis of the proposed work is assessed to validate its performance.
Keywords: Electric vehicle, energy management, convolutional neural network, IHBO, battery, UC
DOI: 10.3233/IDT-240479
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-19, 2024
Authors: Dalal, Kusum
Article Type: Research Article
Abstract: Mobile Ad hoc Networks (MANETs) pose significant routing challenges due to their decentralized and dynamic nature. Node mobility results in frequent changes in network topology, leading to unstable connectivity and link quality. Traditional routing protocols designed for static networks are inadequate for MANETs. To deal with these issues, a secure routing approach is proposed using the Red Panda-Lyrebird Optimization (RePLO) algorithm, which combines the advantages of the Red Panda Optimization (RPO) and Lyrebird Optimization Algorithm (LOA). The proposed approach consists of five steps: (i) configuring the system model, (ii) developing the energy model, (iii) creating the mobility model, (iv) selecting cluster heads using the RePLO algorithm, and (v) routing using the RePLO algorithm. The RePLO algorithm optimizes cluster head selection and routing while considering specific constraints such as delay, distance, energy, and security for Cluster Head (CH) selection, and link quality and enhanced trust for routing optimization. The effectiveness of the proposed approach is evaluated using various metrics to demonstrate its efficiency in MANET routing. By integrating multiple optimization techniques and considering critical constraints, the RePLO algorithm offers a systematic and secure solution for MANET routing. The evaluation results confirm the efficacy of the proposed approach in improving network performance, reliability, and security. Overall, the RePLO algorithm presents a promising approach to tackling the routing issues inherent in MANETs, paving the way for more robust and efficient communication in mobile ad hoc networks.
Keywords: MANET, Routing protocol, Cluster head selection, RePLO algorithm, ABP mechanism
DOI: 10.3233/IDT-240739
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-26, 2024
Authors: Zhang, Guangtao
Article Type: Research Article
Abstract: In order to solve the problem of low computing efficiency in big data analysis and model construction, this paper explores the big data analysis programming model and DAG (Directed Acyclic Graph) concepts, and on this basis adopts Octopus, a distributed matrix computing system, for big data analysis. Octopus is a universal matrix programming framework that provides a programming model based on matrix operations, which can conveniently analyze and process large-scale data. Using Octopus, users can draw functions and data from multiple platforms and operate through a unified matrix operation interface. The distributed matrix representation and storage layer can design data storage formats for distributed file systems. Each computing platform in OctMatrix provides its own matrix library, and a matrix library written in the R language is provided for users. SymbolMatrix provides a matrix interface consistent with OctMatrix, but additionally retains the computation flow diagram of matrix operations and supports logical and physical optimization of the flow diagram as a DAG. For the DAG computational flow graph generated by SymbolMatrix, this paper divides the optimization into two parts: logical optimization and physical optimization. The paper adopts a distributed file system based on row matrices, obtaining the corresponding platform matrix by reading the row-matrix-based documents. In the evaluation of system performance, the distributed matrix computing system showed high computing efficiency, with average CPU (central processing unit) usage reaching 70%. The system can make full use of computing resources and realize efficient parallel computing.
Keywords: Big data analysis, distributed matrix computing system, data management, matrix segmentation, historical data
DOI: 10.3233/IDT-230309
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
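The DAG computational flow graph described above has to be executed in dependency order before any logical or physical optimization can be applied. A minimal sketch of such scheduling, using Kahn's topological sort over hypothetical matrix-operation names (not the actual Octopus/SymbolMatrix API), might look like:

```python
from collections import deque

def topo_order(nodes, edges):
    """Kahn's algorithm: order a DAG of matrix operations so that
    every op is scheduled only after all of its inputs."""
    indeg = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for src, dst in edges:
        adj[src].append(dst)
        indeg[dst] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in adj[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("graph has a cycle; not a DAG")
    return order

# Hypothetical flow graph: two loads feed a multiply, then a transpose.
ops = ["loadA", "loadB", "matmul", "transpose"]
deps = [("loadA", "matmul"), ("loadB", "matmul"), ("matmul", "transpose")]
order = topo_order(ops, deps)
```

A scheduler like this is also the natural place to hang logical optimizations, e.g. pruning nodes whose outputs are never consumed before execution begins.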
Authors: Wang, Yunjun | Ren, Zhiyuan
Article Type: Research Article
Abstract: Traditional standing long jump measurement relies only on visual reading and manual recording, which makes the recorded data subjective and arbitrary and makes it difficult to ensure the accuracy and efficiency of long jump scoring. To address the shortcomings of traditional measurement methods and avoid the interference of subjective bias on results, the research aims to provide a more accurate, automated, and objective measurement method, offering new technological means for the measurement of related sports projects. In contrast to human motion recognition technology, the study introduces image recognition technology into the domain of standing long jump testing. This technology calculates distance through image processing and perspective transformation algorithms, thereby realizing a distance measurement function. Specifically, this includes using wavelet decomposition coefficients and morphological denoising to improve the performance of wavelet threshold denoising, achieving feature extraction of image edge information, adding vibration sensors and CNN algorithms to adjust the angle of offset images, and designing a multi-step long jump distance measurement system. The combination of wavelet decomposition coefficients and morphological denoising used in the study achieved a mean square error of 50.8369 and a signal-to-noise ratio of 24.1126, with a maximum accuracy of 96.23%, significantly higher than the two comparison methods. For different feature information recognition tasks, the ROC curve area of the proposed algorithm model reached over 85%, with deviations across the dataset all below 0.5. The minimum absolute and relative errors between the measurement results of this method and the actual test results were 0.01 cm and 2%, respectively. The overall deviation of the system was 0.35, indicating high stability. The proposed long jump measurement system has the potential to enhance the efficiency of standing long jump testing while forming a complementary mode with traditional distance measurement systems, collectively serving the intelligent instrument market and providing technical means for the development of sports teaching projects.
Keywords: Image denoising, standing long jump, automatic distance measurement, CNN, binarization
DOI: 10.3233/IDT-230733
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
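Wavelet threshold denoising, which the abstract above combines with morphological filtering, can be illustrated with a one-level Haar decomposition plus soft thresholding of the detail coefficients. This is only a sketch: the paper's deeper decomposition levels and morphological steps are omitted, and an even-length signal is assumed.

```python
import math

def haar_denoise(signal, threshold):
    """One-level Haar wavelet soft-threshold denoising (illustrative).

    Forward transform: a = (x0 + x1)/sqrt(2), d = (x0 - x1)/sqrt(2).
    Detail coefficients are shrunk toward zero (soft thresholding),
    then the signal is reconstructed by the inverse transform.
    """
    s2 = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s2 for i in range(0, len(signal), 2)]

    def soft(x):
        # soft threshold: shrink magnitude by `threshold`, keep sign
        return math.copysign(max(abs(x) - threshold, 0.0), x)

    detail = [soft(d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s2)
        out.append((a - d) / s2)
    return out
```

With `threshold=0.0` the function reduces to a perfect round-trip of the Haar transform, which is a handy sanity check; a positive threshold removes small, noise-like detail coefficients while keeping the coarse shape.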
Authors: Jain, Aryamaan | Mahawar, Priyanka | Pantola, Deepika | Gupta, Madhuri | Singh, Prabhishek | Diwakar, Manoj
Article Type: Research Article
Abstract: Recent research suggests that by 2023, the production of data will exceed 300 exabytes per month, a figure surpassing human verbal communication by over 60 times. This exponential growth underscores the need for platforms to adapt in areas such as data analysis and storage. Efficient data organization is crucial, considering the growing scarcity of time and space resources. While manual sorting may suffice for small datasets in smaller organizations, large corporations dealing with millions or billions of documents require advanced tools to streamline storage, sorting, and analysis processes. In response to this need, this research introduces a novel architecture called Slick, designed to enhance sorting, filtering, organization, and analysis capabilities for any storage service. The proposed architecture incorporates two innovative techniques, Degree of Importance (DOI) and amortized clustering, along with established natural language processing methods such as topic modelling, summarization, and tonal analysis. Additionally, a new methodology for keyword extraction and document grouping is presented, resulting in significantly improved response times. Slick offers a searchable platform where users can utilize succinct keywords, lengthy text passages, or complete documents to access the information they seek. Experimental findings demonstrate a nearly 46 percent reduction in average response time compared to existing methods in the literature.
Keywords: Keyword extraction, clustering, document retrieval engine, tonal analysis, summarization
DOI: 10.3233/IDT-230682
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
Authors: Misra, Mohit | Sharma, Rohit | Tiwari, Shailesh
Article Type: Research Article
Abstract: The poor quality of asphalt roads has a significant impact on driver safety, damages the mechanical structure of vehicles, increases fuel consumption, annoys passengers, and is sometimes responsible for accidents. Poor road quality manifests as a rough surface and the presence of potholes. Potholes can be a main cause of accidents, increased fuel consumption, and passenger discomfort, and they vary in size, radiance effect, shadow, and scale. Hence, the detection of potholes in asphalt roads can be considered a complex task and one of the serious issues in asphalt road maintenance. This work focuses on detecting potholes in asphalt roads and proposes a pothole detection model for accurate detection. The effectiveness of the proposed pothole detection model is tested over a set of real-world image datasets. In this study, the asphalt roads of the Delhi-NCR region were chosen, and real-world images of these roads were collected through a smart camera. The final road image dataset consists of a total of 1150 images, including 860 pothole images and 290 images without potholes. Further, a deep belief network is integrated into the proposed model to detect pothole images as a classification task, classifying images as pothole or no pothole. The experimental results of the proposed detection model are evaluated using the accuracy, precision, recall, F1-Score, and AUC parameters, and compared with ANN, SVM, VGG16, VGG19, and InceptionV3 techniques. The simulation results showed that the proposed detection model achieves a 93.04% accuracy rate, 94.30% recall rate, 96.31% precision rate, and 96.92% F1-Score, outperforming the other techniques.
Keywords: Asphalt road, potholes, deep belief network, detection model
DOI: 10.3233/IDT-240127
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
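The accuracy, precision, recall, and F1-Score figures reported above follow the standard confusion-matrix definitions for a binary classifier, which can be sketched as:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels
    (here 1 = pothole, 0 = no pothole)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Toy labels for illustration; the paper's dataset is not reproduced here.
acc, prec, rec, f1 = binary_metrics([1, 1, 0, 0], [1, 0, 0, 0])
```

On a class-imbalanced dataset such as this one (860 pothole vs. 290 non-pothole images), precision and recall are more informative than accuracy alone, which is why the paper reports all four.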
Authors: Li, Zhiguang | Li, Baitao | Jahng, Surng Gahb | Jung, Changyong
Article Type: Research Article
Abstract: In order to improve the tracking effect of prevention and control, this paper combines three-dimensional face recognition technology and posture recognition methods to build a prevention and control tracking system, and designs a single-stage human target detection and posture estimation network based on thermal infrared images. By using infrared recognition methods, the face and human body shape can be recognized directly, overcoming the problem of traditional visual recognition methods being easily occluded. The network can simultaneously complete the two tasks of human target detection and pose estimation in a multi-task manner. Moreover, this paper uses a knowledge distillation strategy to train a lightweight model, further reducing the number of model parameters and improving inference speed. In addition, this paper uses the single-stage human target detection and posture estimation network to evaluate the prevention and control tracking effect. The prevention and control tracking test shows that the proposed system based on 3D face recognition can effectively improve the prevention and control tracking effect.
Keywords: Three-dimensional, face recognition, prevention and control, tracking
DOI: 10.3233/IDT-240104
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: Huang, Guomin
Article Type: Research Article
Abstract: With the maturity of digital technology, the effectiveness of information transmission has received extensive attention. Variable font design is regarded as an extension of graphic design in digital media art, and this paper studies variable font design from the perspective of information communication design. From the perspective of digital technology, based on the theory of variable information communication design and combined with specific font design methods, the paper discusses the perception, cognitive methods, and aesthetic laws of visual and psychological aesthetics in variable font design, providing theoretical support for information communication design activities using variable fonts. Based on artificial intelligence technology, a new interactive art expression method is proposed. By analyzing the main expression scenarios of interactive art, namely user information communication, interactive art push, interactive art promotion, and personal service scenarios, a scenario application basis for interactive expression is established. Time complexity, space complexity, and resource complexity are calculated, and according to the results, interaction design is carried out from three aspects: visual elements, multimedia elements, and human-computer interaction elements, with the realized interactions integrated into the network topology structure. The proposed model consists mainly of visual design, basic design elements, basic visual design, complete design, and solution generation, realizing the interactive artistic expression of multimedia elements in display and control. The experimental results show that the visual communication expression method based on artificial intelligence technology increases interactive element extraction speed by 30% and offers greater flexibility.
Keywords: Visual communication, artificial intelligence, digital technology, design content
DOI: 10.3233/IDT-240105
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
Authors: Huang, Haifeng
Article Type: Research Article
Abstract: With the rapid development of computer technology, parameter adaptive control methods are becoming more widely used in nonlinear systems. However, there are still many problems with synchronous controllers involving multiple inputs and a single output, uncertainty, and dynamic characteristics. This paper analyzed a synchronization control strategy for uncoupled nonlinear systems based on parameter dynamic factors to adjust the performance of the synchronization controller, and briefly introduced the manifestations of chaotic motion. The characteristics and differences of continuous feedback control methods and transmission and transfer control methods were pointed out. Simple, effective, stable, and feasible synchronous control was analyzed using parameter-adaptive control theory. By analyzing the nonlinear relationships between various models at different orders, the fuzzy distribution of the second-order mean and their independent and uncorrelated matrices were obtained, and the corresponding law formulas were established to solve the functional expression between the corresponding state variables and the dynamic characteristics of the system. Error risk tests, computational complexity tests, synchronization performance score tests, and chaos system control effect score tests were carried out on the control algorithms of traditional chaos system synchronization methods and of methods based on parameter adaptive approaches. Parameter adaptive methods were found to effectively reduce the error risk of high-performance control algorithms for synchronization of the unified chaos system. The calculation process was simplified, with its complexity score reduced by 0.6. The application of parameter adaptive methods effectively improved the synchronization performance of the control algorithms, and their control effectiveness rating was improved. The experimental test results proved the effectiveness of the control algorithms, greatly enriching the field of modern control applications and driving the vigorous development of nonlinear dynamics research, thus making significant progress in chaos application research.
Keywords: High-performance control algorithms, unified chaos system, synchronous control, parametric adaptive methods
DOI: 10.3233/IDT-240178
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Zhou, Lingling | Li, Wanchun | Qiao, Yu | Zhang, Miaomiao | Qiu, Xiaoli
Article Type: Research Article
Abstract: The digital culture business benefits from quick technical updates, digital manufacturing, communication networks, and customized consumption, which promote new supply and consumption. The innovation and development of the digital cultural industry cannot be separated from the physical industry, whose efficient production underpins that innovative development. Therefore, to improve the production efficiency of the physical industry, this research proposes a temperature control model combining the whale optimization algorithm and a fuzzy proportional integral differential algorithm, building on traditional temperature control in the ceramic firing and wine making processes. Firstly, fuzzy control theory is used to improve the proportional integral differential algorithm, and a fuzzy proportional integral differential model with an adaptive adjustment function is established. On this basis, to quickly determine the optimal parameters of the proportional integral differential model, the whale optimization algorithm is introduced to further optimize its weight factors. It was verified that the temperature control model combining the whale optimization algorithm and the fuzzy proportional integral differential model achieved an adjustment time of 2.77 s, which was 1.11 s, 2.37 s, and 3.13 s shorter than the other three models, respectively. The constructed temperature control model can therefore effectively realize high-precision intelligent temperature control in industrial production, laying the cornerstone for the innovative development of the digital culture industry. The proposed model can improve the production efficiency of physical industries, promote digital transformation, and support the innovative development of digital cultural industries.
Keywords: Cultural innovation, PID, fuzzy control, whale optimization, digital culture industry
DOI: 10.3233/IDT-240028
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
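The proportional integral differential (PID) loop at the core of the model above can be sketched against a toy first-order thermal plant. The fuzzy gain adaptation and whale-optimization tuning described in the abstract are omitted here; the plant model, gains, and setpoint are illustrative assumptions, not values from the paper.

```python
def simulate_pid(kp, ki, kd, setpoint=100.0, steps=200, dt=0.1):
    """Discrete PID driving a toy first-order thermal plant:
    dT/dt = (u - (T - ambient)) / tau.

    Gains are fixed for this sketch; in the paper they would be
    adapted by fuzzy rules and pre-tuned by whale optimization.
    Actuator saturation is also ignored.
    """
    temp, ambient, tau = 20.0, 20.0, 5.0
    integral = 0.0
    prev_err = setpoint - temp
    for _ in range(steps):
        err = setpoint - temp
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv  # control effort
        temp += dt * (u - (temp - ambient)) / tau  # plant update
        prev_err = err
    return temp

# With modest gains the loop settles close to the 100-degree setpoint.
final_temp = simulate_pid(kp=2.0, ki=0.5, kd=0.1)
```

The integral term is what removes the steady-state offset: at equilibrium the proportional error is zero, so the accumulated integral alone supplies the heating power needed to hold the setpoint against ambient losses.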
Authors: Miao, Yaofeng | Zhou, Yuan
Article Type: Research Article
Abstract: Computers play a very important role in today's life. A large amount of personal information exists on computers, so it is very important to protect information security on computer networks. The development of information technology has had a huge impact on human society, and the information security of computer networks has attracted more and more attention. Aiming at the needs of computer network information management in the age of big data, this paper develops a dedicated computer network information protection system to deal with various threats and continuously improve the protection system. On this basis, the concept and advantages of the 5G network are first expounded, and the management technology of the 5G network is briefly analyzed. Then the information security problems of computer networks in the 5G era are considered, and effective solutions are proposed. Through research and calculation, the new computer network security system can improve network information security by 13.4%.
Keywords: Computer network, network information security, big data algorithm, computer network information security, orthogonal frequency
DOI: 10.3233/IDT-240179
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-12, 2024
Authors: Yu, Lin | Bai, Yujie
Article Type: Research Article
Abstract: The Network Security Monitoring System (NSMS) can use Big Data (BD) and the K-means DT (K-means with distance threshold) algorithm to automatically learn and identify abnormal patterns in the network, improving the accuracy of network threat detection. In this article, the KDD Cup 1999 and NSL-KDD datasets were selected for NSMS analysis. The data were preprocessed, and statistical information, time series information, and traffic distribution characteristics were extracted. The distance-threshold stage further classifies regular attacks, Remote-to-Local (R2L) attacks, and User-to-Root (U2R) privilege attacks. The experimental results show that the hybrid intrusion detection algorithm based on K-means DT achieves network attack detection accuracies of 99.2% and 98.9% on the KDD Cup 1999 and NSL-KDD datasets, respectively. Hybrid intrusion detection algorithms can effectively improve the accuracy of network intrusion detection (NID). The hybrid intrusion detection system proposed in this article performs well on different datasets and can effectively detect various types of network intrusion attacks, outperforming other algorithms. The NSMS designed in this article can cope with constantly changing network threats.
Keywords: Network security, monitoring systems, big data, K-means with distance threshold, network attacks
DOI: 10.3233/IDT-240185
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
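The "K-means with distance threshold" idea above can be read as: learn centroids of normal traffic, then flag any connection that lies farther than a threshold from every centroid. A simplified sketch under that reading follows; the feature vectors, initial centroids, and threshold are invented for illustration and are not from the paper.

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.dist(a, b)

def kmeans(points, centroids, iters=20):
    """Standard Lloyd iterations. Initial centroids are passed in
    explicitly so the sketch stays deterministic."""
    k = len(centroids)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centroids[i]))].append(p)
        centroids = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def flag_anomalies(points, centroids, threshold):
    """Distance-threshold step: a point farther than `threshold` from
    every learned centroid is treated as a potential attack."""
    return [p for p in points
            if min(dist(p, c) for c in centroids) > threshold]

# Learn "normal" traffic patterns (two toy feature clusters)...
normal = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
cents = kmeans(normal, [(0.0, 0.0), (5.0, 5.0)])
# ...then score new connections against them.
flagged = flag_anomalies([(0.1, 0.1), (20.0, 20.0)], cents, threshold=3.0)
```

Keeping the fitting and flagging phases separate mirrors the monitoring setting: centroids are learned offline from known-normal traffic, while the threshold check runs cheaply on live connections.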
Authors: Xue, Jing
Article Type: Research Article
Abstract: Art is a symbol of people's thoughts, and among the many forms of artistic expression, literature is the most direct, presenting art directly to people. Correctly understanding the language materials in literature is crucial for understanding literary works and realizing their artistic value. Therefore, in order to strengthen the understanding of Korean literature and analyze its core ideas, this article uses modern computer technology and an improved Term Frequency-Inverse Document Frequency (TF-IDF) algorithm to process the corpus of Korean literature, so as to quickly extract valuable textual information and facilitate reading and understanding. At the same time, a Korean literature corpus processing model was constructed based on deep learning algorithms. Built on Natural Language Processing (NLP) techniques, the model selects TF-IDF as the feature for calculating keyword feature weights and, by weighting the naive Bayesian algorithm, achieves the classification and processing of text data in the Korean literature corpus. The results of multiple experiments show that the classification accuracy of the model exceeds 97.7% and the classification recall reaches 94.2%, indicating that the model can effectively achieve corpus processing for Korean literature.
Keywords: Korean literature, art, expect, TF-IDF algorithm, NLP, computer system
DOI: 10.3233/IDT-230772
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Ding, Dawei
Article Type: Research Article
Abstract: In view of the low detection accuracy, long detection time, and lack of real-time fault monitoring in traditional machinery fault detection, this paper studies the identification and fault detection of industrial machinery based on Internet of Things (IoT) technology. IoT technology is used to build a mechanical equipment fault detection system with diagnostic and early-warning modules, with the goals of improving fault detection accuracy, shortening detection time, and enabling remote equipment monitoring. The fault detection system studied in this paper achieves an accuracy of more than 93.4% across different fault types. The use of IoT technology helps improve the accuracy of mechanical equipment fault detection and realizes real-time monitoring of equipment data.
Keywords: Equipment fault detection, industrial machinery, Internet of Things technology, neural network algorithm, energy consumption
DOI: 10.3233/IDT-240177
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Sun, Wei | Zhang, Yu | Ma, Jin
Article Type: Research Article
Abstract: With the development of technology and the improvement of living standards, consumer demand for toy products has changed, and more people show strong interest in digital toy products. However, in the current digital toy design process, extracting the hairstyle features of a doll's head remains a challenge. Therefore, this study extracts the two-dimensional contour of the target hairstyle and matches it with template hairstyles in a database. Then, combining hair texture information and hairstyle structural features, fine geometric texture details are generated on the hairstyle mesh surface. Finally, a doll head modeling method based on model matching and texture generation is proposed. The results showed that the generated hairstyle model was almost identical to the real hairstyle. Compared with modeling methods based on interactive genetic algorithms and digital images, the average F1 value of this method was 0.95 and its mean absolute error was the smallest. The modeling accuracy of the target model was 95.4%, and the area under the receiver operating characteristic curve was 0.965. In summary, the proposed doll head modeling method based on model matching and texture generation can generate high-precision, realistic hairstyle models whose overall shape and local geometric details meet the needs of 3D printing, providing a useful reference for hairstyle reconstruction.
Keywords: Model matching, geometric texture, hairstyle reconstruction, 3D printing, head modeling
DOI: 10.3233/IDT-240458
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Bahadure, Nilesh Bhaskarrao | Khomane, Ramdas | Raut, Deep | Bhagwatkar, Devanshu | Bakshi, Himanshu | Bawse, Priyanshu | Nagpal, Pari | Patil, Prasenjeet Damodar | Vishwakarma, Muktinath
Article Type: Research Article
Abstract: This study performed a comparative analysis of several imputation strategies for NULL values in the dataset, namely mean, median, and mode. We implemented eleven regression models, including linear and support vector regression as well as tree-based models such as decision tree, surrogate tree, and random forest, with five different pre-processing techniques, each producing different results. The core objective of this study is to compare these results and interpret why a given imputation technique produces a particular outcome; this interpretation aids in selecting a regression model. The experimental results of the proposed technique were evaluated and validated for performance and quality in life expectancy prediction using various quality parameters. The highest accuracy, 96.8%, was produced by random forest regression, demonstrating the advantage of random forest over the other state-of-the-art regression methods for life expectancy prediction.
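The mean/median/mode imputation comparison at the heart of the study can be sketched in a few lines. The toy column below is an invented stand-in for the life expectancy dataset; it only shows how each strategy fills NULLs and shifts the column before a regressor is fit.

```python
# Compare mean, median, and mode imputation for missing (None) values
# in a numeric column, as in the study's pre-processing comparison.
import statistics

def impute(values, strategy):
    observed = [v for v in values if v is not None]
    fill = {
        "mean": statistics.mean,
        "median": statistics.median,
        "mode": statistics.mode,
    }[strategy](observed)
    return [fill if v is None else v for v in values]

# Toy life-expectancy column with two missing entries.
life_expectancy = [71.2, None, 68.5, 74.0, None, 71.2, 65.3]
for strategy in ("mean", "median", "mode"):
    filled = impute(life_expectancy, strategy)
    print(strategy, round(statistics.mean(filled), 2))
```

Each strategy yields a slightly different filled column, which is why the downstream regression results (and hence the best-performing model) depend on the imputation choice.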
Keywords: Life expectancy, random forest, decision tree, surrogate tree, support vector regression, regression methods
DOI: 10.3233/IDT-240983
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024