Impact Factor 2023: 1.0
The field of intelligent decision technologies is interdisciplinary in nature, bridging computer science with its development of artificial intelligence, information systems with its development of decision support systems, and engineering with its development of systems. IDT seeks to attract research that is focused on applied technology while exploring its development at a fundamental level. IDT seeks an interchange of research on intelligent systems and intelligent technologies which enhance or improve decision-making in industry, government and academia. IDT publishes research on all aspects of intelligent decision technologies, from fundamental development to the applied system. The journal is concerned with theory, design, development, implementation, testing and evaluation of intelligent decision systems that have the potential to support decision making in the areas of management, international business, finance, accounting, marketing, healthcare, military applications, production, networks, traffic management, crisis response, human interfaces and other applied fields.
The target audience is researchers in computer science, business, commerce, health science, management, engineering and information systems that develop and apply intelligent decision technologies. Authors are invited to submit original unpublished work that is not under consideration for publication elsewhere.
Authors: Lin, Chunhua
Article Type: Research Article
Abstract: Deep learning (DL) underpins many applications of artificial intelligence (AI), and cloud services are the principal delivery channel for modern computing capabilities, so DL functions provided by cloud services have attracted great attention. AI is gradually playing an important role in many areas of daily life, and the demand of governments at all levels for AI computing capacity is also growing. AI inference and evaluation typically rely on complex algorithms that use or generate large amounts of data. The resulting requirements on the data processing and storage capacity of the devices themselves often exceed what current data processing and storage technology can deliver, which has become an obstacle to the further development of AI cloud services. Therefore, this paper studied the requirements and objectives of an AI cloud service system by analyzing the operating characteristics, service modes and current state of DL, derived design principles from those requirements, and finally designed and implemented a cloud service system, thereby improving the system's algorithm scheduling quality. The data processing, resource allocation and security management capabilities of the AI cloud service system were superior to those of the original cloud service system: data processing capacity was 7.3% higher, resource allocation capacity was 6.7% higher, and security management capacity was 8.9% higher. In conclusion, DL plays an important role in the construction of AI cloud service systems.
Keywords: Cloud service mode, deep learning, artificial intelligence, cloud service system construction
DOI: 10.3233/IDT-230150
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-12, 2023
Authors: Zhang, Zhenyuan
Article Type: Research Article
Abstract: With the rapid development of the economy, the demand for electric power is increasing, and the operational quality of the power system directly affects people's production and daily life. The electric energy provided by the power system is the foundation of social operation; by continuously optimizing the power system's functions, the efficiency of social operation can be improved and economic benefits continuously created, thereby promoting social progress and people's quality of life. In the power system, the power distribution network (PDN) is responsible for transmitting electricity to all parts of the country, and its transmission efficiency directly affects the operational efficiency of the whole system. PDN scheduling plays an important role in improving power supply reliability, optimizing resource allocation, reducing energy waste, and reducing environmental pollution, and is of great significance for promoting social and economic development and environmental protection. However, inflexible scheduling in PDN operation leads to the loss and waste of electric energy. It is therefore necessary to automate PDN operation and use automation technology to improve the operational efficiency and energy utilization rate of the power system. This article optimized the energy-saving management of PDN dispatching through electrical automation technology. The proposed algorithm is a distribution scheduling algorithm based on electrical automation technology; through it, real-time monitoring, analysis, and scheduling of PDNs can be achieved, improving the efficiency and reliability of distribution systems and reducing energy consumption.
The experimental results showed that before the distribution scheduling algorithm was applied, the high-loss distribution-to-transformation ratios of power distribution stations in the first to fourth quarters were 21.93%, 22.95%, 23.61%, and 22.47%, respectively. After applying the algorithm, the ratios for the four quarters were 15.75%, 13.81%, 14.77%, and 13.12%, respectively. This shows that the algorithm can reduce the high-loss distribution-to-transformation ratio of power distribution stations and thus their distribution losses, saving electric energy. The results indicate that electrical automation technology can play an excellent role in PDN scheduling, optimizing its energy-saving management and pointing to a promising direction for intelligent management of PDN scheduling.
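The quarterly figures above imply an average drop of roughly 8.4 percentage points in the high-loss ratio; a quick stdlib-Python check of that arithmetic:

```python
# Quarterly high-loss distribution-to-transformation ratios from the
# abstract, in percent, before and after applying the scheduling algorithm.
before = [21.93, 22.95, 23.61, 22.47]
after = [15.75, 13.81, 14.77, 13.12]

def mean(xs):
    return sum(xs) / len(xs)

avg_before = mean(before)              # 22.74
avg_after = mean(after)                # 14.3625
reduction_pp = avg_before - avg_after  # average drop in percentage points
```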
Keywords: Electrical automation, power distribution network scheduling, energy saving management, power energy, distribution dispatching algorithm
DOI: 10.3233/IDT-230121
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2023
Authors: Wu, Zongfu | Hou, Fazhong
Article Type: Research Article
Abstract: Due to the large scale and spatiotemporal dispersion of 3D (three-dimensional) point cloud data, current object recognition and semantic annotation methods still suffer from high computational complexity and slow processing, so that processing the data takes much longer than collecting it. This article studied the FPFH (Fast Point Feature Histograms) description method for local spatial features of point cloud data, achieving efficient extraction of those features, and investigated the robustness of point cloud data under different sample densities and noise environments. The time delay between laser emission and reception was used to measure distance; on this basis, the measured object is continuously scanned to obtain the distance between the object and the measurement point. Following existing three-dimensional coordinate conversion methods, a two-dimensional lattice is obtained after three-dimensional position conversion. Based on the basic requirements of point cloud data processing, this article adopted a modular approach, with core functional modules for input and output, visualization, filtering, key-point extraction, feature extraction, registration, and data acquisition of point clouds, enabling efficient and convenient human-computer interaction. The laser image recognition system used in this article screened potential objects with a success rate of 85% and an accuracy rate of 82%, showing that the system based on spatiotemporal data has high accuracy.
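The time-of-flight ranging step described above can be sketched in a few lines; this is a generic illustration (not the paper's implementation), where the one-way range is half the round-trip delay times the speed of light:

```python
# Time-of-flight laser ranging: the scanner measures the delay between
# laser emission and reception; the round trip covers twice the distance.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_delay(delay_s: float) -> float:
    """One-way distance to the target from a round-trip delay in seconds."""
    return C * delay_s / 2.0

# Example: a 100 ns round trip corresponds to roughly 15 m.
d = range_from_delay(100e-9)
```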
Keywords: Laser recognition, image system, spatiotemporal data, point cloud data
DOI: 10.3233/IDT-230161
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2023
Authors: Wei, Bo | Chen, Huanying | Huang, Zhaoji
Article Type: Research Article
Abstract: To address the low accuracy of evaluation results caused by throughput and transmission delay in traditional systems in 6G networks, this paper proposes a design for a network security processing system in the 5G/6gNG-DSS of an intelligent model computer. Guided by the principle of active defense, the paper designs a server-side structure that uses the ScanHome SH-800/400 embedded scanning module (a barcode/QR-code scanning device) as the scanning engine. The evaluation device runs on a PA-RISC microprocessor and sends an early-warning signal when the system fails. Control, data, and cooperation interfaces support information exchange between subsystems, and pin 4 of a TL494 pulse-width modulator is used in the power supply design. The system software flow is designed with a top-down data management method; a mathematical model is built, and network entropy is introduced to weigh benefits and realize the system security evaluation. The experimental results show that the evaluation accuracy of the system reaches up to 98%, which can ensure user information security. In conclusion, the problem of active-defense network security is transformed into a dynamic analysis problem, providing an effective decision-making scheme for managers. The system evaluation based on Packet Tracer software has high accuracy and supports important decisions in network security analysis.
Keywords: Active defense, network security assessment, packet tracer, scanning engine, software flow design, mathematical model
DOI: 10.3233/IDT-230143
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-16, 2023
Authors: Niu, Meng
Article Type: Research Article
Abstract: Electrical device automation in smart industries integrates machines, electronic circuits, and control systems for efficient operation. The automated controls reduce human intervention and the number of manual operations through proportional-integral-derivative (PID) controllers. Considering these devices' operational and control-loop contributions, this article introduces an Override-Controlled Definitive Performance Scheme (OCDPS). The scheme confines machine operations within the allocated time intervals, preventing loop failures. The control value for multiple electrical machines is estimated from the operational load and time to prevent failures. Override cases use predictive learning that incorporates previous operational logs; considering the override prediction, the control value is adjusted independently for different devices to confine variation loops. The automation features are programmed before and after loop failures to cease further operational overrides. Predictive learning independently identifies possible overrides and machine failures to increase efficacy. The proposed method is compared with previously established models, including ILC, ASLP, and TD3, on the parameters of uptime, errors, override time, productivity, and prediction accuracy; the variables include operational loops and typical running times. The learning results are used to estimate efficiency by adjusting operating time and loop consistency through the control values, and detected loop failures modify the control parameters of individual machine processes to avoid unscheduled downtime.
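For readers unfamiliar with the PID loop the scheme builds on, here is a minimal textbook discrete PID controller; this is a generic stdlib-Python sketch, not the paper's OCDPS, and the gains and toy plant are illustrative:

```python
# Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple first-order toy plant toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
y = 0.0
for _ in range(400):
    u = pid.update(1.0, y)
    y += (u - y) * 0.1  # toy plant dynamics: y relaxes toward u
```

The integral term removes the steady-state error a proportional-only controller would leave; after 400 steps the output has settled close to the setpoint.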
Keywords: Control loop, electrical automation, override control, PID, predictive learning
DOI: 10.3233/IDT-230125
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-16, 2023
Authors: Xiao, Jian
Article Type: Research Article
Abstract: Machine learning algorithms have been widely used in risk prediction management systems for financial data. Early warning and control of financial risks are important areas of corporate investment decision-making, which can effectively reduce investment risks and ensure companies' stable development. With the development of the Internet of Things, enterprises' financial information is obtained through various intelligent devices in the enterprise financial system, and big data provides high-quality services for the economy and society in the information era. However, financial data are large in volume, complex and variable, making their analysis very difficult, and as machine learning algorithms are applied in depth, their shortcomings are gradually exposed. To this end, this paper collects the financial data of a listed group from 2005 to 2020 and performs data preprocessing and feature selection, including removing missing values, outliers and unrelated items. The data are then divided into a training set, used for model training, and a testing set, used to evaluate model performance. Three data control models are built and compared: one based on a machine learning algorithm, one based on a deep learning network, and the model based on artificial intelligence and big data technology proposed in this paper. For comparing risk event prediction, two indicators measure model performance: accuracy and mean squared error (MSE). Accuracy reflects the predictive ability of the model and is the proportion of correctly predicted samples in the total sample size. Mean squared error evaluates the accuracy and error of the model: it is the average of the squared deviations between the predicted and true values.
The prediction results of the three methods are compared with the actual values, and their accuracy and mean squared error are computed and compared. The experimental results show that the model based on artificial intelligence and big data technology proposed in this paper has higher accuracy and smaller mean squared error than the other two models, achieving 90% accuracy in risk event prediction, which demonstrates its stronger ability to control financial data risk.
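The two metrics defined above are straightforward to compute; a minimal sketch with toy data (the labels and scores are illustrative, not the paper's):

```python
# Accuracy: share of correct class predictions.
def accuracy(y_true, y_pred):
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Mean squared error: average of squared deviations between
# predicted and true values.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy risk-event labels (1 = risk event) and toy continuous scores.
acc = accuracy([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])   # 4 of 5 correct
err = mse([1.0, 0.0, 1.0], [0.9, 0.2, 0.7])
```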
Keywords: Machine learning algorithm, financial risk control, big data control, cyborg sensation, Internet of Things
DOI: 10.3233/IDT-230156
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2023
Authors: Zhao, Lihui
Article Type: Research Article
Abstract: Due to a lack of data security protection, large amounts of information are maliciously leaked, which has drawn growing attention to building information security (InfoSec). Construction information involves a large number of participants, and the number of construction project files is huge, leading to a huge amount of information. However, traditional network security information protection software is mostly passive and difficult to make autonomous. Therefore, this paper introduced a data sharing algorithm into building InfoSec management, proposing an Attribute-Based Encryption (ABE) algorithm based on data sharing that is computationally simple and strong at encrypting attributes. The algorithm was added to the building InfoSec management system (ISMS) designed in this paper, which not only reduces the burden on relevant personnel but also offers flexible control and high security. The experimental results showed that with 10 users logged in, the stability and security of the system were 87% and 91% respectively; with 20 users, 89% and 92%; and with 80 users, 94% and 95%. The stability and security of the system thus reach a high level, ensuring the secure and effective management of building information.
Keywords: Data sharing, attribute based encryption, construction information, security management system
DOI: 10.3233/IDT-230144
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2023
Authors: Chen, Huanying | Wei, Bo | Huang, Zhaoji
Article Type: Research Article
Abstract: In the age of big data, electronic data has developed rapidly and gradually replaced traditional paper documents; in daily life, all kinds of data are saved as electronic documents. Accordingly, the development of electronic depository systems has intensified. Electronic storage refers to recording actual events as electronic data through information technology to prove the time and content of the events. Its application scenarios are very extensive, such as electronic contracts, online transactions and intellectual property rights. However, due to the vulnerability of electronic data, existing electronic data depository systems carry certain security risks: their content is easily tampered with and destroyed, resulting in the loss of deposited information. Because existing systems are complex to operate, some users may also reduce the authenticity of the deposited information through non-standard operation. To solve these problems, this paper designed an electronic data storage system based on cloud computing and blockchain technology. The data storage of cloud computing and blockchain is decentralized and its content cannot be tampered with, which effectively ensures the integrity and security of electronic information and suits the needs of electronic storage scenarios. This paper first introduced the development of electronic data depository systems and cloud computing, optimized the depository system through a cloud computing task scheduling model, and finally verified the feasibility of the system through experiments.
The data showed that the functional efficiency of the system reached 0.843, 0.821, 0.798, 0.862 and 0.812 respectively for the electronic data sampling-point storage function, upload of documents to be stored, download of stored documents, viewing of stored information, and file storage and certificate comparison verification. The corresponding function indexes of the traditional electronic data depository system were 0.619, 0.594, 0.618, 0.597 and 0.622. This shows that the electronic data storage system based on cloud computing and blockchain can effectively manage electronic data and help relevant personnel verify it.
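The tamper-evidence property the abstract relies on can be illustrated with a minimal hash chain; this is a simplification of blockchain storage, not the paper's system:

```python
import hashlib

# Each record stores the hash of its predecessor, so altering any stored
# document breaks every later link in the chain.
def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(documents):
    chain, prev = [], "0" * 64  # genesis hash
    for doc in documents:
        h = block_hash(prev, doc)
        chain.append((doc, h))
        prev = h
    return chain

def verify(chain):
    prev = "0" * 64
    for doc, h in chain:
        if block_hash(prev, doc) != h:
            return False  # a document or hash was altered
        prev = h
    return True
```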
Keywords: Electronic data, deposit system, cloud computing, blockchain technology, task scheduling model
DOI: 10.3233/IDT-230152
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2023
Authors: Zhang, Lan | Luo, Yanling | Zhao, Yue
Article Type: Research Article
Abstract: The rapid development of mobile communication technology brings great convenience to users, but also the risk of user data privacy leakage. Because mobile communication transmission is broadcast in nature, open wireless interfaces have become a security vulnerability for mobile devices such as mobile phones, which can easily be eavesdropped. This article studied a data privacy protection algorithm for mobile communication in cellular networks based on the anonymity mechanisms of blockchain. It first analyzes the overall framework and anonymity technology of blockchain data from the perspectives of privacy and queryability. Then, based on blockchain technology, consistency mechanisms, privacy control, and access rights management are designed. Finally, mobile communication data privacy protection algorithms are designed and implemented through data encryption and verification. Transaction latency under different task volumes and throughput at different nodes were analyzed to verify the reliability of several anonymity mechanisms for cellular-network mobile communication data privacy protection on the blockchain. The experimental results showed that the reliability of the CryptoNote mechanism and the Zerocoin and Zerocash protocol mechanisms at the specified nodes ranged between 92% and 99%. Blockchain technology was used to distribute mobile communication data across nodes and achieve decentralized data transmission, thus protecting the privacy of wireless network communication data. The analysis of the reliability of blockchain-based anonymity mechanisms in mobile communication nodes showed that the method has research value.
Keywords: Cellular networks, mobile communications, data privacy, blockchain technology, privacy protection, byzantine algorithm
DOI: 10.3233/IDT-230233
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Lei, Gang | Wu, Junyi | Gu, Keyang | Jiang, Fan | Li, Shibin | Jiang, Changgen
Article Type: Research Article
Abstract: In the era of rapid development of modern internet technology, network transmission techniques are continuously iterating and updating, and the Quick UDP Internet Connections (QUIC) protocol has emerged in response to these advancements. Owing to QUIC's strong compatibility and high transmission speed, its extended version, Multipath QUIC (MPQUIC), has gained popularity. MPQUIC can integrate various transmission scenarios, achieving parallel transmission with higher bandwidth. However, due to security flaws in the protocol, MPQUIC is susceptible to attacks from anomalous network traffic. To address this issue, we propose an MPQUIC traffic anomaly detection model based on Empirical Mode Decomposition (EMD) and Long Short-Term Memory (LSTM) networks, which can decompose and denoise the data and learn its long-term dependencies. Simulation experiments obtain MPQUIC traffic data under normal and anomalous conditions for prediction, analysis, and evaluation. The results demonstrate that the proposed model exhibits satisfactory prediction performance when trained on both normal and anomalous traffic, enabling anomaly detection, and the evaluation metrics indicate that the EMD-LSTM model achieves higher accuracy than various traditional single models.
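The paper's detector pairs EMD denoising with an LSTM predictor; as a stdlib-only stand-in, the sketch below uses a moving-average predictor and flags points whose prediction residual exceeds k standard deviations, which illustrates the predict-then-threshold idea without either EMD or an LSTM:

```python
# Residual-based anomaly detection on a traffic-like series.
def moving_average_residuals(series, window=3):
    residuals = []
    for i in range(window, len(series)):
        pred = sum(series[i - window:i]) / window  # naive predictor
        residuals.append(series[i] - pred)
    return residuals

def detect_anomalies(series, window=3, k=3.0):
    res = moving_average_residuals(series, window)
    mean = sum(res) / len(res)
    var = sum((r - mean) ** 2 for r in res) / len(res)
    std = var ** 0.5 or 1.0
    # indices into the original series whose residual is an outlier
    return [i + window for i, r in enumerate(res) if abs(r - mean) > k * std]
```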
Keywords: Multipath QUIC, network traffic, anomalous detection model, empirical mode decomposition, long short-term memory
DOI: 10.3233/IDT-230261
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-22, 2024
Authors: Zou, Wenjing | Xu, Huan | Yang, Qiuyong | Dong, Can | Su, Wenwei
Article Type: Research Article
Abstract: The data management of power enterprises currently needs to analyze data from many sources, but traditional multi-source data fabric systems suffer from low analysis efficiency and high error rates, which greatly inconveniences data analysis in power enterprises. To improve the accuracy and efficiency of data analysis, an intelligent system architecture is applied to the construction of a multi-source data fabric system whose main modules are data collection, data matching, data integration, and data analysis. A simulated annealing genetic algorithm performs high-performance calculations on system timing data to achieve data matching, and data integration is carried out at the data, feature, and decision levels. The access survey method was used to analyze the data management problems power companies currently face, and a panel evaluation method was used to compare a general multi-source data fabric system with one based on the intelligent system architecture. The analysis showed that the operational convenience of the architecture-based system reached 60%-80%, a great improvement over general systems; its information sharing was also greatly improved, and the data processing efficiency of general systems was much lower. However, the symmetry of data collection and matching in the architecture-based system was slightly insufficient and still needs improvement. For more power companies to benefit from the intelligent-architecture-based multi-source data fabric system, the management of data collection and matching symmetry must be strengthened.
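The simulated-annealing half of the matching step can be sketched as follows; this is plain simulated annealing over record permutations (stdlib only, without the genetic-algorithm component, and the timestamps are illustrative):

```python
import math
import random

def match_cost(perm, times_a, times_b):
    # total timestamp mismatch when record i of source A maps to perm[i] of B
    return sum(abs(times_a[i] - times_b[j]) for i, j in enumerate(perm))

def anneal_match(times_a, times_b, steps=20_000, t0=10.0, seed=0):
    rng = random.Random(seed)
    perm = list(range(len(times_b)))
    cost = match_cost(perm, times_a, times_b)
    best_perm, best_cost = perm[:], cost
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        i, j = rng.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]  # propose swapping two matches
        new_cost = match_cost(perm, times_a, times_b)
        # accept improvements always, worsenings with Boltzmann probability
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best_perm, best_cost = perm[:], cost
        else:
            perm[i], perm[j] = perm[j], perm[i]  # revert the swap
    return best_perm, best_cost
```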
Keywords: Data fabric, intelligent system architecture, multi-source data, electricity companies, simulated annealing genetic algorithm
DOI: 10.3233/IDT-230240
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: Liu, Bing | Li, Xianzhong | Li, Zheng | He, Peidong
Article Type: Research Article
Abstract: With increasing power load (PL), power system operation faces increasingly severe challenges. PL control is an important means of ensuring stable power system operation and power supply quality, but traditional PL control methods have limitations and cannot meet the load control requirements of modern power systems, because the demand for electricity grows with the development of modern industry and commerce. This article constructed a PL control and management terminal operating system based on machine learning technology to achieve intelligent PL management and thereby improve the operational efficiency and power supply quality of the power system. Reviewing the current state of PL control technology, the article identified the design concept of such a system; based on the operational and data characteristics of the power system, it selected suitable machine learning algorithms to process and analyze load data, established a system prototype realizing intelligent processing and analysis of load data, and verified it experimentally. The experimental results show that, in a comparative study of six data sets at the tertiary level, the differences between the system's predictions and the real values were 0.079 kW, 0.005 kW and 0.189 kW, so the average difference between predicted and measured PL values is about 0.091 kW. This indicates that the system predicts PL with high accuracy and in real time, effectively improving the load control efficiency and power supply quality of the power system. The system provides new ideas and methods for the development of PL control technology; in the future, its algorithms can be further optimized to build a more intelligent PL control and management terminal operating system that copes with growing PL and an increasingly complex power system operating environment.
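The averaging in the reported results can be checked directly:

```python
# The three prediction-measurement gaps reported in the abstract, in kW;
# their mean matches the stated average difference of about 0.091 kW.
gaps_kw = [0.079, 0.005, 0.189]
mean_gap_kw = sum(gaps_kw) / len(gaps_kw)
```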
Keywords: Machine learning technology, power load, management terminal, operation system, power load forecasting model
DOI: 10.3233/IDT-230239
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Bäßmann, Felix N. | Lier, Sarah K. | Werth, Oliver | Krage, Astrid | Breitner, Michael H.
Article Type: Research Article
Abstract: Today, access to real-time information from emergency scenes is still limited for emergency medical services, fire departments, and their professionals, also called first responders. Emergency Response Information Systems (ERISs) have recently been discussed in the literature as a potential solution to this problem. Using the Design Science Research (DSR) paradigm, we present a novel 5G-enabled ERIS (5G-ERIS) design that leverages 5G mobile network technologies to offer diverse real-time information, and provide a user-centered examination of design specifications for a 5G-ERIS based on a smart city digital twin. Based on the literature and qualitative expert interviews with several first responders in Germany, we derive how emergency medical services and fire departments can improve their decision-making with this 5G-ERIS. Building on existing 5G application architectures, we structure the identified design specifications into four system layers. Our findings provide an essential knowledge base for the successful development, deployment, and long-term use of 5G-ERISs and stimulate a broader discussion on the design objectives and specifications of 5G-ERISs in theory and practice.
Keywords: 5G, design science research, ERIS, emergency, digital twin
DOI: 10.3233/IDT-230476
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-23, 2024
Authors: Arif, Muhammad Usman | Haider, Sajjad
Article Type: Research Article
Abstract: Multi-Robot Task Allocation (MRTA) is a complex problem domain in which the majority of problem representations are NP-hard. Existing solution approaches for dynamic MRTA scenarios do not consider changes in the problem structure as a possible system dynamic. RoSTAM (Robust and Self-adaptive Task Allocation for Multi-robot teams) presents a novel approach that handles a variety of MRTA problem representations without any alterations to the task allocation framework. RoSTAM's capabilities against a range of MRTA problem distributions have already been established. This paper further validates RoSTAM's performance against more conventional dynamics, such as robot failure and new task arrival, while performing allocations for two of the most frequently faced problem representations. The framework's performance is evaluated against a state-of-the-art online auction scheme, and the results validate RoSTAM's capability to allocate tasks efficiently across a range of dynamics.
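As background, the online-auction baseline mentioned above typically assigns each arriving task to the lowest bidder; below is a minimal generic sketch of that idea (the cost table and the busyness penalty are illustrative assumptions, not the paper's scheme):

```python
# Greedy auction allocation: each arriving task is auctioned and the
# robot with the lowest bid wins.
def auction_allocate(tasks, robot_costs):
    """tasks: list of task ids; robot_costs[robot][task] -> base bid."""
    assignment = {}
    load = {robot: 0 for robot in robot_costs}
    for task in tasks:
        # each robot bids its cost plus current load (simple busyness penalty)
        winner = min(robot_costs, key=lambda r: robot_costs[r][task] + load[r])
        assignment[task] = winner
        load[winner] += 1
    return assignment
```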
Keywords: Computational intelligence, evolutionary algorithm, multi-robot task allocation, scheduling, multi-robot systems, multi-agent systems
DOI: 10.3233/IDT-230693
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-24, 2024
Authors: A, Bhagya Lakshmi | K, Sasirekha | S, Nagendiran | R, Ani Minisha | C, Mary Shiba | C.M, Varun | L.P, Sajitha | C, Vimala Josphine
Article Type: Research Article
Abstract: Generally, Wireless Body Area Networks (WBANs) are regarded as collections of small sensor devices implanted in or attached to the human body. The nodes in a WBAN operate under severe resource constraints, so reliable and energy-efficient data transmission plays a significant role in implementing and constructing most emerging applications. The complicated channel environment, limited power supply, and varying link connectivity make the construction of WBAN routing protocols difficult. In order to provide a highly energy-efficient routing protocol, a new approach is suggested using hybrid meta-heuristic development. Initially, all the sensor nodes in the WBAN are considered for experimentation; in general, a WBAN comprises both mobile and fixed sensor nodes. Since existing models are ineffective at achieving high energy efficiency, the new routing protocol is developed by proposing the Hybrid Tunicate-Whale Swarm Optimization (HT-WSO) algorithm. The proposed work considers multiple constraints in deriving the objective function: network efficiency is analyzed using an objective function formulated from distance, hop count, energy, path loss, load, and packet loss ratio. To attain the optimum value, the HT-WSO, derived from the Tunicate Swarm Algorithm (TSA) and the Whale Optimization Algorithm (WOA), is employed. Finally, the ability of the model is estimated on diverse parameters and compared with existing traditional approaches. The simulation outcome of the designed method achieves 13.3%, 23.5%, 25.7%, and 27.7% improved performance over DHOA, Jaya, TSA, and WOA, respectively. Thus, the results illustrate that the recommended protocol attains better energy efficiency over WBANs.
Keywords: Energy efficiency, wireless body area network, routing protocol, hybrid tunicate-whale swarm optimization, distance, hop count, energy, path loss, load, packet loss ratio
DOI: 10.3233/IDT-220295
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-24, 2023
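The multi-constraint objective described in the abstract (distance, hop count, energy, path loss, load, and packet loss ratio) can be sketched as a weighted cost function. The weights and the assumption that each term is normalized to [0, 1] are illustrative only; the paper's actual formulation is not reproduced here.

```python
# Hypothetical sketch of a multi-constraint route-fitness function in the
# spirit of the HT-WSO objective. Weights and normalization are assumptions.

def route_fitness(distance, hops, residual_energy, path_loss, load, plr,
                  weights=(0.2, 0.15, 0.25, 0.15, 0.1, 0.15)):
    """Lower is better; each input is assumed pre-normalized to [0, 1]."""
    w1, w2, w3, w4, w5, w6 = weights
    # Higher residual energy should lower the cost, hence (1 - energy).
    return (w1 * distance + w2 * hops + w3 * (1.0 - residual_energy)
            + w4 * path_loss + w5 * load + w6 * plr)

# A short, energy-rich, low-loss route scores lower (better) than a poor one.
good = route_fitness(0.3, 0.2, 0.9, 0.1, 0.2, 0.05)
bad = route_fitness(0.8, 0.7, 0.2, 0.6, 0.7, 0.40)
```

A meta-heuristic such as HT-WSO would then search the space of candidate routes for the one minimizing this cost.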
Authors: Zhang, Xuxia | Chen, Weijie | Wang, Jian | Fang, Rang
Article Type: Research Article
Abstract: With the rapid development of information technology and the rapid popularization of the Internet, people enjoy the convenience and efficiency brought by new technologies while also suffering harm from cyber attacks. Beyond the need to thwart network assaults efficiently, the high volume of complicated security event data can also increase the burden on policy makers. At present, Network Security (NS) threats mainly include network viruses, trojans, DoS (Denial-of-Service) attacks, etc. For increasingly complex NS problems, traditional rule-based network monitoring technology struggles to predict unknown attack behavior. Environment-based, dynamic, and integrated data fusion can combine data from a macro perspective. In recent years, Machine Learning (ML) technology has developed rapidly, making it easy to train, test, and predict with existing third-party models; ML algorithms discover associations in the data rather than relying on manually set rules. The support vector machine is a common ML method which, after training and testing, can predict the security of the network well. In order to monitor the overall security status of the entire network, NS situation awareness reproduces network attacks accurately and in real time using a reconstruction approach. Situation awareness is a powerful network monitoring and security technology, but existing NS technology has many problems: for example, the state of the network cannot be accurately detected, and its change rule cannot be understood. In order to effectively predict network attacks, this paper adopted a technology based on ML and data analysis and constructed an NS situational awareness model. The results showed that the detection efficiency of the model based on ML and data analysis was 7.18% higher than that of the traditional NS state awareness model.
Keywords: Network security, machine learning algorithm, situation awareness, data analysis
DOI: 10.3233/IDT-230238
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-13, 2023
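The support-vector-machine idea mentioned in the abstract can be sketched in a few lines: train an SVM on labeled traffic features, then classify new samples as benign or malicious. The synthetic feature vectors below are illustrative stand-ins, not real network data.

```python
# Minimal sketch of SVM-based attack prediction on toy traffic features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy 4-dimensional feature vectors: benign traffic clusters near 0,
# attack traffic clusters near 3 (purely illustrative separation).
benign = rng.normal(0.0, 0.5, size=(50, 4))
attack = rng.normal(3.0, 0.5, size=(50, 4))
X = np.vstack([benign, attack])
y = np.array([0] * 50 + [1] * 50)  # 0 = benign, 1 = attack

clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([[0.1, 0.0, -0.2, 0.1], [2.9, 3.1, 3.0, 2.8]])
# The first sample is classified as benign, the second as an attack.
```

In a situational awareness pipeline, such per-sample predictions would feed the higher-level reconstruction of the network's security state.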
Authors: Chen, Haifeng
Article Type: Research Article
Abstract: With the rapid development of the economy, power supply has also increased year by year, and many loopholes and hidden dangers have emerged during the operation of the power grid. The grid may be subject to malicious attacks, such as hacking and power theft, which can lead to security risks such as grid system paralysis and information leakage. In order to ensure the quality of power supply, it is necessary to optimize the distribution of electricity and improve power supply efficiency. This article pointed out the security performance issues of power Internet of Things (IoT) terminals and analyzed the design and implementation of a vulnerability mining system for power IoT terminals based on a fuzzy mathematical model simulation platform. A fuzzy mathematical model was used to quantitatively evaluate the security performance of power IoT terminals, providing an effective theoretical basis for vulnerability mining. Based on an analysis of vulnerability mining technology classification and the vulnerability attack process, the article characterized vulnerability parameters through fuzzy mapping. Using the collected vulnerability data together with the online and device status of power IoT terminals, fuzzy logic inference was applied to identify and mine potential vulnerabilities in power IoT terminals. The aim was to improve the security performance of power IoT terminals and ensure the safe and stable operation of the power system. Testing the number of system vulnerabilities, vulnerability risk level, and vulnerability mining time showed that the power IoT simulation platform based on fuzzy mathematical models has fewer terminal vulnerabilities.
The fuzzy mathematical model reduced the vulnerability risk level of the power IoT simulation platform, and vulnerability mining time was reduced by 0.48 seconds, improving mining speed. Fuzzy mathematical models can promote the development of the power industry and provide strong support for the security protection of power IoT terminals.
Keywords: Power simulation platform, Internet of Things terminals, vulnerability mining systems, fuzzy mathematical model, fuzz test, fuzzy logic
DOI: 10.3233/IDT-230241
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-11, 2023
Authors: Ding, Haijuan | Zhao, Chengtao | Zhao, Debiao | Dong, Hairong
Article Type: Research Article
Abstract: An Electrical Discharge Machine (EDM) is a spark-erosion machine that shapes a workpiece through controlled electric discharges. In the EDM process, material is removed between two electrodes by applying a high voltage to the workpiece immersed in a dielectric liquid. The voltage between the two electrodes is gradually increased until the dielectric breaks down, removing material from the surface. EDM uses several parameters to regulate the material removal rate during this process, and the parameters are changed for every experiment to minimize the Average Tool Wear Rate (ATWR). Existing techniques utilize a neural model to optimize the EDM parameters. However, traditional approaches fail to perform the fine-tuning process, which affects the Material Removal Rate (MRR) and ATWR. Therefore, meta-heuristic optimization techniques are incorporated with the neural model to enhance the EDM parameter optimization process. In this work, EDM experimental data are collected and processed by a learning process to create the training pattern. The test data are then investigated using the Backpropagation Neural Model (BPM) to propagate the neural parameters. The BPM is integrated with the Butterfly Optimization Algorithm (BOA) to select global parameters from the search space. The analysis shows an 8.93% maximum prediction rate, a 0.023 minimum prediction rate, and a 2.83% mean prediction rate across the different testing patterns, compared to other methods.
Keywords: Electrical Discharging Machine (EDM), Material Removal Rate (MRR), Unsupervised Pre-trained Neural Model (UPNM), Backpropagation Neural Model (BPM), Butterfly Optimization Algorithm (BOA)
DOI: 10.3233/IDT-230157
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2023
Authors: Wang, Jun
Article Type: Research Article
Abstract: From the perspective of practical development, even under stable macroeconomic growth, regional economies are influenced by spatiotemporal factors and inevitably exhibit differences and changes that affect many aspects of social production and life. In order to understand the spatiotemporal data evolution characteristics of a regional economy and to promote common regional development and coordinated economic development strategies, this article takes the Beijing-Tianjin-Hebei (BTH) region as an example. Combining spatial econometric models (SEM), the article collects and processes economic development data for the BTH region from 2013 to 2022, and introduces a spatial weight matrix to conduct high-performance computing and analysis of the region's economic spatial correlation. On this basis, the article studies the spatiotemporal evolution characteristics of the BTH regional economy in depth through the description and quantitative analysis of its influencing factors. The empirical results showed that the Global Moran's I of the BTH region was positive from 2013 to 2022, and the Z-values were all greater than 1.96, indicating a significant spatial correlation in the BTH regional economy. Economic development in the BTH region is unbalanced, but as the region continues to develop, its economic balance has improved.
Keywords: Regional economies, spatial econometrics, evolution of spatiotemporal data, BTH region
DOI: 10.3233/IDT-230169
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2023
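The Global Moran's I statistic on which the empirical analysis rests can be computed directly from a spatial weight matrix. The 4-region contiguity matrix and monotone "GDP" values below are toy illustrations, not the BTH data.

```python
# Global Moran's I on a toy 4-region example with rook-contiguity weights.
import numpy as np

def global_morans_i(x, W):
    """Global Moran's I: (n / S0) * (d^T W d) / (d^T d), with d = x - mean(x)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    d = x - x.mean()
    s0 = W.sum()  # sum of all spatial weights
    return (n / s0) * (d @ W @ d) / (d @ d)

# Contiguity weights for 4 regions arranged on a line: 1-2-3-4.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
gdp = [1.0, 2.0, 3.0, 4.0]  # monotone values -> positive autocorrelation
I = global_morans_i(gdp, W)
# I > 0 indicates positive spatial correlation; significance is then judged
# via the Z-value (|Z| > 1.96 at the 5% level, as in the abstract).
```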
Authors: Sanjay, V. | Swarnalatha, P.
Article Type: Research Article
Abstract: Alzheimer’s disease (AD) is a neurodegenerative disorder that affects millions of individuals worldwide, causing progressive cognitive decline. Accurate early prediction and diagnosis of AD are crucial for effective intervention and treatment. In this study, we propose a comprehensive framework for AD prediction using several techniques: preprocessing and denoising with a Multilayer Perceptron (MLP) and Ant Colony Optimization (ACO), segmentation using U-Net, and classification with a Spatial Pyramid Pooling Network (SPPNet). Furthermore, we employ a Convolutional Neural Network (CNN) with SPPNet for training and develop a chatbot that provides recommendations based on MRI data input. The preprocessing and denoising techniques play a vital role in enhancing the quality of the input data. The MLP is utilized for preprocessing, where it effectively handles feature extraction and noise reduction. ACO is employed for denoising, optimizing the data to improve the signal-to-noise ratio and enhancing the performance of subsequent stages. For accurate segmentation of brain regions, we employ the U-Net architecture, which has shown remarkable success in medical image segmentation tasks. U-Net effectively identifies the regions of interest, aiding the subsequent classification stages. The classification phase utilizes SPPNet, a deep learning model known for its ability to capture spatial information at multiple scales. SPPNet extracts features from segmented brain regions, enabling robust classification of AD and non-AD cases. To enhance the training process, we employ a CNN with SPPNet, leveraging the power of convolutional layers to capture intricate patterns and improve predictive accuracy. The CNN-SPPNet model is trained on a large dataset of MRI scans, enabling it to learn complex representations and make accurate predictions.
Hence, the proposed work can be integrated with a chatbot that takes MRI data as input and provides recommendations based on the predicted AD probability. Experimental evaluation shows that the combination of preprocessing, denoising, segmentation, and classification offers a comprehensive solution for accurate and efficient AD diagnosis and management.
Keywords: Alzheimer’s disease, early detection, learning algorithms, recommendation model, neurodegenerative disorder and MRI-based diagnosis
DOI: 10.3233/IDT-230635
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
Authors: Li, Guozhang | Xing, Kongduo | Alfred, Rayner | Wang, Yetong
Article Type: Research Article
Abstract: With the passage of time, the importance of spatio-temporal data (STD) is increasing day by day, but the spatiotemporal characteristics of STD pose huge challenges for data processing. Aiming at the problems of image information loss, limited compression ratio, slow compression speed, and low compression efficiency, this paper proposes a method based on image compression. The article focuses on aircraft trajectory data, meteorological data, and remote sensing image data as the main research objects; the results provide more accurate and effective data support for research in related fields. The image compaction algorithm based on deep learning consists of two parts, an encoder and a decoder, and is compared with the JPEG (Joint Photographic Experts Group) method. When compressing meteorological data, the proposed algorithm achieves a maximum compaction rate of 0.400, while the maximum compaction rate of the JPEG algorithm was only 0.322. If a set of aircraft trajectory data containing 100 data points is compressed at 2:1, the storage space required by the proposed algorithm is 4.2 MB, while a lossless compression algorithm requires 5.6 MB, 33.33% more storage. The image compaction algorithm based on deep learning and data preprocessing can significantly improve the speed and quality of image compaction while maintaining the same compaction rate, and effectively compresses data in the spatial and temporal dimensions.
Keywords: STD processing, picture data compaction, high-performance image compaction algorithms, compaction rate, JPEG compaction algorithm
DOI: 10.3233/IDT-230234
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-15, 2024
Authors: Swain, Debabrata | Mehta, Utsav | Mehta, Meet | Vekariya, Jay | Swain, Debabala | Gerogiannis, Vassilis C. | Kanavos, Andreas | Acharya, Biswaranjan
Article Type: Research Article
Abstract: Erythemato-squamous Diseases (ESD) encompass a group of common skin conditions, including psoriasis, seborrheic dermatitis, lichen planus, pityriasis rosea, chronic dermatitis, and pityriasis rubra pilaris. These dermatological conditions affect a significant portion of the population and present a current challenge for accurate diagnosis and classification. Traditional classification methods struggle due to shared characteristics among these diseases. Machine Learning offers a valuable tool for aiding clinical decision-making in ESD classification. In this study, we leverage the UC Irvine (UCI) dermatology dataset by applying necessary preprocessing steps to handle missing data. We conduct a comparative analysis of two feature selection methods: One-way ANOVA and Chi-square test. To enhance the model’s performance, we employ hyper-parameter tuning through GridSearchCV. The training process encompasses various algorithms, including Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbors (kNN), and Decision Trees. The culmination of our work is a hybrid ensemble machine learning model that combines the strengths of the trained classifiers. This ensemble classifier achieves an impressive accuracy of 98.9% when validated using a 10-fold cross-validation approach.
Keywords: Differential diagnosis, erythemato-squamous diseases (ESD), ensemble machine learning, Support Vector Machine (SVM), decision tree, k-Nearest Neighbors (kNN), logistic regression, feature selection, classification
DOI: 10.3233/IDT-230779
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
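The pipeline described in the abstract (chi-square feature selection, GridSearchCV tuning, and an ensemble of the trained classifiers) can be sketched with scikit-learn. The iris dataset and tiny parameter grid below are toy stand-ins for the UCI dermatology data and the paper's actual search space.

```python
# Hedged sketch: feature selection + grid search + soft-voting ensemble.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # placeholder for the dermatology data

# Ensemble of three of the classifiers named in the abstract.
ensemble = VotingClassifier([
    ("svm", SVC(probability=True)),
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()),
], voting="soft")

pipe = Pipeline([("select", SelectKBest(chi2)), ("clf", ensemble)])
grid = GridSearchCV(pipe, {"select__k": [2, 4]}, cv=5)  # toy grid
grid.fit(X, y)

# Final validation mirrors the paper's 10-fold cross-validation protocol.
acc = cross_val_score(grid.best_estimator_, X, y, cv=10).mean()
```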
Authors: Mago, Neeru | Mittal, Mamta | Hemanth, D. Jude | Sharma, Rakhee
Article Type: Research Article
Abstract: The world’s population is growing at an exponential rate, which has created a slew of new issues in our daily lives. Among these challenges is the difficulty of finding available parking spaces. To overcome this obstacle, an effective system for detecting vacant parking spaces is crucial, particularly in densely populated areas. Even though there are numerous parking detection systems on the market today, they all have their own set of restrictions. There are primarily two types of parking systems available. The first is a hardware-based system that uses hardware to detect whether a space is vacant or occupied, and the second is a smart parking system that uses cameras to classify parking status. Vision-based parking systems are becoming increasingly popular due to benefits such as flexibility and cost efficiency. However, such systems are difficult to develop due to challenging weather conditions and occlusions caused by various objects. The limitations of a vision-based parking system must be addressed for it to become more accurate. To date, many researchers have attempted to incorporate deep learning-based parking systems with various algorithms, but none of them has focused on reducing the complexity of Convolutional Neural Network (CNN) training through unsupervised pre-processing steps. In this study, an attempt is made to reduce the complexity of CNN training by integrating salient features as a pre-processing step for normalizing weather conditions. It has been observed that salient features-based pre-processing makes the system more robust in terms of accuracy and time efficiency.
Keywords: Deep Learning, parking status, PKLot dataset, CNRPark-EXT dataset, relative enhancement (RE), gradient dependent saliency (GDS)
DOI: 10.3233/IDT-230573
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Dong, Miao | Jiang, Weichang
Article Type: Research Article
Abstract: Tourism travel is a prevalent form of leisure and entertainment. This paper provides a brief overview of an attraction route planning algorithm that combines multi-source data to assess the effectiveness of a planned route and utilizes the particle swarm optimization (PSO) algorithm for path optimization. Genetic algorithm (GA) operations were incorporated into the PSO algorithm to enhance optimization performance. Simulation experiments then compared the GA-PSO algorithm with the standalone PSO algorithm and GA. Moreover, a comparative analysis was performed on the performance of the path planning algorithm using single-source and multi-source data. The results demonstrated that the GA-PSO algorithm exhibited the fastest convergence in the optimization search and achieved the best fitness value at stabilization. Among the three path schemes, the GA-PSO algorithm performed best, followed by the GA, while the PSO algorithm was the least optimal. Furthermore, path planning with multi-source data aligned better with tourists’ landscape preferences, enabling the construction of personalized routes.
Keywords: Multi-source data, tourism, route planning, particle swarm optimization
DOI: 10.3233/IDT-230741
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-8, 2024
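A GA-PSO hybrid of the kind the abstract describes can be sketched as a standard PSO velocity/position update augmented with a GA-style mutation on each particle. The objective (a toy sphere function standing in for a route-cost function), bounds, and hyperparameters below are assumptions for demonstration, not the paper's settings.

```python
# Illustrative GA-PSO hybrid: PSO update + random GA-style mutation.
import numpy as np

def ga_pso_minimize(f, dim, n=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))     # particle positions
    v = np.zeros((n, dim))               # particle velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()   # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # Standard PSO update: inertia + cognitive + social terms.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        # GA-style mutation: small random perturbation with 10% probability.
        mask = rng.random((n, dim)) < 0.1
        x = np.where(mask, x + rng.normal(0, 0.1, (n, dim)), x)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy "route cost": the sphere function, minimized at the origin.
best, best_cost = ga_pso_minimize(lambda p: float((p ** 2).sum()), dim=3)
```

In the paper's setting, `f` would instead score a candidate route against the multi-source attraction data.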
Authors: Aziz, Benjamin | Bukhelli, Aysha
Article Type: Research Article
Abstract: We identify a list of characteristics of a text document that we use as the basis for a study on human perception of what the structure of a document should look like. Our study reveals that the native language dimension has a significant impact on the human perception of some such characteristics, including the average number of paragraphs starting with a verb, as well as other characteristics describing average paragraph proportions. The study results also show what the mean values are, for the different document characteristics, and their distribution across the native language dimension, therefore providing some idea of what a normalised structure of a document should look like. The results of the study have a direct application in a new method for embedding secret messages in text documents that has been recently proposed by the authors and which uses manipulations in the paragraph layout of a document.
Keywords: Document structures, human perception, paragraph layout
DOI: 10.3233/IDT-240094
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-21, 2024
Authors: Huang, Xiaoyan
Article Type: Research Article
Abstract: This paper provides a concise overview of a grammatical error correction algorithm based on the encoder-decoder structure. The traditional unidirectional long short-term memory (LSTM) in the encoder was transformed into a bidirectional LSTM. Subsequently, the grammatical error correction algorithm was simulated and experimented with. In the experiments, it was compared with two other error correction algorithms: one based on the LSTM classification model and the other on the traditional unidirectional LSTM translation model. The results indicated that the three error correction algorithms exhibited little difference in detection performance when faced with distinct corpus databases. Furthermore, when dealing with the same corpus database, the bidirectional LSTM algorithm demonstrated the most robust detection performance, followed by the one based on the traditional unidirectional LSTM translation model, and lastly, the one based on the LSTM classification model performed the least effectively.
Keywords: English translation, translation error correction, grammatical error correction, long short-term memory
DOI: 10.3233/IDT-240111
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-7, 2024
Authors: Shelar, Avadhut | Moharir, Minal
Article Type: Research Article
Abstract: The complicated nature of legal texts, a lack of labeled data, concerns about fairness, and difficulties with interpretation are some of the challenges that judicial judgment prediction models encounter. The proposed approach seeks to overcome these challenges by using advanced deep learning techniques, such as deep Bidirectional Long Short-Term Memory (BiLSTM) networks to recognize complex linguistic patterns, and transfer learning to make more efficient use of data. Employing a deep BiLSTM classifier (TWO-BiLSTM) model based on Texas wolf optimization, the research aims to predict legal judgments. Judicial data are first collected and preprocessed for evaluation. Feature extraction involves statistical and Principal Component Analysis (PCA) techniques to generate an extensive feature set, and the model is trained on these features together with the preprocessed data. A hybrid Texas wolf optimization tactic, combining grey wolf optimization and Harris hawks optimization, is employed to boost performance. The model's ability to predict legal judgments accurately and effectively was demonstrated by testing it on different sets of judicial data. At a training percentage of 90 (TP 90), the model achieved an accuracy of 97.00%, with f-score, precision, and recall of 97.29, 97.10, and 97.19, respectively. Its effectiveness was further demonstrated in the 10-fold assessment, which exhibited 96.00% accuracy and robustness, with f-score, precision, and recall of 96.25, 96.89, and 95.96, respectively. These results demonstrate the model's effectiveness and dependability for providing accurate predictions.
Keywords: Texas wolf optimization, deep BiLSTM, judgment prediction, pre-processing, feature extraction
DOI: 10.3233/IDT-230566
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-20, 2024
Authors: Kumari, Ritika | Singh, Jaspreeti | Gosain, Anjana
Article Type: Research Article
Abstract: Parkinson’s disease (PD) is a neurodegenerative condition that affects the neurological, behavioral, and physiological systems of the brain. According to the most recent WHO data, 0.51 percent of all fatalities in India are caused by PD. It is widely recognized that about one million people in the United States suffer from PD, out of nearly five million people worldwide. Approximately 90% of Parkinson’s patients have speech difficulties. As a result, it is crucial to identify PD early so that appropriate treatment may be determined. For the early diagnosis of PD, we propose a Bagging-based hybrid (B-HPD) approach in this study. Seven classifiers, Random Forest (RF), Decision Tree (DT), Logistic Regression (LR), Naïve Bayes (NB), K-Nearest Neighbor (KNN), Random Under-sampling Boost (RUSBoost), and Support Vector Machine (SVM), are considered as base estimators for the Bagging ensemble method, and three oversampling techniques, Synthetic Minority Oversampling Technique (SMOTE), Adaptive Synthetic (ADASYN), and SVMSmote, are implemented in this work. Feature Selection (FS) is also used for data preprocessing and further performance enhancement. We obtained the (imbalanced) Parkinson's disease classification dataset from the Kaggle repository. Finally, using two performance measures, accuracy and area under the curve (AUC), we compare the performance of the model with all features and with selected features. Our study suggests that Bagging with RF as the base classifier shows the best performance in all cases (with all features: 754; with FS: 500; with the three oversampling techniques) and may be used for PD diagnosis in the healthcare industry.
Keywords: Ensemble methods, boosting, bagging, hybrid approaches, classification
DOI: 10.3233/IDT-230331
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
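The best-performing configuration reported above, a Bagging ensemble with Random Forest as the base estimator, can be sketched with scikit-learn. The breast cancer dataset stands in for the Kaggle Parkinson's data, and the hyperparameters are illustrative, not the paper's.

```python
# Hedged sketch: Bagging with a Random Forest base estimator, evaluated
# with the two measures the study uses (accuracy and AUC).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RF passed positionally as the Bagging base estimator, which works across
# scikit-learn versions that renamed base_estimator -> estimator.
bag = BaggingClassifier(RandomForestClassifier(n_estimators=25, random_state=0),
                        n_estimators=10, random_state=0)
bag.fit(X_tr, y_tr)

acc = bag.score(X_te, y_te)
auc = roc_auc_score(y_te, bag.predict_proba(X_te)[:, 1])
```

On imbalanced data, an oversampler such as SMOTE (e.g. from the `imbalanced-learn` package) would be applied to the training split before fitting.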
Authors: Khushiyant, | Mathur, Vidhu | Kumar, Sandeep | Shokeen, Vikrant
Article Type: Research Article
Abstract: The architecture of REEGNet (Recurrent EEG Network) combines convolutional blocks from EEGNet with long short-term memory (LSTM) layers in order to improve temporal modeling capabilities. Additionally, the use of depthwise convolutions enables REEGNet to learn spatial filters specific to distinct frequency information. When evaluated on the exceptionally challenging IMAGENET configuration of the MindBigData Brain Dataset, consisting of 70,060 samples across 5 channels sampled at 128 Hz, REEGNet outperformed state-of-the-art models including EEGNet and LSTM architectures across all class configurations ranging from 2 to 40 classes. Specifically, REEGNet surpasses EEGNet by a margin of 10.6% in multi-class classification scenarios such as configurations with 10 classes, achieving an accuracy of 27.9%. These results highlight REEGNet’s potential to increase the accessibility of electroencephalography (EEG) analysis by allowing for reliable classification without extensive computational resources. This makes REEGNet uniquely suited for deployment in resource-constrained real-world environments and applications.
Keywords: Healthcare, signal processing, EEG, AI, LSTM, BCI, neuroinformatics
DOI: 10.3233/IDT-230715
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Juneja, Shallu | Bhathal, Gurjit Singh | Sidhu, Brahmaleen K.
Article Type: Research Article
Abstract: Software fault prediction is a crucial task, especially with the rapid improvements in software technology and the increasing complexity of software, since identifying and addressing bugs early in the development process can significantly minimize costs and enhance software quality. Software fault prediction using machine learning algorithms has gained significant attention due to its potential to improve software quality and save time in the testing phase. This research paper investigates the impact of classification models on bug prediction performance and explores the use of bio-inspired optimization techniques to enhance model results. Through experiments, it is demonstrated that applying bio-inspired algorithms improves the accuracy of fault prediction models. The evaluation is based on multiple performance metrics, and the results show that KNN with Binary Ant Colony Optimization (BACO) generally outperforms the other models in terms of accuracy. The BACO-KNN fault prediction model attains an accuracy of 96.39%, surpassing previous work.
Keywords: Fault prediction, classifiers, bio-inspired optimization algorithms, binary ant colony optimization (BACO) performance metrics
DOI: 10.3233/IDT-230427
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-22, 2024
Authors: Nayak, Gouri Sankar | Mallick, Pradeep Kumar | Padhi, Neelmadhab | Mohanty, Manas Ranjan | Kumar, Sachin | Balaji, Prasanalakshmi
Article Type: Research Article
Abstract: In the field of brain MRI analysis, image segmentation serves various purposes such as quantifying and visualizing anatomical structures, analyzing brain changes, delineating pathological regions, and aiding in surgical planning and image-guided interventions. Over the past few decades, diverse segmentation techniques with varying degrees of accuracy and complexity have been developed. Real-world brain MRI images often exhibit intensity inhomogeneity, posing a significant challenge to accurate segmentation. The prevailing image segmentation algorithms, predominantly region-based, typically rely on the homogeneity of image intensities in specific regions of interest; however, these methods often fall short of providing precise segmentation results in the presence of intensity inhomogeneity. To address these challenges and enhance segmentation performance, this paper introduces a novel objective function named Fuzzy Entropy Clustering with Local Spatial Information and Bias Correction (FECSB). Additionally, we propose a novel hybrid algorithm that combines Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO) to maximize the effectiveness of the FECSB function in MRI brain image segmentation. The proposed algorithm undergoes rigorous evaluation using benchmark MRI brain images, including those from the McConnell Brain Imaging Centre (BrainWeb). The experimental results unequivocally demonstrate the superiority of the PSO-GWO clustering method over the traditional Fuzzy C-Means (FCM) method. Across various image slices, the PSO-GWO method consistently outperforms FCM in terms of accuracy, showing improvements ranging from 1.28% to 1.46% and achieving approximately 99.37% accuracy.
Keywords: BrainWeb, FECSB, GWO, INU, MRI, PSO, FCM
DOI: 10.3233/IDT-230773
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: Sharma, Sandeep | Singh, Manjinder | Mittal, Amit | Aggarwal, Arun
Article Type: Research Article
Abstract: Young people in India are becoming more accepting of entrepreneurship as a profession. While the creation, financing, and positive and negative outcomes of a start-up have all been widely researched, more research is needed to understand the entrepreneurial environment that supports, nurtures, and educates entrepreneurs. Higher education institutions are one element of this ecosystem that is essential in educating India's future business people. Therefore, this study is intended to establish the practice of entrepreneurship learning in higher educational institutions in India and to analyze students' perceptions of entrepreneurship education. Using proportional stratified random selection, 345 undergraduate students from India's top-ranked universities made up the overall sample for this study. Structural equation modeling (SEM) was used to test the study's hypotheses and to investigate the links between the variables. The results showed that government regulation, socioeconomic circumstances, entrepreneurial orientation, monetary assistance, and entrepreneurial confidence favorably impact entrepreneurial motivation. The findings of this study will aid in the modification and customization of entrepreneurial programs to satisfy the needs of different organizations and geographical regions.
Keywords: Entrepreneurship education, higher education institutions, startup-business, students’ perception, entrepreneurial motivation
DOI: 10.3233/IDT-230726
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-13, 2024
Authors: Wang, Honggang | Ji, Xin | Zhao, Xiaolong | He, Yude | Yu, Ting
Article Type: Research Article
Abstract: In addressing the challenges of scattered data and limited professional knowledge in traditional power data quality assessment and verification governance, our approach leveraged natural language processing (NLP) technology for text preprocessing, incorporating power system research findings and case analyses. Named entity recognition enhanced entity identification accuracy, while relationship weighting technology facilitated entity classification and relationship weight assignment. The resulting power system knowledge graph was seamlessly integrated into a graph database for real-time updates. Through the synergistic use of relationship weighting technology and graph convolutional networks (GCN), our method achieved precise representation and modeling of power system knowledge graphs. The research outcomes underscored the method's exceptional performance in real-time anomaly detection, maintaining an anomaly detection rate between 1% and 10% and accuracy fluctuating from 80% to 98%. This method stands as a testament to its efficacy in processing power data and validating its robustness in power system assessment and verification.
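The GCN component mentioned in the abstract reduces to repeated neighbourhood aggregation followed by a linear transform. Below is a single propagation step in plain Python, using mean aggregation with self-loops (a simplification of the usual symmetric normalisation, and not the paper's model); `adj`, `H`, and `W` are illustrative names.

```python
# One graph-convolution step: aggregate each node's neighbourhood (plus a
# self-loop), then apply a shared linear transform W.
def gcn_layer(adj, H, W):
    n = len(adj)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]] + [i]   # A + I
        # Mean of neighbour feature vectors.
        agg = [sum(H[j][d] for j in neigh) / len(neigh)
               for d in range(len(H[0]))]
        # Linear transform: out_row = agg @ W.
        out.append([sum(agg[d] * W[d][k] for d in range(len(W)))
                    for k in range(len(W[0]))])
    return out

# Two nodes connected by one edge, one-hot features, identity weights:
H2 = gcn_layer([[0, 1], [1, 0]],
               [[1.0, 0.0], [0.0, 1.0]],
               [[1.0, 0.0], [0.0, 1.0]])   # → [[0.5, 0.5], [0.5, 0.5]]
```

Each node's output blends its own features with its neighbours', which is what lets the model propagate relationship weights across the knowledge graph.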
Keywords: Knowledge graph, quality assessment, verification governance, power data, natural language processing, graph convolutional networks
DOI: 10.3233/IDT-240054
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
Authors: Jagdale, Balaso | Sugave, Shounak Rushikesh | Kulkarni, Yogesh R. | Gutte, Vitthal
Article Type: Research Article
Abstract: Nowadays, the Internet of Things (IoT) is widely used in many applications, including healthcare to monitor the health condition of patients. However, it faces privacy and security issues due to the massive growth of hacking systems that provide illegal access to confidential health information. Blockchain (BC) is applied in IoT healthcare systems to manage healthcare data securely by using transparency features. In this research, a novel CAViaR Jellyfish Swarm Optimization enabled Quantum Convolutional Neural Network (CJSO-QCNN) is developed for the removal of noise in a privacy-aware BC-based IoT healthcare system. The CJSO is the combination of Conditional Autoregressive Value at Risk (CAViaR) and Jellyfish Search Optimizer (JSO). The data privacy is ensured according to the user preference by the service provider. The data is classified initially for identifying the sensitive data and is allowed for the treatment of noise, which is then stored in BC. Later, the user accesses the data by removing the noise using the CJSO-QCNN model. The valid data credentials are stored at the service provider according to user preferences. In addition, the superiority of the designed model is computed by comparing the performance with other prevailing approaches. The experimental results revealed that the CJSO-QCNN attained a maximum accuracy of 88.79%, a True Positive Rate (TPR) of 88.20%, and a True Negative Rate (TNR) of 88.61%.
Keywords: Quantum convolutional neural network, conditional autoregressive value at risk, jellyfish search optimizer, consensus unit, blockchain
DOI: 10.3233/IDT-230386
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: Zhang, Qin | Wu, Han | Ma, Chi | Wang, Yuebin | Zheng, Xiangyang
Article Type: Research Article
Abstract: In traditional research, monitoring data and samples are limited, and it is difficult to achieve ideal results in real-time monitoring and rapid response to environmental risks. By leveraging extensive environmental data gathered from nuclear power plants, the research employed machine learning methodologies for accurate feature selection and extraction of environmental parameters. An efficient environmental risk assessment model was successfully established by using a random forest algorithm. The 95% confidence interval for the area under the curve value spanned from 0.6894 to 0.9292. This provided a more dynamic and effective means for assessing and managing the environmental risks of nuclear power plants.
Keywords: Environmental risk assessment model, monitoring data and samples, big data analysis, nuclear power plants environmental, machine learning
DOI: 10.3233/IDT-240041
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-11, 2024
Authors: Gaba, Priyanka | Panwar, Arvind | Sugandh, Urvashi | Pathak, Nitish | Sharma, Neelam
Article Type: Research Article
Abstract: The use of electric vehicles has raised the need for effective charging infrastructure. Wait periods at charging stations are a problem for both owners and operators of electric vehicles: owners are irritated by lengthy wait periods, and charging facilities are underutilised. Wait times at electric car charging stations are decreased using "OptiCharge," a Firefly Algorithm-based solution. Scheduling charging stations makes sense given the Firefly Algorithm's ability to adjust to changing conditions and solve challenging problems. In the present paper, we incorporate dynamic scheduling and waiting time computation mathematically into OptiCharge. Extensive testing and comparative analysis with different optimisation techniques demonstrate that OptiCharge decreases waiting times and enhances charging station performance. The results show how OptiCharge may enhance EV charging and promote intelligent, sustainable transportation.
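The core firefly mechanic, dimmer fireflies drifting toward brighter ones, can be sketched against a toy version of the waiting-time objective. This is illustrative only, not OptiCharge: the fixed per-vehicle service time, the FIFO queue model, and the attraction/noise coefficients are all assumptions.

```python
# Toy firefly search: assign EVs to stations to minimise total FIFO wait.
import random

SERVICE = 30  # assumed minutes to charge one vehicle

def total_wait(assignment, n_stations):
    # FIFO queue per station: the i-th arrival at a station waits i*SERVICE.
    wait, counts = 0, [0] * n_stations
    for s in assignment:
        wait += counts[s] * SERVICE
        counts[s] += 1
    return wait

def decode(x, n_stations):
    # Continuous position -> station index per EV.
    return [min(n_stations - 1, max(0, int(round(v)))) for v in x]

def firefly(n_evs=6, n_stations=3, n_flies=15, iters=60, seed=1):
    rng = random.Random(seed)
    flies = [[rng.uniform(0, n_stations - 1) for _ in range(n_evs)]
             for _ in range(n_flies)]
    cost = [total_wait(decode(f, n_stations), n_stations) for f in flies]
    for _ in range(iters):
        for i in range(n_flies):
            for j in range(n_flies):
                if cost[j] < cost[i]:          # j is "brighter" (cheaper)
                    for d in range(n_evs):     # move i toward j, plus noise
                        flies[i][d] += (0.6 * (flies[j][d] - flies[i][d])
                                        + 0.1 * (rng.random() - 0.5))
                    cost[i] = total_wait(decode(flies[i], n_stations),
                                         n_stations)
    best = min(range(n_flies), key=lambda i: cost[i])
    return decode(flies[best], n_stations), cost[best]

assignment, wait = firefly()   # balanced assignments approach 90 minutes total
```

A real scheduler would add distance-dependent attractiveness and dynamic arrivals; the optimum here (two EVs per station) gives a total wait of 90 minutes.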
Keywords: Electric Vehicle (EVs), Firefly Algorithm (FA), OptiCharge, real-time charging demands, electric mobility, resource allocation
DOI: 10.3233/IDT-230619
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
Authors: Guo, Qianzi | Mu, Lin | Lou, Shuai
Article Type: Research Article
Abstract: This study delves into the intricacies of booking behaviors, focusing on the preference for tropical coastal cities during colder seasons, given the burgeoning need for personalized and intelligent travel recommendations in the modern tourism landscape. Traditional booking systems, with their static algorithms, have often neglected the dynamic interplay of environmental factors and user preferences, necessitating an evolution towards more sophisticated, adaptive models. To address this, the research employed advanced machine learning and artificial intelligence techniques, amalgamating data from diverse sources like major tourism websites, hotel platforms, and social media. The methodology involved meticulous data preprocessing to ensure data quality and depth, followed by the implementation of deep learning models and time series analysis, which were instrumental in deciphering complex patterns and offering predictive insights. This comprehensive approach facilitated the integration of various environmental factors such as weather conditions and temporal events, enabling the system to craft hyper-personalized recommendations that resonate with user preferences and environmental nuances. The findings were illuminating, unveiling a significant inverse relationship between temperature and bookings and highlighting the influential role of temporal events in driving bookings. The advanced predictive models showcased commendable accuracy, underscoring the transformative potential of such intelligent systems in enhancing user experiences and engagement in the tourism sector. These insights pave the way for the development of innovative strategies that can significantly elevate user satisfaction and industry growth. The study concludes that the integration of machine learning and AI in booking systems can revolutionize the tourism industry by offering unprecedented levels of personalization and adaptability, catering to the diverse and evolving needs of modern travelers.
Keywords: Intelligent booking systems, machine learning, user preferences, environmental factors, travel behavior, personalization, deep learning, tourism industry
DOI: 10.3233/IDT-230625
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: D, Vishnu Sakthi | V, Valarmathi | V, Surya | A, Karthikeyan | E, Malathi
Article Type: Research Article
Abstract: The current state of economic and social ideas and the advancement of cutting-edge technology are shaped by the primary subject of the contemporary information era: big data. People are immersed in a world of information, guided by the abundance of data that penetrates every element of their surroundings. Smart gadgets, the IoT, and other technologies are responsible for the data's explosive expansion. Organisations have struggled to store data effectively over the past few decades, a disadvantage related to outdated, expensive, and insufficiently large storage technology. Meanwhile, big data demands innovative storage techniques supported by strong technology. This paper proposes a big data clustering and classification model with an improved fuzzy-based deep architecture under the MapReduce framework. First, the pre-processing phase involves data partitioning from the big dataset utilizing an improved C-means clustering procedure. The pre-processed big data is then handled by the MapReduce framework, which involves the mapper and reducer phases. In the mapper phase, data normalization takes place, followed by a feature fusion approach that combines extracted features such as entropy-based and correlation-based features. In the reducer phase, the outputs of all mappers are combined to produce an acceptable feature set. Finally, a deep hybrid model, a combination of a DCNN and a Bi-GRU, is used for the classification process. An improved score-level fusion procedure is used to obtain the final classification result. Moreover, analysis of the proposed work has proved it efficient in terms of classification accuracy, precision, recall, FNR, FPR, and other performance metrics.
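The partitioning step rests on fuzzy C-means; the paper's improved variant is not specified here, so the sketch below is only the textbook update loop, in 1-D and pure Python: memberships from inverse relative distances, then fuzzily weighted centre means.

```python
# Textbook fuzzy C-means in 1-D (generic sketch, not the paper's improved
# variant or its MapReduce partitioning).
def fcm(data, k=2, m=2.0, iters=50):
    lo, hi = min(data), max(data)
    # Spread initial centres evenly across the data range (k >= 2 assumed).
    centres = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Membership u[i][j] = 1 / sum_l (d_ij / d_il)^(2/(m-1)).
        u = []
        for x in data:
            dists = [abs(x - c) + 1e-9 for c in centres]  # avoid div by zero
            u.append([1.0 / sum((dists[j] / d) ** (2 / (m - 1))
                                for d in dists)
                      for j in range(k)])
        # Centre update: fuzzily weighted mean of all points.
        centres = [
            sum((u[i][j] ** m) * data[i] for i in range(len(data)))
            / sum(u[i][j] ** m for i in range(len(data)))
            for j in range(k)]
    return sorted(centres)

centres = fcm([1.0, 1.5, 2.0, 8.0, 8.5, 9.0])   # ≈ [1.5, 8.5]
```

In the MapReduce setting described, each mapper would run updates over its data partition and the reducer would merge the weighted sums before the next iteration.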
Keywords: Map reduce framework, improved feature fusion, normalization, fuzzy C-means clustering
DOI: 10.3233/IDT-230537
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-30, 2024
Authors: K M M, Parvathraj | B K, Anoop
Article Type: Research Article
Abstract: In today’s digital era, the security of sensitive data, particularly in the realm of multimedia, is of paramount importance. Image encryption serves as a vital shield against unauthorized access and ensures the confidentiality and integrity of visual information. As such, the continuous pursuit of robust and efficient encryption techniques remains a pressing concern. This research introduces a Temper Wolf Hunt Optimization enabled Generative Adversarial Network Encryption model (TWHO-GAN), designed to address the challenges of image encryption in the modern digital landscape. TWHO, inspired by the collective hunting behavior of wolf and coyote packs, is employed to generate highly secure encryption keys. This algorithm excels in exploring complex solution spaces, creating robust, attack-resistant keys. In the TWHO-GAN model, GANs are employed to create encrypted images that are virtually indistinguishable from their original counterparts, adding a layer of security by generating complex encryption keys and ensuring robust protection against attacks. The GAN component reconstructs the encrypted images to their original form when decrypted with the correct keys, ensuring data integrity while maintaining confidentiality. Further, the significance of the proposed model relies on the TWHO algorithm, formulated by integrating adaptability and coordinated hunting strategies to optimize chaotic map generation in image encryption, protecting sensitive visual information from unauthorized access and potential threats. Through extensive experimentation and comparative analysis, TWHO-GAN demonstrates superior performance in image encryption, surpassing former methods in terms of Cs, HisC, MSE, PSNR, RMSE, and SSIM, attaining values of 0.93, 94.19, 3.274, 59.70 dB, 1.8095, and 0.940, respectively, for 5 images. Moreover, the TWHO-GAN approach attained values of 0.91, 92.22, 2.03, 49.74 dB, 1.42, and 0.88 for Cs, HisC, MSE, PSNR, RMSE, and SSIM, respectively, on the Airplanes dataset. The model exhibits robust resistance to various attacks, making it a compelling choice for secure image transmission and storage.
Keywords: Image encryption, temper wolf hunt optimization, generative adversarial network, encrypted image
DOI: 10.3233/IDT-230547
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-26, 2024
Authors: Al-Obaidi, Sumia Abdulhussien Razooqi | Lighvan, Mina Zolfy | Asadpour, Mohammad
Article Type: Research Article
Abstract: With the surging prominence of digital communication platforms, there has been an intensified emphasis on ensuring robust security and privacy measures. Against this backdrop, image steganalysis has emerged as a critical discipline, employing advanced methods to detect clandestine data within image files. At the core of our research is an innovative exploration of image steganalysis using an amalgamation of enhanced reinforcement learning techniques and online data augmentation. This methodology ensures the meticulous identification of concealed data within images. Our design integrates triple parallel dilated convolutions, enabling concurrent extraction of feature vectors from the input images. Once extracted, these vectors are synthesized, paving the way for subsequent classification tasks. To substantiate the efficacy of our approach, we conducted tests on a comprehensive dataset sourced from BossBase 1.01. Furthermore, to discern the influence of transfer learning on our proposed model, the BOWS dataset was employed. Notably, these datasets present a challenge due to their inherent imbalance. To counteract this, we incorporated an advanced Reinforcement Learning (RL) framework. Herein, the dataset samples are envisioned as states in a sequence of interrelated decisions, with the neural network playing the role of the decision-making agent. This agent is then incentivized or reprimanded based on its accuracy in discerning between the minority and majority classes. To bolster our classification capabilities, we innovatively employed data augmentation using images generated by a Generative Adversarial Network (GAN). Concurrently, a regularization mechanism was instituted to alleviate prevalent GAN-related challenges, such as mode collapse and unstable training dynamics. Our experimental outcomes underscore the potency of our methodology. The results highlight a remarkable capability to discern between pristine and steganographic images, registering an average accuracy rate of 85%.
Keywords: Image steganalysis, reinforcement learning, generative adversarial network, imbalanced classification, transfer learning
DOI: 10.3233/IDT-240075
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-24, 2024
Authors: Gkioka, Georgia | Dominguez, Monica | Mentzas, Gregoris
Article Type: Research Article
Abstract: In the realm of modern urban mobility, automatic incident detection is a critical element of intelligent transportation systems (ITS), since the ability to promptly identify unexpected events allows for quick implementation of preventive measures and efficient response to situations as they arise. With the growing availability of traffic data, Machine Learning (ML) has become a vital tool for enhancing traditional incident detection methods. Automated machine learning (AutoML) techniques present a promising solution by streamlining the machine-learning process; however, the application of AutoML to incident detection has not been widely explored in scientific research. In this paper, we propose and apply an AutoML-based methodology for traffic incident detection and compare it with state-of-the-art ML approaches. Our approach integrates data preprocessing with AutoML and uses the Tree-based Pipeline Optimization Tool (TPOT) to refine the process from raw data to prediction. We have tested the efficiency of our approach in two major European cities, Athens and Antwerp. Finally, we present the limitations of our work and outline recommendations for the application of AutoML to the incident detection task and potentially to other domains.
Keywords: Incident detection, automated machine learning (AutoML), artificial intelligence (AI), intelligent transport systems (ITS), big data
DOI: 10.3233/IDT-240231
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-22, 2024
Authors: Vasantham, Vijay Kumar | Donavalli, Haritha
Article Type: Research Article
Abstract: The new and rising paradigm of cloud computing offers customers various possibilities for task computation based on their desires and choices. Customers receive services from cloud computing systems as a utility, and they are keen on low-cost service availability and minimal task completion times. To achieve client satisfaction when the cloud server receives many user requests, the service provider must schedule the jobs to the right resources. The rapid growth in data volume necessitates processing petabytes of data each day; unstructured, semi-structured, and structured data are all growing rapidly in volume and availability, and must be processed appropriately to support correct and timely decisions. In this research, we present BWUJS (Black Widow Updated Jellyfish Search), a multi-objective hybrid optimization-based task scheduling algorithm. This work considers task generation from the big data perspective. The clustering of tasks is performed via the MapReduce framework with an improved K-means clustering model. After task clustering, task priority estimation is performed. Finally, scheduling is performed via BWUJS based on constraints such as priority, makespan, completion time, resource utilization, and degree of imbalance.
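The makespan objective listed in the abstract can be made concrete with a simple baseline scheduler (not BWUJS itself): greedy longest-processing-time (LPT) assignment, which always hands the next-longest task to the currently least-loaded VM.

```python
# Greedy LPT baseline for the makespan objective (illustrative; the paper's
# BWUJS metaheuristic would search assignments under several constraints).
import heapq

def lpt_schedule(task_times, n_vms):
    # Min-heap of (current load, vm id): pop = least-loaded VM.
    heap = [(0.0, vm) for vm in range(n_vms)]
    heapq.heapify(heap)
    placement = {}
    # Visit tasks in decreasing duration, assigning each to the lightest VM.
    for t, dur in sorted(enumerate(task_times), key=lambda p: -p[1]):
        load, vm = heapq.heappop(heap)
        placement[t] = vm
        heapq.heappush(heap, (load + dur, vm))
    makespan = max(load for load, _ in heap)
    return placement, makespan

placement, makespan = lpt_schedule([4, 3, 3, 2, 2, 2], 2)  # makespan → 8.0
```

A metaheuristic like BWUJS would replace the greedy rule with a population search, scoring each candidate assignment on makespan plus the other constraints (priority, completion time, utilization, imbalance).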
Keywords: Task scheduling, big data, BWUJS, map reduce, cloud computing
DOI: 10.3233/IDT-230717
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
Authors: T. Bhuvan, Nikhila | G, Jisha | N V, Shamna
Article Type: Research Article
Abstract: Generally, multiple-choice questions are an effective and widely used format in standardized tests for evaluating a learner’s skills and knowledge. Nonetheless, composing multiple-choice questions, particularly constructing the distractors, is quite difficult. Distractors need to be plausible yet incorrect, sufficient to confuse learners who have not mastered the material. Automatic distractor generation is therefore important and can support standardized tests across a wide range of domains. In this research, a question-answer generation system with a distractor model is developed by optimizing a T5 model. First, BERT tokenization is used to pre-process the passage/context and question, which are given as input to train the approach. Then, question and answer generation is performed using the T5 model, trained with the proposed Serial Exponential Slime Mould Algorithm (SExpSMA): the exponential weighted moving average is extended to a serial exponential weighted moving average and incorporated into the Slime Mould Algorithm (SMA). In addition, the proposed SExpSMA-based T5 model is employed to generate distractors for the questions. Experimental analysis shows that the proposed SExpSMA-based T5 model achieves better outcomes on metrics such as ROUGE, BLEU, and METEOR, with values of 0.919, 0.918, and 0.488, respectively.
Keywords: BERT, distractor generators, optimization, question and answering, T5 model
DOI: 10.3233/IDT-230629
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
Authors: Shaik, Allabaksh | Basha, Shaik Mahaboob
Article Type: Research Article
Abstract: Anomaly detection is a branch of behavior understanding in surveillance scenes, where anomalies represent a deviation in the behavior of scene entities (viz., humans, vehicles, and environment) from regular patterns. In pedestrian walkways, this plays a vital role in enhancing safety. With the widespread use of video surveillance systems and the escalating video volume, manual examination of abnormal events becomes time-intensive. Hence, an automated surveillance system adept at anomaly detection is crucial, especially within the realm of computer vision (CV) research. The surge in interest in deep learning (DL) algorithms has significantly impacted CV techniques, including object detection and classification. Unlike the traditional reliance on supervised learning requiring labeled datasets, DL offers advancements in these applications. Thus, this study presents an Optimal Deep Transfer Learning Enabled Object Detector for Anomaly Recognition in Pedestrian Ways (ODTLOD-ARPW) technique. The purpose of the ODTLOD-ARPW method is to recognize the occurrence of anomalies in pedestrian walkways using a DL-based object detector. In the ODTLOD-ARPW technique, image pre-processing initially takes place using two sub-processes, namely Wiener filtering (WF) based pre-processing and dynamic histogram equalization-based contrast enhancement. For anomaly detection, the ODTLOD-ARPW technique employs the YOLOv8s model, which offers enhanced accuracy and performance. Hyperparameter tuning takes place using a root mean square propagation (RMSProp) optimizer. The performance of the ODTLOD-ARPW method is tested on the UCSD anomaly detection dataset. An extensive comparative study reported that the ODTLOD-ARPW technique achieves effective performance relative to other models, with a maximum accuracy of 98.67%.
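The RMSProp rule used for tuning follows a simple recipe: keep an exponential moving average of squared gradients and scale each step by its root. A 1-D illustration on a quadratic (generic, unrelated to the YOLOv8 training code; the learning rate and decay are assumed values):

```python
# RMSProp on f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
def rmsprop_minimise(grad, x0, lr=0.01, beta=0.9, eps=1e-8, steps=2000):
    x, s = x0, 0.0
    for _ in range(steps):
        g = grad(x)
        s = beta * s + (1 - beta) * g * g   # running mean of squared grads
        x -= lr * g / (s ** 0.5 + eps)      # step scaled by RMS of gradients
    return x

x_min = rmsprop_minimise(lambda x: 2 * (x - 3), x0=0.0)   # ≈ 3
```

The per-parameter scaling keeps step sizes comparable across dimensions with very different gradient magnitudes, which is why it is a common choice for tuning deep detectors.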
Keywords: Anomaly detection, pedestrian ways, hyperparameter tuning, deep learning, wiener filtering
DOI: 10.3233/IDT-240040
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-16, 2024
Authors: Linardatos, Pantelis | Papastefanopoulos, Vasilis | Kotsiantis, Sotiris
Article Type: Research Article
Abstract: Time series forecasting is the process of predicting future values of a time series based on its historical data patterns. It is a critical task in many domains, including finance, supply chain management, the environment, and more as accurate forecasts can help businesses and organizations make better decisions and improve their metrics. Although there have been significant advances in time series forecasting systems, thanks to the development of new machine learning algorithms, hardware improvements, and the increasing availability of data, it remains a challenging task. Common pitfalls, especially of single-model approaches include susceptibility to noise and outliers and inability to handle non-stationary data, which can lead to inaccurate and non-robust forecasts. Model-combining approaches, such as averaging the results of multiple predictors to produce a final forecast, are commonly used to mitigate such issues. This work introduces a novel application of Cascade Generalization or Cascading for time series forecasting, where multiple predictors are used sequentially, with each predictor’s output serving as additional input for the next. This methodology aims to overcome the limitations of single-model forecasts and traditional ensembles by incorporating a progressive learning mechanism. We adapt Cascade Generalization specifically for time series data, detailing its implementation and potential for handling complex, dynamic datasets. Our approach was systematically evaluated against traditional two-model averaging ensembles across ten diverse datasets, employing the Root Mean Square Error (RMSE) metric for performance assessment. The results revealed that cascading tends to outperform voting ensembles in most cases. This consistent trend suggests that cascading can be considered a reliable alternative to voting ensembles, showcasing its potential as an effective strategy for improving time series forecasting across a wide range of scenarios.
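The cascading idea, each predictor's output feeding the next as extra input, can be shown with a two-level toy: a naive last-value forecaster whose in-sample output a least-squares correction is then fitted on. This is a sketch of the mechanism, not the authors' pipeline, and it assumes the series is not constant.

```python
# Two-level cascade: level-0 naive forecast, level-1 OLS correction fitted
# on level-0's output (the cascading mechanism in miniature).
def cascade_forecast(series):
    # Level 0: forecast for series[1:] is simply the previous value.
    p0 = series[:-1]
    y = series[1:]
    # Level 1: fit y ≈ a + b * p0 by ordinary least squares (one feature).
    n = len(p0)
    mx, my = sum(p0) / n, sum(y) / n
    b = (sum((x - mx) * (t - my) for x, t in zip(p0, y))
         / sum((x - mx) ** 2 for x in p0))
    a = my - b * mx
    # Next-step forecast: correct the naive forecast of the last value.
    return a + b * series[-1]

nxt = cascade_forecast([1, 2, 3, 4, 5])   # → 6.0
```

On a linear trend, the level-1 correction learns exactly the +1 drift that the naive forecaster misses; in the paper the two levels would be full ML regressors and the cascade can be deeper.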
Keywords: Time series, forecasting, machine learning, cascade generalization, stacking generalization, stacking, cascade, cascading, ensemble, regressor
DOI: 10.3233/IDT-240224
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-18, 2024
Authors: Zhang, Guangtao
Article Type: Research Article
Abstract: To address low computing efficiency in big data analysis and model construction, this paper explores the big data analysis programming model, the DAG (Directed Acyclic Graph), and related topics, and on this basis adopts Octopus, a distributed matrix computing system, for big data analysis. Octopus is a universal matrix programming framework that provides a programming model based on matrix operations and can conveniently analyze and process large-scale data. With Octopus, users can draw functions and data from multiple platforms and operate through a unified matrix operation interface. The distributed matrix representation and storage layer can design data storage formats for distributed file systems. Each computing platform underlying OctMatrix provides its own matrix library, and a matrix library written in the R language is provided for users. SymbolMatrix provides a matrix interface consistent with OctMatrix; however, it also retains the flow diagram of matrix operations during processing and supports logical and physical optimization of the flow diagram as a DAG. For the DAG computational flow graph generated by SymbolMatrix, this paper divides the optimization into two parts: logical optimization and physical optimization. The paper adopts a distributed file system based on row matrices, obtaining the corresponding platform matrix by reading row-matrix-based documents. In the performance evaluation, the distributed matrix computing system showed high computing efficiency, with average CPU (central processing unit) usage reaching 70%. The system can make full use of computing resources and realize efficient parallel computing.
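The DAG computational flow graph described here can be pictured as a dependency map whose nodes are matrix operations, evaluated in topological order. The names below are illustrative, not Octopus/OctMatrix APIs:

```python
# Toy DAG evaluation of a matrix-operation flow graph.
from graphlib import TopologicalSorter

def matmul(a, b):
    # Naive dense matrix product.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def evaluate(dag, ops, inputs):
    # dag:    node -> ordered list of predecessor nodes
    # ops:    node -> function applied to the predecessors' values
    # inputs: leaf node -> concrete matrix
    values = dict(inputs)
    for node in TopologicalSorter(dag).static_order():
        if node in ops:
            values[node] = ops[node](*[values[p] for p in dag[node]])
    return values

A = [[1, 2], [3, 4]]
I2 = [[1, 0], [0, 1]]
vals = evaluate({"C": ["A", "B"]}, {"C": matmul}, {"A": A, "B": I2})
# vals["C"] == A, since C = A @ I
```

Logical optimization in such a system amounts to rewriting this graph before execution (e.g. eliminating identity multiplications), while physical optimization chooses how each node's operation is partitioned across the cluster.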
Keywords: Big data analysis, distributed matrix computing system, data management, matrix segmentation, historical data
DOI: 10.3233/IDT-230309
Citation: Intelligent Decision Technologies, vol. Pre-press, no. Pre-press, pp. 1-17, 2024
IOS Press, Inc.
6751 Tepper Drive
Clifton, VA 20124
USA
Tel: +1 703 830 6300
Fax: +1 703 830 2300
[email protected]
For editorial issues, like the status of your submitted paper or proposals, write to [email protected]
IOS Press
Nieuwe Hemweg 6B
1013 BG Amsterdam
The Netherlands
Tel: +31 20 688 3355
Fax: +31 20 687 0091
[email protected]
For editorial issues, permissions, book requests, submissions and proceedings, contact the Amsterdam office [email protected]
Inspirees International (China Office)
Ciyunsi Beili 207(CapitaLand), Bld 1, 7-901
100025, Beijing
China
Free service line: 400 661 8717
Fax: +86 10 8446 7947
[email protected]