WRANet: wavelet integrated residual attention U-Net network for medical image segmentation
Medical image segmentation is crucial for the diagnosis and analysis of disease. Deep convolutional neural network (CNN) methods have achieved great success in medical image segmentation, but they are highly susceptible to noise: weak perturbations propagating through the network can dramatically alter its output. Moreover, as networks deepen they can suffer from exploding and vanishing gradients. To improve robustness and segmentation performance, we propose a wavelet residual attention network (WRANet). We replace the standard downsampling modules in CNNs (e.g., max pooling and average pooling) with the discrete wavelet transform, decompose features into low- and high-frequency components, and discard the high-frequency components to suppress noise. The resulting feature loss is addressed by introducing an attention mechanism. Experimental results show that our method performs aneurysm segmentation effectively, achieving a Dice score of 78.99%, an IoU of 68.96%, a precision of 85.21%, and a sensitivity of 80.98%; in polyp segmentation, it achieves a Dice score of 88.89%, an IoU of 81.74%, a precision of 91.32%, and a sensitivity of 91.07%. Comparisons with state-of-the-art techniques further demonstrate the competitiveness of WRANet.
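A minimal sketch of the core downsampling idea, assuming a single-level 2D Haar transform (WRANet's exact wavelet choice and layer layout are not specified here): each 2x2 neighborhood is decomposed and only the low-frequency (LL) sub-band is kept, in place of max/average pooling.

```python
# Hedged sketch: DWT-based downsampling that keeps only the Haar LL band.
# The class name and usage are illustrative, not the authors' code.
import torch
import torch.nn as nn

class HaarDownsample(nn.Module):
    """Downsample by 2x via a single-level Haar DWT, discarding high frequencies."""
    def forward(self, x):  # x: (N, C, H, W), H and W even
        a = x[..., 0::2, 0::2]  # top-left of each 2x2 block
        b = x[..., 1::2, 0::2]  # bottom-left
        c = x[..., 0::2, 1::2]  # top-right
        d = x[..., 1::2, 1::2]  # bottom-right
        # Orthonormal Haar: LL = (a+b+c+d)/2; LH/HL/HH bands are dropped as noise.
        return (a + b + c + d) / 2.0

x = torch.randn(1, 64, 128, 128)
print(HaarDownsample()(x).shape)  # torch.Size([1, 64, 64, 64])
```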
Selection of suitable biomass conservation process techniques: a versatile approach to normal wiggly interval-valued hesitant fuzzy set using multi-criteria decision making
A country that relies on developing industrialization and GDP requires a great deal of energy. Biomass is emerging as one of the promising renewable energy resources that may be used to generate energy: through the proper channels, such as chemical, biochemical, and thermochemical processes, it can be converted into electricity. In the context of India, the potential sources of biomass can be broken down into agricultural waste, tanning waste, sewage, vegetable waste, food and meat waste, and liquor waste. Each form of biomass energy so extracted has advantages and drawbacks, so determining which one is best is crucial to reaping the most benefits. The selection of biomass conversion methods is especially significant since it requires careful study of multiple factors, which can be aided by fuzzy multi-criteria decision-making (MCDM) models. This paper proposes the normal wiggly interval-valued hesitant fuzzy-based Decision-Making Trial and Evaluation Laboratory (DEMATEL) model and the Preference Ranking Organization METHod for Enrichment of Evaluations II (PROMETHEE II) for assessing the problem of determining a workable biomass production technique. The proposed framework is used to assess the production processes under consideration based on parameters such as fuel cost, technical cost, environmental safety, and emission levels. Bioethanol emerges as a viable industrial option due to its low carbon footprint and environmental viability. Furthermore, the superiority of the suggested model is demonstrated by comparing its results to other current methodologies. According to the comparative study, the suggested framework can be extended to handle complex scenarios with many variables.
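The PROMETHEE part of the pipeline can be illustrated with a crisp (non-fuzzy) PROMETHEE II ranking; the normal wiggly interval-valued hesitant fuzzy extension and the DEMATEL weighting are not reproduced, and the alternatives, weights, and scores below are invented for illustration.

```python
# Crisp PROMETHEE II sketch: net outranking flows with the usual (strict)
# preference function. Data and weights are made-up placeholders.
import numpy as np

def promethee_ii(X, weights, maximize):
    """X: (m alternatives x n criteria). Returns net flows (higher = better)."""
    m, n = X.shape
    phi = np.zeros(m)
    for j in range(n):
        col = X[:, j] if maximize[j] else -X[:, j]
        d = col[:, None] - col[None, :]           # pairwise differences
        P = (d > 0).astype(float)                 # usual criterion: 1 if strictly better
        phi += weights[j] * (P.sum(axis=1) - P.sum(axis=0)) / (m - 1)
    return phi

X = np.array([[0.6, 0.3, 0.8],   # e.g., bioethanol
              [0.5, 0.4, 0.6],   # biogas
              [0.7, 0.6, 0.5]])  # biodiesel
phi = promethee_ii(X, weights=np.array([0.5, 0.3, 0.2]),
                   maximize=[True, False, True])
print(np.argsort(-phi))          # ranking by net flow
```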
A biologically inspired decision-making system for the autonomous adaptive behavior of social robots
The decisions made by social robots while they fulfill their tasks strongly influence their performance. In these contexts, autonomous social robots must exhibit adaptive and social behavior to make appropriate decisions and operate correctly in complex and dynamic scenarios. This paper presents a Decision-Making System for social robots working in long-term interactions such as cognitive stimulation or entertainment. The Decision-Making System employs the robot's sensors, user information, and a biologically inspired module to replicate how human behavior emerges in the robot. Besides, the system personalizes the interaction to maintain the users' engagement while adapting to their features and preferences, overcoming possible interaction limitations. The system was evaluated in terms of usability, performance metrics, and user perceptions. We used the Mini social robot as the device on which we integrated the architecture and carried out the experimentation. The usability evaluation consisted of 30 participants interacting with the autonomous robot in 30-minute sessions. Then, 19 participants evaluated their perceptions of the robot's attributes using the Godspeed questionnaire after playing with the robot in 30-minute sessions. The participants rated the Decision-Making System with excellent usability (81.08 out of 100 points), perceiving the robot as intelligent (4.28 out of 5), animated (4.07 out of 5), and likable (4.16 out of 5). However, they also rated Mini as somewhat unsafe (perceived safety of 3.15 out of 5), probably because users could not influence the robot's decisions.
Fake news detection based on a hybrid BERT and LightGBM models
With the rapid growth of social networks and technology, knowing which news to believe and which not to believe has become a challenge in this digital era. Fake news is defined as provably false information transmitted with the intent to defraud. This kind of misinformation poses a serious threat to social cohesion and well-being, since it fosters political polarisation and can destabilize trust in the government or the services provided. As a result, fake news detection has emerged as an important field of study, with the goal of identifying whether a given piece of content is real or fake. In this paper, we propose a novel hybrid fake news detection system that combines a BERT-based model (bidirectional encoder representations from transformers) with a light gradient boosting machine (LightGBM). To validate its performance, we compare the proposed method with four classification approaches using different word embedding techniques on three real-world fake news datasets. The proposed method is evaluated on both headline-only and full-text news content. The results show the superiority of the proposed method over many state-of-the-art methods.
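A minimal sketch of the hybrid idea, assuming bert-base-uncased as the encoder (the paper's exact checkpoint, pooling, and hyperparameters may differ): texts are embedded with BERT's [CLS] vector and classified with LightGBM. The toy dataset is obviously fabricated.

```python
# Hedged sketch: BERT embeddings feeding a LightGBM classifier.
import torch
from transformers import AutoTokenizer, AutoModel
from lightgbm import LGBMClassifier

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch).last_hidden_state[:, 0, :]  # [CLS] token vector
    return out.numpy()

texts = ["Scientists confirm water is wet", "Aliens endorse local candidate"]
labels = [0, 1]  # 0 = real, 1 = fake (toy data, far too small in practice)
clf = LGBMClassifier(n_estimators=20, min_child_samples=1).fit(embed(texts), labels)
print(clf.predict(embed(["Moon made of cheese, officials say"])))
```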
A stereo spatial decoupling network for medical image classification
Deep convolutional neural networks (CNNs) have made great progress in medical image classification. However, they struggle to establish effective spatial associations and tend to extract similar low-level features, resulting in information redundancy. To address these limitations, we propose stereo spatial decoupling networks (TSDNets), which can leverage the multi-dimensional spatial details of medical images. We use an attention mechanism to progressively extract the most discriminative features from three directions: horizontal, vertical, and depth. Moreover, a cross feature screening strategy is used to divide the original feature maps into three levels: important, secondary, and redundant. Specifically, we design a cross feature screening module (CFSM) and a semantic guided decoupling module (SGDM) to model multi-dimensional spatial relationships, thereby enhancing the feature representation capabilities. Extensive experiments conducted on multiple open-source benchmark datasets demonstrate that TSDNets outperform previous state-of-the-art models.
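The TSDNets modules (CFSM, SGDM) are not publicly specified in this abstract, so the following is only a loose sketch of attention applied along the three named directions; every design detail here is an assumption, not the authors' architecture.

```python
# Illustrative tri-directional attention: gates derived from horizontal,
# vertical, and depth (channel) profiles of the feature map.
import torch
import torch.nn as nn

class TriDirectionalAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):  # x: (N, C, H, W)
        h_att = torch.sigmoid(x.mean(dim=3, keepdim=True))            # vertical profile
        w_att = torch.sigmoid(x.mean(dim=2, keepdim=True))            # horizontal profile
        c_att = torch.sigmoid(self.fc(x.mean(dim=(2, 3), keepdim=True)))  # depth profile
        return x * h_att * w_att * c_att

x = torch.randn(2, 32, 56, 56)
print(TriDirectionalAttention(32)(x).shape)  # torch.Size([2, 32, 56, 56])
```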
A distribution information sharing federated learning approach for medical image data
In recent years, federated learning has been believed to play a considerable role in cross-silo scenarios (e.g., medical institutions) due to its privacy-preserving properties. However, the non-IID problem is common in federated learning between medical institutions and degrades the performance of traditional federated learning algorithms. To overcome this performance degradation, we propose a novel distribution information sharing federated learning approach (FedDIS) for medical image classification, which reduces non-IIDness across clients by having each client generate data locally from medical image distribution information shared by the other clients, all while protecting patient privacy. First, a variational autoencoder (VAE) is federally trained, whose encoder maps the local original medical images into a hidden space; the distribution of the mapped data in the hidden space is estimated and then shared among the clients. Second, the clients augment a new set of image data based on the received distribution information using the VAE decoder. Finally, the clients use the local dataset together with the augmented dataset to train the final classification model in a federated manner. Experiments on an Alzheimer's disease MRI diagnosis task and the MNIST classification task show that the proposed method significantly improves the performance of federated learning under non-IID conditions.
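The distribution-sharing step can be sketched as follows, assuming the VAE is already federally trained: only the latent Gaussian parameters (mean, covariance) leave a client, never images. The names and stand-in encoder/decoder are illustrative.

```python
# Hedged sketch of FedDIS-style distribution sharing and augmentation.
import numpy as np

def latent_stats(z):                   # z: (n_samples, latent_dim) encoder outputs
    return z.mean(axis=0), np.cov(z, rowvar=False)

def synthesize(mu, cov, decoder, n):   # decoder: callable latent -> image
    z_new = np.random.multivariate_normal(mu, cov, size=n)
    return decoder(z_new)

# Client A shares only summary statistics, never raw images:
z_a = np.random.randn(500, 16)         # stand-in for encoder(local_images)
mu, cov = latent_stats(z_a)
fake_decoder = lambda z: z @ np.random.randn(16, 28 * 28)  # stand-in decoder
augmented = synthesize(mu, cov, fake_decoder, n=100)       # at the receiving client
print(augmented.shape)                 # (100, 784)
```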
A new multi-attribute decision making approach based on new score function and hybrid weighted score measure in interval-valued Fermatean fuzzy environment
Interval-valued Fermatean fuzzy sets (IVFFSs) were introduced in 2021 as a more effective mathematical tool for handling uncertain information. In this paper, we first propose a novel score function (SCF) based on interval-valued Fermatean fuzzy numbers (IVFFNs) that can distinguish between any two IVFFNs. The novel SCF and a hybrid weighted score measure are then used to construct a new multi-attribute decision-making (MADM) method. Three cases are used to demonstrate that the proposed method overcomes two disadvantages of existing approaches: failing to obtain a preference ordering of the alternatives in some circumstances, and admitting division-by-zero errors in the decision procedure. Compared with two existing MADM methods, our approach has the highest recognition index and the lowest incidence of division-by-zero errors. The proposed method thus provides a better way of dealing with MADM problems in the interval-valued Fermatean fuzzy environment.
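The paper's novel SCF is not reproduced here; as a point of reference, this sketch implements a commonly used baseline score for an IVFFN ([mu-, mu+], [nu-, nu+]), in which membership degrees enter with the third power characteristic of Fermatean fuzzy sets.

```python
# Baseline (not the paper's) score for an interval-valued Fermatean fuzzy number.
def ivffn_score(mu_lo, mu_hi, nu_lo, nu_hi):
    """Score in [-1, 1]: average cubed membership minus cubed non-membership."""
    return ((mu_lo**3 + mu_hi**3) - (nu_lo**3 + nu_hi**3)) / 2.0

a = (0.7, 0.8, 0.2, 0.3)   # IVFFN with high membership
b = (0.4, 0.5, 0.5, 0.6)   # IVFFN with lower membership
print(ivffn_score(*a) > ivffn_score(*b))  # True: a ranks above b
```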
A fuzzy rough copula Bayesian network model for solving complex hospital service quality assessment
Healthcare tends to be one of the most complicated sectors, and hospitals lie at the core of healthcare activities. One of the most significant elements in hospitals is the level of service quality. Moreover, the dependency between factors, dynamic features, and the objective and subjective uncertainties involved pose challenges to modern decision-making problems. Thus, in this paper, a decision-making approach is developed for hospital service quality assessment, using a copula Bayesian network built on a fuzzy rough set with neighborhood operators to deal with dynamic features and objective uncertainties. In the copula Bayesian network model, the Bayesian network graphically illustrates the interrelationships between the different factors, while the copula is used to obtain their joint probability distribution. Fuzzy rough set theory with neighborhood operators is employed to treat the subjective evidence provided by decision makers. The efficiency and practicality of the designed method are validated by an analysis of real hospital service quality in Iran. By combining the copula Bayesian network and the extended fuzzy rough set technique, a novel framework is proposed for ranking a group of alternatives under different criteria, and the subjective uncertainty of decision makers' opinions is handled by a novel extension of fuzzy rough set theory. The results highlight that the proposed method has merit in reducing uncertainty and assessing the dependency between factors in complicated decision-making problems.
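The copula component can be illustrated with a plain Gaussian copula joining two arbitrary marginals; the fuzzy rough treatment and the network structure are beyond this sketch, and the marginals and correlation below are invented.

```python
# Gaussian copula sketch: dependence is imposed in normal space, then pushed
# through arbitrary marginal distributions (e.g., two service-quality factors).
import numpy as np
from scipy import stats

rho = 0.6                                     # assumed dependence between factors
cov = np.array([[1.0, rho], [rho, 1.0]])
z = np.random.multivariate_normal([0, 0], cov, size=10_000)
u = stats.norm.cdf(z)                         # correlated uniforms (the copula)
x_wait = stats.expon(scale=20).ppf(u[:, 0])   # marginal 1: waiting time (min)
x_score = stats.beta(5, 2).ppf(u[:, 1])       # marginal 2: quality score in (0, 1)
print(np.corrcoef(x_wait, x_score)[0, 1])     # dependence survives the transform
```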
Picture fuzzy Additive Ratio Assessment Method (ARAS) and VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method for multi-attribute decision problem and their application
The purpose of this paper is to study the multi-attribute decision-making problem in the picture fuzzy environment. First, a method to compare the pros and cons of picture fuzzy numbers (PFNs) is introduced. Second, the correlation coefficient and standard deviation (CCSD) method is used to determine the attribute weights in the picture fuzzy environment, whether the weight information is partially unknown or completely unknown. Third, the ARAS and VIKOR methods are extended to the picture fuzzy environment, and the proposed PFN comparison rules are applied in the PFS-ARAS and PFS-VIKOR methods. Fourth, the problem of green supplier selection in a picture fuzzy environment is solved by the proposed methods. Finally, the proposed approach is compared with existing methods and the results are analyzed.
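For reference, a crisp VIKOR sketch; the picture fuzzy extension with PFN comparison rules is not reproduced, and the matrix, weights, and compromise parameter v are invented.

```python
# Classical VIKOR: group utility S, individual regret R, compromise index Q.
import numpy as np

def vikor(X, w, maximize, v=0.5):
    """Return Q values (lower = better compromise)."""
    best = np.where(maximize, X.max(axis=0), X.min(axis=0))
    worst = np.where(maximize, X.min(axis=0), X.max(axis=0))
    norm = (best - X) / (best - worst)          # normalized regret per criterion
    S = (w * norm).sum(axis=1)                  # group utility
    R = (w * norm).max(axis=1)                  # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return Q

X = np.array([[0.8, 0.6, 0.7], [0.7, 0.9, 0.5], [0.6, 0.7, 0.9]])
Q = vikor(X, w=np.array([0.4, 0.35, 0.25]), maximize=np.array([True, True, True]))
print(np.argsort(Q))  # compromise ranking of green suppliers
```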
Lightweight dense video captioning with cross-modal attention and knowledge-enhanced unbiased scene graph
Dense video captioning (DVC) aims at generating a description for each scene in a video. Despite encouraging progress on this task, previous works usually concentrate on exploiting visual features while neglecting the audio information in the video, resulting in inaccurate localization of scene events. In this article, we propose a novel DVC model named CMCR, which is mainly composed of a cross-modal processing (CM) module and a commonsense reasoning (CR) module. CM utilizes a cross-modal attention mechanism to encode data from different modalities, and an event refactoring algorithm is proposed to deal with inaccurate event localization caused by overlapping events; a shared encoder is also utilized to reduce model redundancy. CR optimizes the logic of generated captions with both heterogeneous prior knowledge and entity-association reasoning, achieved by building a knowledge-enhanced unbiased scene graph. Extensive experiments conducted on the ActivityNet Captions dataset demonstrate that our model achieves better performance than state-of-the-art methods. To better understand the performance achieved by CMCR, we also conduct ablation experiments to analyze the contributions of the different modules.
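The CM module's core operation can be approximated with standard cross-modal attention, e.g., visual queries attending to audio keys and values; the dimensions and sequence lengths below are assumptions, not CMCR's configuration.

```python
# Generic cross-modal attention sketch: fuse audio evidence into visual features.
import torch
import torch.nn as nn

cross_attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
visual = torch.randn(2, 100, 256)  # (batch, video frames, dim)
audio = torch.randn(2, 50, 256)    # (batch, audio segments, dim)
fused, weights = cross_attn(query=visual, key=audio, value=audio)
print(fused.shape)                 # torch.Size([2, 100, 256]): audio-informed visual features
```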
Developing explicit customer preference models using fuzzy regression with nonlinear structure
In online sales platforms, product design attributes influence consumer preferences, and consumer preferences in turn have a significant impact on future product design optimization and iteration. Online reviews are the most direct feedback from consumers on products, and exploiting their information value to explore consumer preferences is key to optimizing products, improving consumer satisfaction, and meeting consumer requirements. The study of consumer preferences based on online reviews is therefore of great importance. However, few previous studies have built explicit models of consumer preferences from online reviews: such models often involve a nonlinear structure and fuzzy coefficients, which makes explicit modeling challenging. This study therefore adopts a fuzzy regression approach with a nonlinear structure to model consumer preferences based on online reviews, providing reference and insight for subsequent studies. First, smartwatches were selected as the research object, and sentiment scores of product reviews under different topics were obtained by text mining the online product data. Second, a polynomial structure relating product attributes to consumer preferences was generated to further investigate the association between them. Afterward, based on this polynomial structure, the fuzzy coefficients of each term were determined by the fuzzy regression approach. Finally, the mean relative error and mean systematic confidence of the fuzzy regression with nonlinear structure method were computed and compared with fuzzy least-squares regression, fuzzy regression, the adaptive neuro-fuzzy inference system (ANFIS), and K-means-based ANFIS; the proposed method was found to be comparatively more effective in modeling consumer preferences.
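Classic Tanaka-style fuzzy linear regression reduces to a linear program and conveys the flavor of fitting fuzzy coefficients; the paper's specific polynomial-structure generation is not reproduced, and the inclusion level h, the data, and the design matrix below are invented.

```python
# Fuzzy linear regression as an LP: coefficients are symmetric triangular fuzzy
# numbers (center c_j, spread s_j >= 0); minimize total spread subject to every
# observation lying inside the h-level prediction band.
import numpy as np
from scipy.optimize import linprog

def fuzzy_regression(X, y, h=0.5):
    n, p = X.shape
    A = np.abs(X)
    cost = np.concatenate([np.zeros(p), A.sum(axis=0)])   # minimize total spread
    A_ub = np.vstack([np.hstack([-X, -(1 - h) * A]),      # upper band >= y
                      np.hstack([X, -(1 - h) * A])])      # lower band <= y
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * p + [(0, None)] * p         # centers free, spreads >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p], res.x[p:]                           # centers, spreads

X = np.column_stack([np.ones(6), [1, 2, 3, 4, 5, 6]])     # intercept + one attribute
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])            # toy preference scores
c, s = fuzzy_regression(X, y)
print("centers:", c, "spreads:", s)
```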
Systematic review of MCDM approach applied to the medical case studies of COVID-19: trends, bibliographic analysis, challenges, motivations, recommendations, and future directions
Since COVID-19 spread in China in December 2019, thousands of studies have focused on this pandemic, each presenting a unique perspective that reflects the pandemic's main scientific disciplines. For example, social scientists are concerned with reducing the psychological impact on the human mental state, especially during lockdown periods. Computer scientists focus on establishing fast and accurate computerized tools to assist in diagnosing, preventing, and recovering from the disease. Medical scientists and doctors, the frontliners, are the main heroes who received, treated, and worked with millions of cases at the expense of their own health; some have continued to work even at the expense of their lives. All these studies reinforce multidisciplinary work, in which scientists from different academic disciplines (social, environmental, technological, etc.) join forces to produce research with beneficial outcomes during the crisis. One of the many branches involved is computer science, along with its various technologies, including artificial intelligence, the Internet of Things, big data, decision support systems (DSS), and many more. Among the most notable DSS applications are those related to multi-criteria decision-making (MCDM), which is applied across many contexts, including business, social, technological, and medical. Owing to its importance in developing proper decision regimens and prevention strategies with precise judgment, it is deemed a noteworthy topic of extensive exploration, especially in the context of COVID-19-related medical applications. The present study is a comprehensive review of COVID-19-related medical case studies involving MCDM, using a systematic review protocol. The PRISMA methodology is utilized to obtain a final set of articles (n = 35) from four major scientific databases (ScienceDirect, IEEE Xplore, Scopus, and Web of Science). The final set of articles is categorized into a taxonomy comprising five groups: (1) diagnosis (n = 6), (2) safety (n = 11), (3) hospital (n = 8), (4) treatment (n = 4), and (5) review (n = 3). A bibliographic analysis is also presented on the basis of annual scientific production, country scientific production, co-occurrence, and co-authorship. A comprehensive discussion covers the main challenges, motivations, and recommendations for using MCDM research in COVID-19-related medical case studies. Lastly, we identify critical research gaps with their corresponding solutions and detailed methodologies to serve as a guide for future directions. In conclusion, MCDM can be utilized effectively in the medical field to optimize resources and make the best choices, particularly during pandemics and natural disasters.
Multi-objective two-stage emergent blood transshipment-allocation in COVID-19 epidemic
The problem of blood transshipment and allocation in the context of the COVID-19 epidemic has many new characteristics, such as a two-stage structure, trans-regional scope, and multi-modal transportation. Considering these new characteristics, we propose a novel multi-objective optimization model for two-stage emergent blood transshipment and allocation. The objectives are to optimize the quality of transshipped blood, the satisfaction of blood demand, and the overall cost including the shortage penalty. An improved integer-encoded hybrid multi-objective whale optimization algorithm (MOWOA) with greedy rules is then designed to solve the model. Numerical experiments demonstrate that our two-stage model is superior to one-stage optimization methods on all objectives, with improvements ranging from 0.69% to 66.26%.
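A small Pareto-dominance filter illustrates the multi-objective bookkeeping such an algorithm relies on; MOWOA itself, with its greedy rules and integer encoding, is not shown, and the objective values below are invented (both objectives minimized).

```python
# Non-dominated (Pareto front) filter for minimization objectives.
import numpy as np

def pareto_front(F):
    """F: (n_solutions, n_objectives). Return mask of non-dominated solutions."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if keep[i]:
            # j is dominated by i if F[i] <= F[j] everywhere, < somewhere
            dominated = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
            keep &= ~dominated
    return keep

F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])  # cost vs. shortage
print(pareto_front(F))  # [ True  True False  True ]
```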
An integrated group decision-making method for the evaluation of hypertension follow-up systems using interval-valued q-rung orthopair fuzzy sets
It is imperative to comprehensively evaluate the function, cost, performance, and other indices when purchasing a hypertension follow-up (HFU) system for community hospitals. To select the best software product from multiple alternatives, in this paper we develop a novel integrated group decision-making (GDM) method for the quality evaluation of such systems under interval-valued q-rung orthopair fuzzy sets (IVq-ROFSs). The design of our evaluation indices is based on the characteristics of the HFU system, which represents the evaluation requirements of typical software applications while reflecting the particularity of the system. A similarity measure is extended to IVq-ROFNs, and a new score function is devised for distinguishing IVq-ROFNs and identifying the best one. The weighted fairly aggregation (WFA) operator is then extended to the interval-valued q-rung orthopair WFA weighted average operator (IVq-ROFWFAWA) for aggregating information. The attribute weights are derived using the LINMAP model based on the similarity of IVq-ROFNs. We design a new expert-weight deriving strategy, in which each alternative has its own expert weights, and use the ARAS method to select the best alternative based on these weights. With these components, we propose a GDM algorithm that integrates the similarity measure, score function, IVq-ROFWFAWA operator, attribute weights, expert weights, and ARAS. The applicability of the proposed method is demonstrated through a case study, and its effectiveness and feasibility are verified by comparison with other state-of-the-art methods and operators.
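For orientation, a crisp ARAS sketch showing the utility-degree computation that the IVq-ROF version generalizes; the matrix and weights are invented and all criteria are treated as benefit-type.

```python
# Classical ARAS: utility degree of each alternative relative to the ideal one.
import numpy as np

def aras(X, w):
    """X: benefit-type decision matrix (alternatives x criteria)."""
    X0 = np.vstack([X.max(axis=0), X])   # prepend the optimal alternative
    N = X0 / X0.sum(axis=0)              # column-wise normalization
    S = (N * w).sum(axis=1)              # weighted optimality scores
    return S[1:] / S[0]                  # utility degrees K_i in (0, 1]

X = np.array([[0.9, 0.7, 0.8], [0.8, 0.9, 0.6], [0.7, 0.8, 0.9]])
K = aras(X, w=np.array([0.5, 0.3, 0.2]))
print(np.argsort(-K))                    # best HFU system candidate first
```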
Assessment of regional economic restorability under the stress of COVID-19 using the new interval type-2 fuzzy ORESTE method
The economic implications of the COVID-19 crisis are unlike anything people have ever experienced. Predictions indicate that the global economy may not recover to the ideal situation of 2020 until 2025. Regions in the developing category are among the most affected, since this category comprises regions with weak and average potential power. To support economic recovery decisions scientifically and accurately under the stress of COVID-19, one feasible solution is to assess regional economic restorability by taking into account a variety of indicators, such as development foundation, industrial structure, labor forces, financial support, and the government's ability. This is a typical multi-criteria decision-making (MCDM) problem with quantitative and qualitative criteria/indicators. To solve it, this paper identifies 14 indicators affecting regional economic restorability, which form an indicator system. The interval type-2 fuzzy set (IT2FS) is an effective tool for expressing experts' subjective preference values (PVs) in the decision-making process. First, formulas are developed to convert quantitative PVs to IT2FSs. Second, an improved interval type-2 fuzzy ORESTE (IT2F-ORESTE) method based on distance and likelihood is developed to assess regional economic restorability. Third, a case study is given to illustrate the method. Robust ranking results are then acquired by performing a sensitivity analysis. Finally, comparative analyses with other methods demonstrate that the developed IT2F-ORESTE method can support economic recovery decisions scientifically and accurately.
Can financial stress be anticipated and explained? Uncovering the hidden pattern using EEMD-LSTM, EEMD-prophet, and XAI methodologies
Global financial stress is a critical variable that reflects the ongoing state of several key macroeconomic indicators and financial markets. Predictive analytics of financial stress, nevertheless, has so far received little attention in the literature. Future movements of market stress can be anticipated if stress can be predicted with a satisfactory level of precision. The current research employs two granular hybrid predictive frameworks to discover the inherent patterns of financial stress across several critical variables and geographies. The predictive structure utilizes Ensemble Empirical Mode Decomposition (EEMD) for granular time series decomposition, and the Long Short-Term Memory network (LSTM) and Facebook's Prophet algorithm are invoked on the decomposed components to scrupulously investigate the predictability of the financial stress variables published by the Office of Financial Research (OFR). Rigorous feature screening using the Boruta methodology is employed as well. The findings of the predictive exercises reveal that financial stress across assets and continents can be predicted accurately over short- and long-run horizons, even during the steep financial distress of the COVID-19 pandemic. The frameworks are statistically significant but come at the expense of model interpretability. To resolve this issue, dedicated Explainable Artificial Intelligence (XAI) methods are used to interpret them. The immediate past information of financial stress indicators largely explains patterns in the long run, while short-run fluctuations can be tracked by closely monitoring several technical indicators.
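A sketch of the decompose-then-predict pipeline, assuming the PyEMD package for EEMD; the per-component LSTM/Prophet models are replaced by a naive persistence stand-in to keep the example self-contained.

```python
# EEMD decomposition of a toy stress series; each intrinsic mode function (IMF)
# would get its own forecaster in the full framework.
import numpy as np
from PyEMD import EEMD

t = np.linspace(0, 10, 500)
stress = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 3 * t) \
         + 0.1 * np.random.randn(len(t))      # toy financial stress series

imfs = EEMD()(stress)                          # rows are IMFs (plus residue)
print(f"{len(imfs)} components extracted; they sum back to the signal")

# Stand-in for the per-component LSTM/Prophet step: persistence forecasts,
# aggregated by summing across components.
forecast = sum(imf[-1] for imf in imfs)
print("one-step-ahead forecast:", forecast)
```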
Predictive maintenance optimization for industrial equipment via reliable prognosis and risk-aware reinforcement learning
Predictive maintenance (PdM) based on Remaining Useful Life (RUL) prediction plays a crucial role in improving the performance and reducing the lifecycle costs of industrial equipment. This study proposes an intelligent PdM framework that integrates a RUL prediction model based on a probabilistic neural network with a distributional reinforcement learning agent based on QR-DQN. In the first stage, the RUL prediction model processes sensor data to generate accurate RUL predictions, quantify predictive uncertainty, and estimate the probability of failure within a given horizon. Building on this health condition assessment, the QR-DQN agent learns the distribution of long-term maintenance returns and makes sequential decisions among multiple actions. By adopting risk-sensitive decision rules, the agent explicitly accounts for uncertainty and failure risk, balancing safety, cost efficiency, and timeliness of interventions. Experimental evaluations on complex system degradation demonstrate that the proposed framework outperforms conventional baselines by reducing catastrophic failures, optimizing maintenance schedules, and improving overall reliability.
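The risk-sensitive decision rule can be illustrated on QR-DQN's quantile outputs: instead of ranking actions by mean return, average only the worst alpha-fraction of quantiles (a CVaR-style criterion), which penalizes actions with catastrophic tails. The quantile values below are invented.

```python
# Risk-sensitive action selection over quantile return estimates.
import numpy as np

def cvar_action(quantiles, alpha=0.25):
    """quantiles: (n_actions, n_quantiles), sorted ascending per action."""
    k = max(1, int(alpha * quantiles.shape[1]))
    cvar = quantiles[:, :k].mean(axis=1)       # mean of the worst k quantiles
    return int(np.argmax(cvar))                # most favorable worst case

q = np.array([[-90, -10,  5, 10, 60],   # "keep running": cheap on average, risky tail
              [-20, -15, -5,  0,  5]])  # "maintain now": bounded downside
print("risk-neutral  :", int(np.argmax(q.mean(axis=1))))  # 0 (best mean)
print("risk-sensitive:", cvar_action(q))                   # 1 (best worst case)
```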
Improving SVM performance through data reduction and misclassification analysis with linear programming
In the dual optimization problem behind the Support Vector Machine (SVM), each data point corresponds to a decision variable, so removing data points is equivalent to reducing the dimensionality of the dual problem, leading to a more efficient optimization process. We introduce linear programming models to efficiently determine whether two sets of points are linearly separable, compute the misclassification rate, and reduce the dimension of the optimization problems behind the SVM procedure. In the linearly separable case, data reduction can be carried out using a simple convexity property. The misclassification rate is a key indicator of the complexity of separating the two sets, providing valuable insight into classification performance. Our approach combines SVM optimization with linear programming techniques to offer a comprehensive framework for classification and complexity analysis.
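The separability test is a pure feasibility LP: two labeled sets are linearly separable iff some (w, b) satisfies y_i(w.x_i + b) >= 1 for all i. A sketch with scipy follows; the paper's exact formulations (e.g., for the misclassification rate) are not reproduced, and the toy data is invented.

```python
# LP feasibility test for linear separability (zero objective).
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    n, d = X.shape
    # Variables: [w (d entries), b]; constraints: -y_i * (x_i . w + b) <= -1
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    b_ub = -np.ones(n)
    res = linprog(np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.status == 0            # 0 = feasible (optimal found), 2 = infeasible

X = np.array([[0, 0], [0, 1], [2, 2], [2, 3]], dtype=float)
y = np.array([-1, -1, 1, 1])
print(linearly_separable(X, y))       # True
```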
Verticox+: vertically distributed Cox proportional hazards model with improved privacy guarantees
Federated learning allows us to run machine learning algorithms on decentralized data when data sharing is not permitted due to privacy concerns. Various models have been adapted for use in a federated setting, among them Verticox, a federated implementation of the Cox proportional hazards model for vertically partitioned data. However, Verticox assumes that the survival outcome is known locally by all parties involved. Realistically, this is not the case in most settings, and the outcome would therefore have to be shared, which in many cases is exactly the breach of privacy that federated learning aims to prevent. Our extension to Verticox, dubbed Verticox+, solves this problem by incorporating a privacy-preserving two-party scalar product protocol at different stages, allowing it to be used in scenarios where the survival outcome is not known to each party. In this article, we demonstrate that our algorithm achieves performance equivalent to the original Verticox implementation, and we discuss the changes in computational complexity and communication cost caused by our additions.
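A toy commodity-server scalar product protocol conveys the flavor of such two-party computations (the actual protocol used in Verticox+ may well differ): a semi-trusted server distributes correlated randomness, Alice learns a.b, and neither party sees the other's raw vector.

```python
# Toy masked two-party scalar product; illustrative, not a vetted protocol.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 3.0])        # Alice's private vector
b = np.array([4.0, 5.0, 6.0])        # Bob's private vector

# Commodity server: correlated randomness, never sees a or b.
Ra, Rb = rng.normal(size=3), rng.normal(size=3)
ra = rng.normal()
rb = Ra @ Rb - ra                     # ensures ra + rb = Ra . Rb

a_masked = a + Ra                     # Alice -> Bob
b_masked = b + Rb                     # Bob -> Alice
u = a_masked @ b + rb                 # Bob -> Alice

result = u - Ra @ b_masked + ra       # Alice, locally: masks cancel exactly
print(result, a @ b)                  # both 32.0
```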
Dual graph characteristics of water distribution networks: how optimal are design solutions?
Urban water infrastructures are an essential part of urban areas, and major investments are required for their construction and maintenance to ensure efficient and reliable function. Vital parts of urban water infrastructure are water distribution networks (WDNs), which transport water from production (sources) to spatially distributed consumers (sinks). To minimize the costs and at the same time maximize the resilience of such a system, multi-objective optimization procedures (e.g., meta-heuristic searches) are performed. Assessing the hydraulic behavior of WDNs within such an optimization procedure is no trivial task and is computationally demanding. Further, deciding how close the current solutions are to optimal design solutions is difficult, which often results in unnecessarily extensive experiments. To tackle these challenges, an answer is sought to the following questions: when is an optimization stage reached beyond which no further improvements can be expected, and how can that be assessed? It was found that graph characteristics based on complex network theory (the number of dual graph elements) converge towards a certain threshold as the number of generations increases. Furthermore, a novel method for identifying that threshold, based on network topology and the demand distribution in WDNs, specifically on changes in 'demand edge betweenness centrality', is developed and successfully tested. With the proposed approach it is feasible to determine, prior to optimization, characteristics that optimal design solutions should fulfill, and thereafter to test them during the optimization process. Numerous simulation runs of meta-heuristic search engines can thereby be avoided.
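A rough approximation of edge betweenness on a pipe network can be computed with networkx as a starting point; the paper's dual-graph formulation and its exact 'demand edge betweenness centrality' definition are not reproduced, and the toy topology below is invented.

```python
# Shortest-path edge betweenness over pipe lengths as a structural proxy for
# identifying the most load-bearing pipes in a small toy WDN.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("source", "A", 100.0), ("A", "B", 150.0),
                           ("A", "C", 200.0), ("B", "D", 120.0),
                           ("C", "D", 90.0)], weight="length")

ebc = nx.edge_betweenness_centrality(G, weight="length")
for edge, score in sorted(ebc.items(), key=lambda kv: -kv[1]):
    print(edge, round(score, 3))      # highest-centrality pipes first
```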
ST-V-Net: incorporating shape prior into convolutional neural networks for proximal femur segmentation
We aim to develop a deep-learning-based method for automatic proximal femur segmentation in quantitative computed tomography (QCT) images. We propose a spatial transformation V-Net (ST-V-Net), which combines a V-Net with a spatial transformer network (STN) to extract the proximal femur from QCT images. The STN incorporates a shape prior into the segmentation network as a constraint and guide for model training, which improves model performance and accelerates convergence. Meanwhile, a multi-stage training strategy is adopted to fine-tune the weights of the ST-V-Net. We performed experiments on a QCT dataset of 397 subjects. In experiments on the entire cohort, and then on male and female subjects separately, 90% of the subjects were used for training with ten-fold stratified cross-validation, and the remaining subjects were used to evaluate model performance. On the entire cohort, the proposed model achieved a Dice similarity coefficient (DSC) of 0.9888, a sensitivity of 0.9966, and a specificity of 0.9988. Compared with V-Net, ST-V-Net reduced the Hausdorff distance from 9.144 to 5.917 mm and the average surface distance from 0.012 to 0.009 mm. Quantitative evaluation demonstrated the excellent performance of the proposed ST-V-Net for automatic proximal femur segmentation in QCT images. In addition, ST-V-Net sheds light on incorporating a shape prior into segmentation to further improve model performance.
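The STN's core operation is standard and can be sketched directly in PyTorch: an affine transform (hard-coded here; in practice predicted by a small localization CNN) is turned into a sampling grid that warps the input, which is how a shape prior can be spatially aligned to the image. How ST-V-Net wires this into the V-Net is assumed, not reproduced.

```python
# Core spatial transformer step: affine grid + differentiable resampling.
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 64, 64)                  # e.g., a femur shape-prior slice
theta = torch.tensor([[[1.0, 0.0, 0.1],        # 2x3 affine: slight translation
                       [0.0, 1.0, -0.1]]])     # (would come from a localization net)
grid = F.affine_grid(theta, x.size(), align_corners=False)
warped = F.grid_sample(x, grid, align_corners=False)
print(warped.shape)                            # torch.Size([1, 1, 64, 64])
```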
