Multivariate time series short term forecasting using cumulative data of coronavirus
Coronavirus emerged as a highly contagious, pathogenic virus that severely affects the human respiratory system. Epidemic-related data are collected regularly, and machine learning algorithms can employ these data to comprehend and estimate valuable information. Analyzing the gathered data with time series approaches may assist in developing more accurate forecasting models and strategies to combat the disease. This paper focuses on short-term forecasting of cumulative reported incidences and mortality. Forecasting is conducted using state-of-the-art mathematical and deep learning models for multivariate time series forecasting: the extended susceptible-exposed-infected-recovered (SEIR) model, long short-term memory (LSTM), and vector autoregression (VAR). The SEIR model has been extended by integrating additional information such as hospitalization, mortality, vaccination, and quarantine incidences. Extensive experiments compare the deep learning and mathematical models, enabling more precise estimation of fatalities and incidences in the eight most affected nations at the time of this research. Metrics such as mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) are employed to gauge each model's effectiveness. The deep learning model LSTM outperformed all others in forecasting accuracy. Additionally, the study explores the impact of vaccination on reported incidences and deaths worldwide. Furthermore, the detrimental effects of ambient temperature and relative humidity on the dissemination of this pathogenic virus have been analyzed.
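To make the compartmental extension concrete, the following is a minimal sketch of an SEIR-type model augmented with hospitalization, death, and vaccination terms; the compartments, rate parameters, and initial values here are illustrative assumptions, not the calibrated model from the paper.

```python
# Minimal extended SEIR sketch (illustrative only; rates and values are assumptions).
import numpy as np
from scipy.integrate import solve_ivp

def extended_seir(t, y, beta, sigma, gamma, hosp_rate, death_rate, vacc_rate):
    S, E, I, H, R, D = y
    N = S + E + I + H + R
    dS = -beta * S * I / N - vacc_rate * S
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - (gamma + hosp_rate) * I
    dH = hosp_rate * I - (gamma + death_rate) * H
    dR = gamma * (I + H) + vacc_rate * S
    dD = death_rate * H
    return [dS, dE, dI, dH, dR, dD]

y0 = [1e6, 10, 5, 0, 0, 0]                      # S, E, I, H, R, D
params = (0.4, 1 / 5.2, 1 / 10, 0.05, 0.02, 0.001)  # beta, sigma, gamma, ...
sol = solve_ivp(extended_seir, (0, 180), y0, args=params,
                t_eval=np.linspace(0, 180, 181))
print(sol.y[5, -1])  # cumulative deaths at day 180
```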
A survey on recent trends in deep learning for nucleus segmentation from histopathology images
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets and is considered an intricate task in histopathology image analysis. Segmenting nuclei is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it difficult to separate and distinguish independent nuclei. Deep learning is swiftly paving its way in the arena of nucleus segmentation, attracting many researchers, with numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey of nucleus segmentation using deep learning over the last five years (2017-2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
Fuzzy PROMETHEE model for public transport mode choice analysis
The importance of public transportation service quality research has increased significantly in recent years, as it is key to understanding and analyzing passengers' preferences. Different approaches are utilized to explore users' preferences; however, these predominantly apply merely subjective scoring of the attributes and alternatives of mobility. In this paper, we design a specific model for public transportation mode choice that is capable of integrating subjective scoring with scoring by objective measures such as distance or time. To this end, we combine the outranking Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), as a method to evaluate passengers' preferences for tangible and intangible criteria, with fuzzy theory and with the Graphical Analysis for Interactive Aid (GAIA) plane to visualize the interactions between attributes and to test the robustness of the results via sensitivity analysis. The contribution of this paper is an integrative method that is less subjective than the well-known models while preserving the freedom of individual evaluators in expressing their preferences. Moreover, another significant issue of mode choice analysis, group consideration, is also refined in the new methodology by taking into account not only the mean of group preferences but also their range. A common characteristic of public surveys, the possibly vague responses of laymen, is addressed with the fuzzy approach to reduce the risk of uncertain scoring. The proposed model serves as a strong basis for a fuzzy inference system that can facilitate mode choice for passengers within a changing environment. The efficiency of the new methodology is demonstrated through a real-world case study of the city of Budapest; the obtained results support underground mode service quality and highlight its impact on citizens' behavior in favor of public transport.
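For readers unfamiliar with the outranking step, the following is a minimal sketch of crisp PROMETHEE II flows on made-up mode-choice data; the fuzzy extension, the actual criteria and weights, and the GAIA visualization used in the paper are not reproduced here.

```python
# Crisp PROMETHEE II sketch on invented mode-choice data (illustrative only).
import numpy as np

# Rows: alternatives (bus, underground, tram); columns: criteria (time, cost, comfort).
X = np.array([[30.0, 350.0, 4.0],
              [22.0, 350.0, 3.5],
              [25.0, 500.0, 4.5]])
weights = np.array([0.5, 0.2, 0.3])          # assumed criterion weights
maximize = np.array([False, False, True])    # time/cost minimized, comfort maximized

n = X.shape[0]
pi = np.zeros((n, n))                        # pairwise aggregated preference indices
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        d = np.where(maximize, X[a] - X[b], X[b] - X[a])
        pi[a, b] = np.dot(weights, (d > 0).astype(float))  # "usual" preference function

phi_plus = pi.sum(axis=1) / (n - 1)          # positive (leaving) flow
phi_minus = pi.sum(axis=0) / (n - 1)         # negative (entering) flow
net_flow = phi_plus - phi_minus              # PROMETHEE II net flow
print(net_flow)                              # higher net flow = more preferred mode
```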
PDRF-Net: a progressive dense residual fusion network for COVID-19 lung CT image segmentation
The lungs of patients with COVID-19 exhibit distinctive lesion features in chest CT images. Fast and accurate segmentation of lesion sites from CT images of patients' lungs is significant for the diagnosis and monitoring of COVID-19 patients. To this end, we propose a progressive dense residual fusion network named PDRF-Net for COVID-19 lung CT segmentation. Dense skip connections are introduced to capture multi-level contextual information and compensate for the feature loss problem in network delivery. An efficient aggregated residual module is designed for the encoding-decoding structure; it combines a visual transformer and the residual block to enable the network to extract richer, fine-detail features from CT images. Furthermore, we introduce a bilateral channel pixel weighted module to progressively fuse the feature maps obtained from multiple branches. The proposed PDRF-Net obtains good segmentation results on two COVID-19 datasets. Its segmentation performance exceeds the baseline by 11.6% and 11.1%, respectively, and it outperforms other mainstream methods. Thus, PDRF-Net serves as an easy-to-train, high-performance deep learning model that can realize effective segmentation of COVID-19 lung CT images.
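As a rough illustration of combining dense skip connections with residual learning, a minimal PyTorch sketch is given below; it is not the actual PDRF-Net module, whose transformer and fusion details are not reproduced.

```python
# Toy block mixing a dense skip connection with a residual connection (illustrative only).
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        f1 = self.relu(self.conv1(x))
        # dense skip: concatenate the input with the intermediate feature map
        f2 = self.conv2(torch.cat([x, f1], dim=1))
        return self.relu(f2 + x)              # residual connection

block = DenseResidualBlock(32)
ct_features = torch.randn(1, 32, 64, 64)      # dummy CT feature map
print(block(ct_features).shape)               # torch.Size([1, 32, 64, 64])
```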
AMTLDC: a new adversarial multi-source transfer learning framework to diagnosis of COVID-19
In recent years, deep learning techniques have been widely used to diagnose diseases. However, in some tasks, such as the diagnosis of COVID-19, insufficient data means the model is not properly trained and, as a result, its generalizability decreases. For example, if the model is trained on one CT scan dataset and tested on another CT scan dataset, it predicts near-random results. To address this, data from several different sources can be combined using transfer learning, taking into account the intrinsic and natural differences between existing datasets obtained with different medical imaging tools and approaches. In this paper, to improve the transfer learning technique and achieve better generalizability across multiple data sources, we propose a multi-source adversarial transfer learning model, namely AMTLDC. In AMTLDC, representations are learned that are similar among the sources; in other words, the extracted representations are general and not dependent on a particular dataset domain. We apply AMTLDC to predict COVID-19 from medical images using a convolutional neural network. We show that accuracy can be improved using the AMTLDC framework, surpassing the results of current successful transfer learning approaches. In particular, we show that AMTLDC works well when using different dataset domains or when there is insufficient data.
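One common way to learn source-invariant representations adversarially is a gradient reversal layer; the sketch below illustrates that generic mechanism with assumed heads and dimensions, and is not the paper's AMTLDC architecture.

```python
# Gradient reversal layer, the standard building block for domain-adversarial
# transfer learning (illustrative; not the AMTLDC implementation).
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse (and scale) gradients so the feature extractor is pushed
        # toward domain-invariant representations
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

features = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 128), nn.ReLU())
label_head = nn.Linear(128, 2)        # COVID-19 vs. non-COVID (assumed)
domain_head = nn.Linear(128, 3)       # which of the (assumed) 3 source datasets

x = torch.randn(4, 1, 224, 224)       # dummy grayscale scans
h = features(x)
class_logits = label_head(h)
domain_logits = domain_head(grad_reverse(h))   # adversarial branch
```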
An efficient real-time stock prediction exploiting incremental learning and deep learning
Intraday trading is popular among traders due to its ability to leverage price fluctuations in a short timeframe. For traders, real-time price predictions for the next few minutes can be beneficial for making strategies. Real-time prediction is challenging due to the stock market's non-stationary, complex, noisy, chaotic, dynamic, volatile, and non-parametric nature. Machine learning models are considered effective for stock forecasting, yet their hyperparameters need tuning with the latest market data to incorporate the market's complexities. Usually, models are trained and tested in batches, which smooths the correction process and speeds up learning. When making intraday stock predictions, the models should forecast each instance, in contrast to a whole batch, and learn simultaneously to ensure high accuracy. In this paper, we propose a strategy based on two different learning approaches, incremental learning and Offline-Online learning, to forecast the stock price using the real-time stream of the live market. In incremental learning, the model is updated continuously upon receiving the stock's next instance from the live stream, while in Offline-Online learning, the model is retrained after each trading session to make sure it incorporates the latest data complexities. These methods were applied to univariate time series (established from the historical stock price) and multivariate time series (considering the historical stock price as well as technical indicators). Extensive experiments were performed on the eight most liquid stocks listed on the American NASDAQ and the Indian NSE stock exchanges. The Offline-Online models outperformed the incremental models in terms of low forecasting error.
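A minimal sketch of the per-instance (incremental) update loop, using a plain SGD regressor on synthetic prices as a stand-in for the paper's models and features:

```python
# Incremental (per-instance) learning loop on a synthetic price stream (illustrative only).
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 0.5, 500)) + 100   # synthetic minute-level prices
lags = 5

# small constant step because the raw prices are left unscaled in this sketch
model = SGDRegressor(learning_rate="constant", eta0=1e-5)
errors = []
for t in range(lags, len(prices)):
    x = prices[t - lags:t].reshape(1, -1)    # last `lags` prices as features
    y_true = prices[t]
    if t > lags:                             # predict before learning from the instance
        errors.append(abs(model.predict(x)[0] - y_true))
    model.partial_fit(x, [y_true])           # incremental update per instance

print("MAE (incremental):", np.mean(errors))
# Offline-Online variant: after each trading session, refit on the accumulated
# history (model.fit(all_X, all_y)), then continue partial_fit updates intraday.
```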
Generic image application using GANs (Generative Adversarial Networks): A Review
The generative adversarial network (GAN), which has received considerable notice for its outstanding data-generating abilities, is one of the most intriguing fields of artificial intelligence study. Large volumes of data are required to develop generalizable deep learning models, yet labeled medical imaging data is scarce and costly to produce; GANs are a very strong class of networks capable of producing believable new pictures from unlabeled source images. Despite GAN's remarkable outcomes, stable training remains a challenge. The goal of this study is to perform a complete evaluation of the GAN-related literature and to present a succinct summary of existing knowledge on GAN, including the theory behind it, its intended purpose, potential base-model alterations, and the latest breakthroughs in the area. This article will aid the reader in gaining a comprehensive grasp of GAN, providing an overview of GAN and its many model types, common implementations, suggested measurement parameters, and GAN applications in image processing, along with their benefits, limitations, and prospective reach.
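For orientation, a minimal adversarial training loop on toy one-dimensional data, showing the generator/discriminator objective the review discusses; real image GANs replace these toy networks with convolutional architectures.

```python
# Minimal GAN training loop on toy 1-D data (illustrative of the adversarial objective only).
import torch
from torch import nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0             # "real" data: N(2, 0.5)
    fake = G(torch.randn(64, 8))

    # discriminator step: real -> 1, fake -> 0
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator step: fool the discriminator (fake -> 1)
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())          # should approach 2.0
```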
Tackling over-smoothing in multi-label image classification using graphical convolution neural network
The importance of the graphical convolution network in multi-label classification has grown in recent years due to its label embedding representation capabilities. The graphical convolution network is able to capture label dependencies using the correlation between labels. However, it suffers from an over-smoothing problem when the number of layers in the network is increased; over-smoothing makes the nodes indistinguishable in a deep graphical convolution network. This paper proposes a normalization technique to tackle the over-smoothing problem in the graphical convolution network for multi-label classification. The proposed approach is an efficient multi-label object classifier based on a graphical convolution neural network that normalizes the output of the graph such that the total pairwise squared distance between nodes remains the same after the convolution operation. The proposed approach outperforms existing state-of-the-art approaches in experiments performed on the MS-COCO and VOC2007 datasets, and the experimental results show that PairNorm mitigates the effect of over-smoothing when a deep graphical convolution network is used.
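The normalization described above matches the PairNorm idea: center node features, then rescale so the total pairwise squared distance stays (roughly) constant across layers. A minimal reimplementation sketch (not the paper's code) is:

```python
# PairNorm-style normalization of node features after a graph convolution layer.
import torch

def pair_norm(x, scale=1.0, eps=1e-6):
    # x: [num_nodes, feature_dim] output of a graph convolution layer
    x = x - x.mean(dim=0, keepdim=True)              # center the node features
    mean_sq_norm = (x ** 2).sum(dim=1).mean()        # (1/n) * sum_i ||x_i||^2
    return scale * x / torch.sqrt(mean_sq_norm + eps)

h = torch.randn(100, 64)                 # 100 nodes, 64-dim label embeddings
h_norm = pair_norm(h)
# the total pairwise squared distance is proportional to this quantity,
# which is pinned to scale**2 after normalization
print((h_norm ** 2).sum(dim=1).mean())
```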
Chimp optimization algorithm in multilevel image thresholding and image clustering
Multilevel image thresholding and image clustering, two extensively used image processing techniques, have sparked renewed interest in recent years due to their wide range of applications. The approach of yielding multiple threshold values for each color channel to generate clustered and segmented images appears to be quite efficient and provides significant performance, although it is computationally heavy. To ease this complicated process, nature-inspired optimization algorithms are quite handy tools. In this paper, the performance of the Chimp Optimization Algorithm (ChOA) in image clustering and segmentation is analyzed, based on multilevel thresholding for each color channel. To evaluate the performance of ChOA in this regard, several performance metrics have been used, namely the segment evolution function, peak signal-to-noise ratio, variation of information, probability Rand index, global consistency error, feature similarity index, structural similarity index, Blind/Referenceless Image Spatial Quality Evaluator, Perception-based Image Quality Evaluator, and Naturalness Image Quality Evaluator. This performance has been compared with eight other well-known metaheuristic algorithms (Particle Swarm Optimization, Whale Optimization Algorithm, Salp Swarm Algorithm, Harris Hawks Optimization, Moth Flame Optimization, Grey Wolf Optimization, Archimedes Optimization Algorithm, and African Vulture Optimization Algorithm) using two popular thresholding techniques: Kapur's entropy method and Otsu's between-class variance method. The results demonstrate the effectiveness and competitive performance of the Chimp Optimization Algorithm.
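As an illustration of the objective such metaheuristics optimize, the sketch below evaluates Otsu's between-class variance for a candidate set of thresholds on one channel; a plain random search stands in for ChOA here, and the image data is synthetic.

```python
# Multilevel Otsu objective plus a random-search stand-in optimizer (illustrative only).
import numpy as np

def between_class_variance(hist, thresholds):
    # hist: normalized 256-bin histogram of one channel
    bins = np.arange(256)
    edges = [0] + sorted(thresholds) + [256]
    total_mean = np.dot(bins, hist)
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum()                        # class probability
        if w > 0:
            mu = np.dot(bins[lo:hi], hist[lo:hi]) / w
            var += w * (mu - total_mean) ** 2
    return var

rng = np.random.default_rng(1)
channel = rng.integers(0, 256, (128, 128))           # dummy image channel
hist = np.bincount(channel.ravel(), minlength=256) / channel.size

best, best_score = None, -np.inf
for _ in range(5000):                                # random search stands in for ChOA
    cand = rng.choice(255, size=3, replace=False) + 1
    score = between_class_variance(hist, cand)
    if score > best_score:
        best, best_score = sorted(cand), score
print(best, best_score)
```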
A novel normalization algorithm to facilitate pre-assessment of Covid-19 disease by improving accuracy of CNN and its FPGA implementation
COVID-19 is still a fatal disease that threatens people worldwide by affecting the human lungs. Chest X-ray or computed tomography imaging is commonly used to make a fast and reliable medical investigation to detect the COVID-19 virus. Interpreting these medical images is remarkably challenging because it is a full-time job and prone to human error. In this paper, a new normalization algorithm consisting of Mean-Variance-Softmax-Rescale (MVSR) processes, applied in that order, is proposed to facilitate pre-assessment and diagnosis of COVID-19. To show the effect of the MVSR normalization technique, the proposed algorithm is applied to chest X-ray and SARS-CoV-2 computed tomography image datasets. The MVSR-normalized X-ray images are used to recognize the COVID-19 virus via a Convolutional Neural Network (CNN) model. At the implementation stage, the MVSR algorithm is executed in the MATLAB environment, and all the arithmetic operations of the MVSR normalization are then coded in VHDL using a fixed-point fractional number representation on an FPGA platform. The experimental platform consists of a Zynq-7000 Development FPGA Board and a VGA monitor to display both the original and the MVSR-normalized chest X-ray images. The CNN model is constructed and executed using the Anaconda Navigator interface with the Python language. Based on the results of this study, COVID-19 infections can be easily diagnosed with the MVSR normalization technique. The proposed MVSR normalization technique increased the classification accuracy of the CNN model from 83.01% to 96.16% for binary classification of chest X-ray images.
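A rough reading of the four MVSR stages, applied in the stated order to a dummy image; the exact constants and fixed-point details of the paper's FPGA pipeline may differ, so this is purely illustrative.

```python
# Mean -> variance -> softmax -> rescale normalization sketch (one possible reading).
import numpy as np

def mvsr_normalize(img):
    x = img.astype(np.float64)
    x = x - x.mean()                      # mean removal
    x = x / (x.std() + 1e-8)              # variance normalization
    e = np.exp(x - x.max())               # softmax over all pixels
    s = e / e.sum()
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    return (s * 255).astype(np.uint8)     # rescale to the 8-bit range

xray = np.random.randint(0, 256, (224, 224), dtype=np.uint8)  # dummy X-ray
out = mvsr_normalize(xray)
print(out.min(), out.max())               # 0 255
```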
Nature-inspired optimization algorithms and their significance in multi-thresholding image segmentation: an inclusive review
Multilevel Thresholding (MLT) is considered a significant and imperative research field in image segmentation that can efficiently resolve difficulties arising while analyzing the segmented regions of multifaceted images with complicated nonlinear conditions. MLT, though simple to state, is an exponential combinatorial optimization problem commonly phrased by means of a sophisticated objective function that can only be addressed by nondeterministic approaches. Consequently, researchers are adopting Nature-Inspired Optimization Algorithms (NIOA) as an alternative methodology that can be widely employed for resolving problems related to MLT. This paper delivers an informed review of novel NIOA developed in the last three years (2019-2021), highlighting and exploring the major challenges encountered during the development of NIOA-based image multi-thresholding models.
Super-forecasting the 'technological singularity' risks from artificial intelligence
This article investigates cybersecurity (and risk) in the context of the 'technological singularity' from artificial intelligence. The investigation constructs multiple risk forecasts that are synthesised in a new framework for counteracting risks from artificial intelligence (AI) itself. In other words, the research in this article is not just concerned with securing a system but also with analysing how the system responds when (internal and external) failures and compromises occur. This is an important methodological principle because not all systems can be secured, and totally securing a system is not feasible. Thus, we need to construct algorithms that will enable systems to continue operating even when parts of the system have been compromised. Furthermore, the article forecasts emerging cyber-risks from the integration of AI in cybersecurity. Based on the forecasts, the article concentrates on creating synergies between the existing literature, the data sources identified in the survey, and the forecasts. The forecasts are used to increase the feasibility of the overall research and enable the development of novel methodologies that use AI to defend against cyber-risks. The methodology is focused on addressing the risk of AI attacks, as well as on forecasting the value of AI in defence and in preventing rogue AI devices from acting independently.
Deep recurrent Gaussian Nesterovs recommendation using multi-agent in social networks
Due to the increasing volume of big data, the high volume of information in social networks prevents users from acquiring serviceable information intelligently, so many recommendation systems have emerged. Multi-agent deep learning is rapidly gaining attention, and its latest accomplishments address problems of real-world complexity. With big data, precise recommendation remains an open problem. The proposed work presents Deep Recurrent Gaussian Nesterov's Optimal Gradient (DR-GNOG), which combines deep learning with a multi-agent scenario for optimal and precise recommendation. The DR-GNOG is organized into an input layer, two hidden layers, and an output layer. The tweets obtained from the users are provided to the input layer by the Tweet Accumulator Agent. Then, in the first hidden layer, the Tweet Classifier Agent performs optimized and relevant tweet classification by means of the Gaussian Nesterov's Optimal Gradient model. In the second hidden layer, a Deep Recurrent Predictive Recommendation model is designed to address the vanishing gradient issue arising from updated tweets obtained from the same user at different time instances. Finally, with the aid of a hyperbolic activation function in the output layer, the building block of the predictive recommendation is obtained. In the experimental study, the proposed method is found to be 13-21% better than the existing GANCF and Bootstrapping methods in recommendation accuracy, 22-32% better in recommendation time, and 15-22% better in recall rate.
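For context on the optimizer component, a minimal Nesterov accelerated gradient update on a toy quadratic; the Gaussian and recurrent parts of DR-GNOG, and the multi-agent pipeline, are not reproduced.

```python
# Nesterov "look-ahead" gradient update on a toy quadratic (illustrative only).
import numpy as np

def grad(w):                      # gradient of f(w) = 0.5 * ||w - target||^2
    return w - np.array([3.0, -2.0])

w = np.zeros(2)
v = np.zeros(2)                   # velocity term
lr, momentum = 0.1, 0.9
for _ in range(200):
    lookahead = w + momentum * v  # evaluate the gradient at the look-ahead point
    v = momentum * v - lr * grad(lookahead)
    w = w + v
print(w)                          # converges toward [3.0, -2.0]
```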
FocusCovid: automated COVID-19 detection using deep learning with chest X-ray images
COVID-19 is an acronym for coronavirus disease 2019. Initially, it was called 2019-nCoV, and the International Committee on Taxonomy of Viruses (ICTV) later termed it SARS-CoV-2. On 30th January 2020, the World Health Organization (WHO) declared it a pandemic. With an increasing number of COVID-19 cases, the available medical infrastructure is essential for detecting suspected cases. Medical imaging techniques such as Computed Tomography (CT) and chest radiography can play an important role in the early screening and detection of COVID-19 cases. It is important to identify and isolate cases to stop the further spread of the virus. Artificial Intelligence can play an important role in COVID-19 detection and decrease the workload on the collapsing medical infrastructure. In this paper, a deep convolutional neural network-based architecture is proposed for COVID-19 detection using chest radiographs. The dataset used to train and test the model is available in different public repositories. Despite the high accuracy of the model, the decision on COVID-19 should be made in consultation with a trained medical clinician.
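A minimal PyTorch sketch of a binary chest X-ray classifier training step, purely to illustrate the kind of pipeline involved; it is not the FocusCovid architecture.

```python
# Tiny binary chest X-ray classifier with one training step (illustrative only).
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 2),           # COVID-19 vs. normal (assumed classes)
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

xray_batch = torch.randn(8, 1, 224, 224)      # dummy grayscale radiographs
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(xray_batch), labels)
loss.backward()
optimizer.step()
```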
Variable step-size evolving participatory learning with kernel recursive least squares applied to gas prices forecasting in Brazil
A prediction model is an indispensable tool in business, helping to make decisions in the short, medium, or long term. In this context, the implementation of machine learning techniques in time series forecasting models has notable relevance, as information processing and efficient, dynamic knowledge discovery are increasingly demanded. This paper develops a model called Variable Step-size evolving Participatory Learning with Kernel Recursive Least Squares (VS-ePL-KRLS), applied to forecasting weekly prices of S500 and S10 diesel oil at the Brazilian level over biweekly and monthly horizons. The presented model demonstrates better accuracy than analogous models in the literature, without loss of computational performance, for all time series analyzed.
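To show the recursive estimation idea underlying the KRLS component, here is the classical linear RLS recursion with a forgetting factor on synthetic data; the kernelized and evolving parts of VS-ePL-KRLS are not reproduced.

```python
# Classical recursive least squares (RLS) with a forgetting factor (illustrative only).
import numpy as np

def rls_update(w, P, x, y, lam=0.99):
    x = x.reshape(-1, 1)
    Px = P @ x
    k = Px / (lam + x.T @ Px)          # gain vector
    err = y - (w.T @ x).item()         # a priori prediction error
    w = w + k * err
    P = (P - k @ x.T @ P) / lam        # covariance update
    return w, P

rng = np.random.default_rng(0)
true_w = np.array([[1.5], [-0.7]])
w, P = np.zeros((2, 1)), np.eye(2) * 1e3
for _ in range(500):
    x = rng.normal(size=2)
    y = (true_w.T @ x.reshape(-1, 1)).item() + rng.normal(0, 0.05)
    w, P = rls_update(w, P, x, y)
print(w.ravel())                        # approximately [1.5, -0.7]
```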
A novel interval type-2 fuzzy Kalman filtering and tracking of experimental data
In this paper, a methodology for the design of a fuzzy Kalman filter, using interval type-2 fuzzy models in the discrete-time domain, via spectral decomposition of experimental data, is proposed. The adopted methodology consists of recursive parametric estimation of local state-space linear submodels of the interval type-2 fuzzy Kalman filter for tracking and forecasting the dynamics inherent in experimental data, using an interval type-2 fuzzy version of the Observer/Kalman Filter Identification (OKID) algorithm. The partitioning of the experimental data is performed by an interval type-2 fuzzy Gustafson-Kessel clustering algorithm. The interval Kalman gains in the consequent proposition of the interval type-2 fuzzy Kalman filter are updated according to unobservable components computed by recursive spectral decomposition of the experimental data. Computational results illustrate the efficiency of the proposed methodology for filtering and tracking the time-delayed state variables of Chen's chaotic attractor in a noisy environment, and experimental results illustrate its applicability for adaptive, real-time forecasting of the dynamic spread behavior of the novel coronavirus 2019 (COVID-19) outbreak in Brazil.
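For reference, the standard (crisp) Kalman predict/update recursion that the interval type-2 fuzzy filter generalizes, sketched in NumPy on a toy constant-velocity tracking problem:

```python
# Standard Kalman filter predict/update step (illustrative; not the fuzzy variant).
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

F = np.array([[1.0, 1.0], [0.0, 1.0]])        # constant-velocity state model
H = np.array([[1.0, 0.0]])                    # only the position is observed
Q, R = np.eye(2) * 1e-4, np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)

rng = np.random.default_rng(0)
for t in range(50):
    z = np.array([0.5 * t + rng.normal(0, 0.3)])   # noisy position readings
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(x)                                            # approx. [position, velocity ~ 0.5]
```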
DBF-Net: a semi-supervised dual-task balanced fusion network for segmenting infected regions from lung CT images
Accurate segmentation of infected regions in lung computed tomography (CT) images is essential to improve the timeliness and effectiveness of treatment for coronavirus disease 2019 (COVID-19). However, the main difficulties in developing lung lesion segmentation for COVID-19 remain the fuzzy boundary of the lung-infected region, the low contrast between the infected region and the normal region, and the difficulty of obtaining labeled data. To this end, we propose a novel dual-task consistent network framework that uses multiple inputs to continuously learn and extract lung infection region features, which is used to generate reliable label images (pseudo-labels) and expand the dataset. Specifically, we periodically feed multiple sets of raw and data-enhanced images into the two trunk branches of the network; the characteristics of the lung infection region are extracted by a lightweight double convolution (LDC) module and fusiform equilibrium fusion pyramid (FEFP) convolution in the backbone. According to the learned features, the infected regions are segmented, and pseudo-labels are produced based on the semi-supervised learning strategy, which effectively alleviates the problem of unlabeled data in semi-supervised learning. Our proposed semi-supervised dual-task balanced fusion network (DBF-Net) creates pseudo-labels on the COVID-SemiSeg dataset and the COVID-19 CT segmentation dataset. Furthermore, we perform lung infection segmentation with the DBF-Net model, achieving a segmentation sensitivity of 70.6% and a specificity of 92.8%. The results of the investigation indicate that the proposed network greatly enhances the segmentation ability for COVID-19 infection.
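A minimal pseudo-labeling loop sketch showing the general semi-supervised strategy on dummy tensors; DBF-Net's LDC/FEFP modules and dual-branch consistency are not reproduced.

```python
# Pseudo-labeling loop for semi-supervised segmentation (illustrative only).
import torch
from torch import nn

seg_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 1))              # toy segmentation net
optimizer = torch.optim.Adam(seg_net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

labeled_ct = torch.randn(4, 1, 64, 64)
labeled_mask = (torch.rand(4, 1, 64, 64) > 0.8).float()  # dummy lesion masks
unlabeled_ct = torch.randn(4, 1, 64, 64)

for epoch in range(5):
    # supervised loss on labeled slices
    loss = bce(seg_net(labeled_ct), labeled_mask)
    # pseudo-labels: confident predictions on unlabeled slices become targets
    with torch.no_grad():
        pseudo = (torch.sigmoid(seg_net(unlabeled_ct)) > 0.9).float()
    loss = loss + 0.5 * bce(seg_net(unlabeled_ct), pseudo)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```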
Vaccination and isolation based control design of the COVID-19 pandemic based on adaptive neuro fuzzy inference system optimized with the genetic algorithm
The study of the COVID-19 pandemic is of pivotal importance due to its tremendous global impact. This paper aims to control this disease using an optimal strategy comprising two methods: isolation and vaccination. In this regard, an optimized Adaptive Neuro-Fuzzy Inference System (ANFIS) is developed using the Genetic Algorithm (GA) to control the dynamic model of COVID-19 termed SIDARTHE (Susceptible, Infected, Diagnosed, Ailing, Recognized, Threatened, Healed, and Extinct). The number of diagnosed and recognized people is reduced by isolation, and the number of susceptible people is reduced by vaccination. The GA generates optimal control efforts related to the random initial number of each chosen group as the input data for ANFIS to train the Takagi-Sugeno (T-S) fuzzy structure coefficients. Three theorems are also presented to establish the positivity, boundedness, and existence of the solutions in the presence of the controller. The performance of the proposed system is evaluated through the mean squared error (MSE) and the root-mean-square error (RMSE). The simulation results show a significant decrease in the number of diagnosed, recognized, and susceptible individuals when the proposed controller is employed, even with a 70% increase in transmissibility caused by various variants.
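As a generic illustration of the GA component, a small real-coded genetic algorithm minimizing a toy cost; the actual GA in the paper tunes the ANFIS/controller parameters against the SIDARTHE dynamics.

```python
# Generic real-coded GA sketch (toy fitness stands in for the control cost).
import numpy as np

rng = np.random.default_rng(0)

def fitness(p):                       # toy cost to minimize
    return np.sum((p - 0.3) ** 2)

pop = rng.uniform(0, 1, (40, 5))      # 40 candidate parameter vectors
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:20]]                 # selection (elitism)
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        child = np.where(rng.random(5) < 0.5, a, b)        # uniform crossover
        child += rng.normal(0, 0.05, 5) * (rng.random(5) < 0.2)  # mutation
        children.append(np.clip(child, 0, 1))
    pop = np.vstack([parents, np.array(children)])
print(pop[np.argmin([fitness(p) for p in pop])])           # approx. [0.3, ..., 0.3]
```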
Evolving fuzzy neural classifier that integrates uncertainty from human-expert feedback
Evolving fuzzy neural networks are models capable of solving complex problems in a wide variety of contexts. In general, the quality of the data evaluated by a model has a direct impact on the quality of the results. Some procedures can generate uncertainty during data collection, which can be identified by experts to choose more suitable forms of model training. This paper proposes the integration of expert input on labeling uncertainty into evolving fuzzy neural classifiers (EFNC) in an approach called EFNC-U. Uncertainty is considered in the class label input provided by experts, who may not be entirely confident in their labeling or who may have limited experience with the application scenario for which the data is processed. Further, we aim to create highly interpretable fuzzy classification rules to gain a better understanding of the process and thus enable the user to elicit new knowledge from the model. To validate our technique, we performed binary pattern classification tests in two application scenarios: cyber intrusion and fraud detection in auctions. By explicitly considering class label uncertainty in the update process of the EFNC-U, improved accuracy trend lines were achieved compared to fully (and blindly) updating the classifiers with uncertain data. Integration of (simulated) labeling uncertainty smaller than 20% led to accuracy trends similar to those obtained using the original streams (unaffected by uncertainty), demonstrating the robustness of our approach up to this uncertainty level. Finally, interpretable rules were elicited for a particular application (auction fraud identification) with reduced (and thus readable) antecedent lengths and with certainty values in the consequent class labels. Additionally, the average expected uncertainty of the rules was elicited based on the uncertainty levels in the samples that formed the corresponding rules.
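One simple way to realize "down-weight uncertain labels during updates" with off-the-shelf tools is per-sample weighting, sketched below on synthetic data; EFNC-U's fuzzy rule evolution and interpretability machinery are not reproduced.

```python
# Incremental updates weighted by (simulated) expert label certainty (illustrative only).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(random_state=0)

for batch in range(50):
    X = rng.normal(size=(32, 10))
    y = (X[:, 0] + 0.3 * rng.normal(size=32) > 0).astype(int)   # noisy labels
    certainty = rng.uniform(0.5, 1.0, size=32)   # simulated expert confidence
    # uncertain samples contribute less to the update
    clf.partial_fit(X, y, classes=[0, 1] if batch == 0 else None,
                    sample_weight=certainty)

X_test = rng.normal(size=(200, 10))
y_test = (X_test[:, 0] > 0).astype(int)
print(clf.score(X_test, y_test))
```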
