Pattern mining in datasets is a Computer Science field that has gained popularity in recent years. A recent category in this field is fuzzy-temporal gradual pattern mining, which allows for the extraction of correlations between attributes of a dataset separated by an approximate time lag. For instance, a fuzzy-temporal gradual pattern may take the form “the more K, the less L almost 7 months later”. However, such patterns can change at different time intervals; for instance, after 10 years the pattern may evolve into “the more K, the more L almost 7 months later”. Several experiments have shown that this phenomenon, referred to as emerging trends, can occur when frequent patterns are extracted from two datasets with the same attributes. An emerging pattern is a pattern that appears more in one dataset than in another. In this paper, we argue that emerging fuzzy-temporal gradual patterns can be captured within the same dataset. We show this by proposing an approach for mining these patterns and testing it on a real dataset. Our results suggest that our approach can be integrated into an Internet of Things model in order to enable it to automatically stabilize its power consumption.
Data Types:
  • Software/Code
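The lagged concordance behind a pattern like "the more K, the less L almost 7 months later" can be sketched as a pairwise count. This is a minimal, illustrative computation, not the paper's algorithm; the function name and the concordant-pair formulation are assumptions.

```python
import numpy as np

def lagged_gradual_support(k, l, lag, direction=-1):
    """Fraction of index pairs (i, j), i < j, concordant with the pattern
    'the more K, the (less|more) L about `lag` steps later'.
    direction=-1 encodes 'the less L'; direction=+1 encodes 'the more L'."""
    k = np.asarray(k, dtype=float)
    l = np.asarray(l, dtype=float)
    n = len(k) - lag
    concordant, total = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            dk = k[j] - k[i]
            dl = l[j + lag] - l[i + lag]
            if dk != 0 and dl != 0:
                total += 1
                if np.sign(dk) * np.sign(dl) == direction:
                    concordant += 1
    return concordant / total if total else 0.0

# K rises while L, three steps later, falls: the pattern holds perfectly.
K = [1, 2, 3, 4, 5, 6, 7, 8]
L = [9, 9, 9, 8, 7, 6, 5, 4]
support = lagged_gradual_support(K, L, lag=3, direction=-1)
```

A pattern that "emerges" within one dataset would show this support changing markedly between time windows of the same series.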
As the use of recommender systems becomes generalized in society, the interest in manipulating the orientation of their recommendations increases. Shilling attack strategies introduce malicious profiles into collaborative filtering recommender systems in order to promote the attacker's own products or services, or to discredit those of the competition. Academic research against shilling attacks has focused on statistical approaches to detect unusual patterns in user ratings. Nowadays there is a growing research area focused on the design of robust machine learning methods to neutralize malicious profiles inserted into the system. This paper proposes an innovative robust method, based on matrix factorization, to neutralize shilling attacks. Our method obtains the reliability value associated with each prediction of a user for an item. By monitoring unusual reliability variations in item predictions, we can prevent shilling predictions from being promoted into erroneous recommendations. This paper provides more than thirteen thousand individual experiments involving a wide range of attack strategies, both push and nuke, in order to test the proposed approach. Results show that the proposed method is able to neutralize most of the existing attacks; its performance only decreases in situations of little relevance: when the attack size is not large enough to effectively affect the recommendations provided by the system. This Code Ocean capsule contains the code that was used to run the experiments of the related publication.
Data Types:
  • Software/Code
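The monitoring idea — flag items whose per-item prediction reliability shifts unusually between snapshots — can be sketched independently of the matrix factorization itself. This is a crude stand-in for the paper's detector; the z-score rule, threshold, and function name are assumptions.

```python
import numpy as np

def flag_suspect_items(reliability_before, reliability_after, z_thresh=2.0):
    """Flag items whose per-item reliability shifted unusually between two
    monitoring snapshots (a simplified stand-in for the paper's detector).
    Returns the indices of flagged items."""
    before = np.asarray(reliability_before, dtype=float)
    after = np.asarray(reliability_after, dtype=float)
    delta = after - before
    # Standardize the shifts and flag outliers.
    z = (delta - delta.mean()) / (delta.std() + 1e-12)
    return np.where(np.abs(z) > z_thresh)[0]

# Item 4's reliability collapses after a simulated push attack.
before = [0.90, 0.88, 0.91, 0.90, 0.89, 0.90]
after  = [0.90, 0.87, 0.90, 0.89, 0.30, 0.90]
suspects = flag_suspect_items(before, after)
```

Predictions for flagged items would then be withheld from the recommendation list rather than promoted.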
The capsule contains four files: three MATLAB files and one Python file. Two of the MATLAB files are used to generate random aggregate configurations for crushed and round RCA aggregate systems. The remaining MATLAB file is used to analyze the image obtained after aggregate generation. The Python code is used to generate the finite element model for the command console in the Diana Interactive Environment.
Data Types:
  • Software/Code
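Random placement of non-overlapping round aggregates is commonly done by rejection sampling. The capsule's generators are in MATLAB; the sketch below is a simplified Python analogue with made-up box and radius parameters, shown only to illustrate the idea.

```python
import random

def generate_round_aggregates(n, box=100.0, r_min=3.0, r_max=8.0,
                              max_tries=10000, seed=0):
    """Place n non-overlapping circular ('round') aggregates in a square
    box by rejection sampling: draw a candidate circle, keep it only if
    it overlaps no previously placed circle."""
    rng = random.Random(seed)
    placed = []
    tries = 0
    while len(placed) < n and tries < max_tries:
        tries += 1
        r = rng.uniform(r_min, r_max)
        x = rng.uniform(r, box - r)   # keep the circle inside the box
        y = rng.uniform(r, box - r)
        if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2
               for px, py, pr in placed):
            placed.append((x, y, r))
    return placed

aggregates = generate_round_aggregates(10)
```

Crushed (polygonal) aggregates follow the same rejection loop with a polygon intersection test instead of the circle distance check.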
MATLAB code + demo to reproduce results for "Sparse Principal Component Analysis with Preserved Sparsity". This code calculates the principal loading vectors for any given high-dimensional data matrix. The advantage of this method over existing sparse-PCA methods is that it can produce principal loading vectors with the same sparsity pattern for any number of principal components. Please see Readme.md for more information.
Data Types:
  • Software/Code
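The key property — every principal loading vector sharing one sparsity pattern — can be illustrated by fixing a support from the leading loading and computing all components on that support. This is an illustrative sketch, not the paper's MATLAB algorithm; the support-selection rule is an assumption.

```python
import numpy as np

def sparse_pcs_shared_support(X, k=2, support_size=4):
    """Sketch of sparsity-pattern-preserving sparse PCA: choose a support
    from the leading loading vector, then compute k principal loadings
    restricted to that support, so all k components are nonzero on
    exactly the same variables."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    lead = Vt[0]
    support = np.argsort(np.abs(lead))[::-1][:support_size]
    # Ordinary PCA restricted to the chosen variables.
    _, _, Vt_s = np.linalg.svd(Xc[:, support], full_matrices=False)
    loadings = np.zeros((k, X.shape[1]))
    loadings[:, support] = Vt_s[:k]
    return loadings

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 12))
X[:, :3] += 3 * rng.normal(size=(50, 1))   # correlated high-variance block
L = sparse_pcs_shared_support(X, k=2, support_size=4)
```

Both returned loading rows are zero outside the same four variables, which is the property ordinary deflation-based sparse PCA does not guarantee.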
Abstract of the paper: Sparse Coding Dictionary (SCD) learning decomposes a given hyperspectral image into a linear combination of a few bases. In a natural scene, because there is an imbalance in the abundance of materials, how well a given material is learned is directly proportional to its abundance in the training scene. Under random selection of training pixels, the probability that the bases learn a given material is proportional to its distribution in the scene. We propose to use the SOMP residue for sample selection at each iteration, for a more robust, 'more complete' learning. Experiments show that the proposed method learns both background and trace materials accurately, with a Pearson correlation coefficient above 0.95. Furthermore, the proposed implementation results in considerable improvements in target detection with the Adaptive Cosine Estimator (ACE).
Data Types:
  • Software/Code
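The selection principle — train on the pixels the current dictionary reconstructs worst, so rare trace materials are not drowned out by abundant ones — can be sketched in a toy loop. The real method uses SOMP; here a 1-atom projection keeps the sketch short, and the update rule is an assumption.

```python
import numpy as np

def residual_guided_dictionary(Y, n_atoms=4, n_iters=4, batch=10, seed=0):
    """Toy residue-guided dictionary learning: at each iteration the
    samples with the largest reconstruction residual drive the next
    dictionary refinement (the least-used atom is replaced by their
    mean residual direction)."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iters):
        # 1-sparse coding: best single atom per sample.
        corr = D.T @ Y                           # (atoms, samples)
        best = np.argmax(np.abs(corr), axis=0)
        coeff = corr[best, np.arange(Y.shape[1])]
        R = Y - D[:, best] * coeff               # per-sample residuals
        err = np.linalg.norm(R, axis=0)
        worst = np.argsort(err)[::-1][:batch]    # residue-based selection
        counts = np.bincount(best, minlength=n_atoms)
        new_atom = R[:, worst].mean(axis=1)
        if np.linalg.norm(new_atom) > 1e-9:
            D[:, np.argmin(counts)] = new_atom / np.linalg.norm(new_atom)
    return D

rng = np.random.default_rng(1)
Y = rng.normal(size=(20, 200))   # stand-in for hyperspectral pixels
D = residual_guided_dictionary(Y)
```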
A neural network force field (NNFF) is a method for performing regression on atomic structure–force relationships, bypassing the expensive quantum mechanics calculations that prevent long ab-initio-quality molecular dynamics simulations. However, most NNFF methods for complex multi-element atomic systems predict atomic force vectors indirectly, using only rotation-invariant features of the atomic structure together with computationally expensive spatial derivatives of the network outputs with respect to those features. We develop a staggered NNFF architecture that exploits rotation-invariant and rotation-covariant features separately to directly predict atomic force vectors without using spatial derivatives, thereby reducing expensive structural feature calculation by ~180–480×. This acceleration enables us to develop an NNFF that directly predicts atomic forces in complex ternary- and quaternary-element extended systems comprising long polymer chains, amorphous oxide, and surface chemical reactions. The staggered rotation-invariant–covariant architecture described here can also directly predict complex covariant vector outputs from local physical structures in domains beyond computational materials science.
Data Types:
  • Software/Code
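The covariant-output idea can be shown with a minimal sketch: predict the force on an atom as a sum of unit vectors toward its neighbors (covariant basis), each scaled by a learned function of rotation-invariant features. The scalar "network" below is a stand-in (simple repulsion); the architecture and feature choice are far simpler than the paper's.

```python
import numpy as np

def predict_force(center, neighbors, weight_fn):
    """Direct covariant force prediction: the force on `center` is a sum
    of unit vectors toward each neighbor, scaled by a function of a
    rotation-invariant feature (here just the distance). Rotating the
    structure rotates the prediction identically, with no spatial
    derivatives of network outputs required."""
    force = np.zeros(3)
    for nb in neighbors:
        d = nb - center
        r = np.linalg.norm(d)
        force += weight_fn(r) * (d / r)   # invariant weight * covariant basis
    return force

# Stand-in for a trained scalar network: short-range repulsion.
weight = lambda r: -1.0 / r ** 2

center = np.zeros(3)
neighbors = np.array([[2.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
F = predict_force(center, neighbors, weight)
```

Because the direction information lives entirely in the covariant basis vectors, only the cheap invariant features pass through the learned function.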
Benchmark for volt/VAR control of power distribution networks. The benchmark illustrates the suboptimality of purely local (fully decentralized) volt/VAR strategies compared to networked (distributed) strategies. The code generates the main figures of the associated publication.
Data Types:
  • Software/Code
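A purely local strategy of the kind the benchmark compares against is the piecewise-linear volt/VAR droop: each inverter reacts only to its own voltage measurement. The sketch below uses illustrative per-unit parameters, not the benchmark's actual settings.

```python
def local_volt_var(v, dead_lo=0.98, dead_hi=1.02, ramp=0.03, q_limit=0.44):
    """Piecewise-linear local volt/VAR droop (per-unit, illustrative
    parameters): zero reactive power inside the deadband, a linear ramp
    of width `ramp` outside it, saturating at the inverter limit.
    Positive return means VAR injection, negative means absorption."""
    if dead_lo <= v <= dead_hi:
        return 0.0
    if v > dead_hi:
        return -min(q_limit, q_limit * (v - dead_hi) / ramp)  # absorb
    return min(q_limit, q_limit * (dead_lo - v) / ramp)        # inject

q = local_volt_var(1.04)   # overvoltage -> absorb VARs
```

The suboptimality arises because each inverter sees only its own bus voltage; a networked strategy can coordinate the `q` setpoints across buses.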
PIETOOLS transforms a time-delay system to the partial-integral representation and then uses the standard PIETOOLS executives for: stability analysis; a dual version of stability analysis; Hinf-gain analysis; a dual version of Hinf-gain analysis; Hinf-optimal controller synthesis; and Hinf-optimal observer synthesis.
Data Types:
  • Software/Code
Source code for the paper entitled "Deep Transfer Learning for Intelligent Cellular Traffic Prediction Based on Cross-Domain Big Data". We propose STCNet, a deep learning framework for wireless traffic prediction.
Data Types:
  • Software/Code
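The cross-domain transfer idea — pre-train on a data-rich domain, then warm-start and briefly fine-tune on the scarce target domain — can be shown with a bare-bones autoregressive model in place of STCNet. Everything below (model, parameters, data) is a toy stand-in, not the paper's framework.

```python
import numpy as np

def fit_ar(series, order=3, w_init=None, steps=200, lr=0.01):
    """Least-squares autoregressive fit by gradient descent; `w_init`
    allows warm-starting from weights learned on another (source)
    domain, i.e. the simplest form of parameter transfer."""
    x = np.asarray(series, dtype=float)
    X = np.stack([x[i:i + order] for i in range(len(x) - order)])
    y = x[order:]
    w = np.zeros(order) if w_init is None else w_init.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
t = np.arange(300)
source = np.sin(0.3 * t) + 0.05 * rng.normal(size=300)        # data-rich domain
target = np.sin(0.3 * t[:40]) + 0.05 * rng.normal(size=40)    # scarce domain

w_src = fit_ar(source)
w_tl = fit_ar(target, w_init=w_src, steps=20)   # fine-tune a few steps
```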
Information fusion is an essential part of numerous engineering systems and biological functions, e.g., human cognition. Fusion occurs at many levels, ranging from the low-level combination of signals to the high-level aggregation of heterogeneous decision-making processes. While the last decade has witnessed an explosion of research in deep learning, fusion in neural networks has not observed the same revolution. Specifically, most neural fusion approaches are ad hoc, are not well understood, are distributed rather than localized, and/or have low explainability (if any at all). Herein, we prove that the fuzzy Choquet integral (ChI), a powerful nonlinear aggregation function, can be represented as a multi-layer network, referred to hereafter as ChIMP. We also put forth an improved ChIMP (iChIMP) that leads to a stochastic gradient descent-based optimization in light of the exponential number of ChI inequality constraints. An additional benefit of ChIMP/iChIMP is that it enables eXplainable AI (XAI). Synthetic validation experiments are provided and iChIMP is applied to the fusion of a set of heterogeneous architecture deep models in remote sensing. We show an improvement in model accuracy and our previously established XAI indices shed light on the quality of our data, model, and its decisions.
Data Types:
  • Software/Code
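The aggregation function at the core of ChIMP is the discrete Choquet integral, which is standard and small enough to sketch directly: sort the inputs, then weight each successive increment by the fuzzy measure of the coalition of inputs still at or above that level. The toy two-input measure below is made up for illustration.

```python
import numpy as np

def choquet_integral(h, mu):
    """Discrete Choquet integral of inputs h under fuzzy measure mu,
    where mu maps frozensets of input indices to [0, 1] with
    mu(full set) = 1. Each increment of the sorted inputs is weighted
    by the measure of the surviving coalition."""
    idx = np.argsort(h)                       # ascending order
    h_sorted = np.asarray(h, dtype=float)[idx]
    total, prev = 0.0, 0.0
    for k in range(len(h)):
        coalition = frozenset(idx[k:])        # inputs >= h_sorted[k]
        total += (h_sorted[k] - prev) * mu[coalition]
        prev = h_sorted[k]
    return total

# Toy measure on two inputs: mu({0}) = 0.3, mu({1}) = 0.6, mu({0,1}) = 1.
mu = {frozenset({0}): 0.3, frozenset({1}): 0.6, frozenset({0, 1}): 1.0}
val = choquet_integral([0.2, 0.8], mu)
```

The exponential number of constraints mentioned in the abstract comes from monotonicity of `mu` over all subset pairs, which is what iChIMP's SGD formulation addresses.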