- Data for: Feature-Fusion-Kernel-Based Gaussian Process Model for Probabilistic Long-Term Load Forecasting
The main file includes the training data and incremental data for each task. The template file includes the template for each task. The contestants have to submit probabilistic forecasts following the exact format and number of rows and columns shown in the template file. In each task, a benchmark is also provided to further illustrate the formatting. The benchmark in the load forecasting track was created by taking the same month of the previous year as the predicted load across all quantiles; that is, it takes a point forecast and expands it to 99 quantiles. The contestants do NOT have to provide the same value across all quantiles.
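The benchmark expansion described above can be sketched in a few lines. This is only an illustration of the idea, not the competition template: the forecast values and the 24-row layout here are invented, and the exact rows and columns to submit are defined by the template file.

```python
import numpy as np

# Hypothetical point forecast for 24 hourly loads (MW); the real template
# file defines the exact rows/columns that a submission must follow.
point_forecast = np.linspace(900.0, 1100.0, 24)

# Benchmark-style expansion: repeat the point forecast across all 99
# quantiles (Q1..Q99). Contestants may, and should, vary the quantiles.
quantiles = np.tile(point_forecast[:, None], (1, 99))

print(quantiles.shape)  # (24, 99)
```

A real submission would replace the repeated columns with genuinely different quantile estimates; the benchmark merely shows the required shape.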
- Study on Image Retrieval based on Image Texture and Color Statistical Projection
This article presents a retrieval method based on image texture and hue statistical projection. First, the image is converted to the HSI color model, its gray values are extracted, and the Robert operator is used to extract texture. The image is then divided into blocks, and the dominant color of each block is extracted. The dominant-color blocks are projected in the horizontal and vertical directions, yielding two projection histograms. The first three central moments of these two projection histograms, together with the Robert-operator texture, are used as features for image similarity calculation. This lays thorough groundwork for future research on image retrieval with the Canny edge-processing algorithm.
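The projection-histogram features described above can be sketched as follows. This is a minimal illustration of horizontal/vertical projections and their first three central moments; the paper's exact normalization, block selection, and texture step are not reproduced, and the function name is ours.

```python
import numpy as np

def projection_moments(block):
    """Project a 2-D gray block in the vertical and horizontal directions,
    then return the first three central moments of each projection histogram
    (a sketch of the feature described above; the paper's exact
    normalization may differ)."""
    feats = []
    for axis in (0, 1):                    # column sums, then row sums
        hist = block.sum(axis=axis).astype(float)
        p = hist / hist.sum()              # normalize to a distribution
        idx = np.arange(p.size)
        mean = (idx * p).sum()             # 1st moment
        var = ((idx - mean) ** 2 * p).sum()   # 2nd central moment
        skew = ((idx - mean) ** 3 * p).sum()  # 3rd central moment
        feats.extend([mean, var, skew])
    return feats

feats = projection_moments(np.ones((8, 8)))
print(len(feats))  # 6: three moments per projection direction
```

Similarity between two images would then be computed as a distance between such feature vectors, combined with the texture features.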
- Codes and Data for (Generalized entropy based possibilistic fuzzy C-Means for clustering noisy data and its convergence proof)
Dear Researcher, thank you for using this code and datasets. I explain how the GEPFCM code related to my paper "Generalized entropy based possibilistic fuzzy C-Means for clustering noisy data and its convergence proof", published in Neurocomputing, works. The main datasets mentioned in the paper are included together with the GEPFCM code. If there is any question, feel free to contact me at: bas_salaraskari@yahoo.com or s_askari@aut.ac.ir. Regards, S. Askari
- Generalized entropy based possibilistic fuzzy C-means
Dear Researcher, thank you for using this code and datasets. I explain how the GEPFCM code related to my paper "Generalized entropy based possibilistic fuzzy C-Means for clustering noisy data and its convergence proof", published in Neurocomputing, works. The main datasets mentioned in the paper are included together with the GEPFCM code. If there is any question, feel free to contact me at: bas_salaraskari@yahoo.com or s_askari@aut.ac.ir. Regards, S. Askari
Guidelines for the GEPFCM algorithm:
1. Open the file "GEPFCM Code" in MATLAB. This is the relaxed form of the algorithm for handling noisy data.
2. Enter or paste the name of the dataset you wish to cluster in line 15 after "load". This loads the dataset into the workspace.
3. For details of the parameters cFCM, cPCM, c1E, c2E, eta, and m, please read the paper.
4. Lines 17 and 18: "N" is the number of data vectors and "D" is the number of independent variables.
5. Line 26: "C" is the number of clusters. To input your own value for the number of clusters, uncomment this line and enter the value. Since the datasets provided here include "C", this line is commented out.
6. Line 28: "ruopt" is the optimal value of ρ discussed in equation 13 of the paper. To enter your own value of ρ, uncomment this line. Since the datasets provided here include "ruopt", this line is commented out.
7. If line 50 is commented out, the covariance norm (Mahalanobis distance) is used; if it is uncommented, the identity norm (Euclidean distance) is used.
8. When you run the algorithm, FCM is applied to the data first. The cluster centers calculated by FCM initialize PFCM. PFCM is then applied to the data, and the cluster centers computed by PFCM initialize GEPFCM. Finally, GEPFCM is applied to the data.
9. For a two-dimensional plot, uncomment lines 419-421 and comment out lines 423-425. For a three-dimensional plot, comment out lines 419-421 and uncomment lines 423-425.
10. To run the algorithm, press Ctrl+Enter on your keyboard.
11. For your own dataset, please arrange the data like the datasets described in the MS Word file "Read Me".
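The warm-start cascade in step 8 (FCM seeds PFCM, PFCM seeds GEPFCM) can be illustrated with plain fuzzy C-means. This NumPy sketch implements only standard FCM, not the PFCM or GEPFCM update rules, which are given in the paper; the `centers` argument shows how each later stage could be initialized from the previous one, and the function name, initialization, and synthetic data are ours.

```python
import numpy as np

def fcm(X, C, m=2.0, iters=50, centers=None):
    """Standard fuzzy C-means (a sketch only; PFCM/GEPFCM updates are in
    the paper). `centers` lets a later stage be warm-started, per step 8."""
    if centers is None:
        # deterministic init: spread initial centers across the data
        centers = X[np.linspace(0, len(X) - 1, C).astype(int)]
    for _ in range(iters):
        # distances from every point to every center, shape (N, C)
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)   # memberships sum to 1 per point
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

# two well-separated synthetic clusters around (0, 0) and (3, 3)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(3.0, 0.1, (50, 2))])
c_fcm, _ = fcm(X, C=2)                  # stage 1: plain FCM
c_next, _ = fcm(X, C=2, centers=c_fcm)  # stage 2: warm-started, as in step 8
```

In the actual code the second and third stages run the PFCM and GEPFCM objective functions rather than repeating FCM; only the hand-off of cluster centers is illustrated here.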
- Data for: Multi-label Thresholding for Cost-sensitive Classification
Threshold choice methods were implemented in Java on the Meka platform (version 1.7.5).
- Data for: Detection of cervical cancer cells based on strong feature CNN-SVM network
1. Due to the large number of pictures, we selected only some of them for display.
2. "Original negative sample" and "Original positive sample" are raw data collected from our cooperating unit. We used 100 cervical liquid-based cell slides in total; for simplicity, we have selected one positive and one negative sample for publication so that you can see the appearance of our unprocessed raw data.
3. "Processed training material" contains the images after binarization, image segmentation, and image classification. This folder contains 400 epithelial cells, i.e., images of single cells after this series of processing steps. The epithelial cells are divided into two categories, 200 cancerous epithelial cells and 200 normal epithelial cells; as the names indicate, these are the typical samples used in the paper.
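The binarize-and-segment steps in point 3 can be sketched with a simple global threshold and connected-component cropping. This is only an illustration of the idea on a synthetic array; the paper's actual preprocessing pipeline and the CNN-SVM classifier are not reproduced, and the function name and threshold are ours.

```python
import numpy as np
from scipy import ndimage

def crop_cells(gray, threshold=128):
    """Binarize a gray image and crop each connected component: a minimal
    sketch of the binarization/segmentation steps described above."""
    mask = gray > threshold
    labels, n = ndimage.label(mask)          # label connected regions
    return [gray[sl] for sl in ndimage.find_objects(labels)]

img = np.zeros((20, 20), dtype=np.uint8)
img[2:6, 2:6] = 200       # one synthetic "cell"
img[10:15, 10:16] = 220   # another
cells = crop_cells(img)
print(len(cells))  # 2
```

Each crop would then be classified (cancerous vs. normal) by the downstream model; real slide images would of course need adaptive thresholding rather than a fixed cutoff.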
- Data for: Integrating Adaptive Moving Window and Just-in-Time Learning Paradigms for Soft-Sensor Design
MATLAB .mat file of dynamic simulations of a continuous stirred tank reactor coupled with an external heat exchanger, adapted from the modeling equations in S. Yoon, J.F. MacGregor, Fault diagnosis with multivariate statistical models part I: Using steady state fault signatures, J. Process Control 11 (2001) 387–400. The data consist of 20 repetitions of 700 samples of 57 predictors (including lagged process variable measurements) and 700 samples of a single response variable for 8 different concept drift models.
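The description does not state the variable names stored inside the .mat file, so this sketch writes and reads a synthetic file with hypothetical names (`X`, `y`) purely to illustrate the stated shapes: 700 samples of 57 predictors plus a 700-sample response, per repetition.

```python
import os
import tempfile
import numpy as np
from scipy.io import loadmat, savemat

# Hypothetical variable names 'X' and 'y'; the real file's names may differ.
path = os.path.join(tempfile.gettempdir(), "cstr_demo.mat")
savemat(path, {"X": np.zeros((700, 57)), "y": np.zeros((700, 1))})

data = loadmat(path)
print(data["X"].shape, data["y"].shape)  # (700, 57) (700, 1)
```

With the actual dataset, `loadmat` on the provided file and inspecting `data.keys()` reveals the true variable names for the 20 repetitions and 8 drift models.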
- Data for: Person Re-Identification From Virtuality to Reality via Modality Invariant Adversarial Mechanism
The code for the proposed MIAM method.
- Data for: Fine-Grained Visual Categorization of Butterfly Specimens at Sub-species Level Via a Convolutional Neural Network with skip-connections
For performance evaluation, a total of 24,836 images of butterfly specimens spanning 56 sub-species were acquired as a benchmark dataset, chosen for their strong similarity across subordinate categories. The camera used was a Canon EOS 5D Mark IV, and the shooting distance was three to seven cm depending on the specimen size. The image format was JPEG, and each image was a 24-bit color bitmap. Each image was assigned to its ground-truth category with the help of entomology experts. It is an interesting but challenging dataset for performance verification of fine-grained visual categorization of butterfly specimens.
- Data for: Portfolio Optimization of Digital Currency: A Deep Reinforcement Learning with Multidimensional Attention Gating Mechanism
The code for the paper.