8 results
- Data for: Distance-based customer detection in fake follower markets. The dataset includes legitimate users and customers of fake follower markets. Please read the Readme.txt file, which describes each file. The files in the dataset cover labelled users, follower relationships, following relationships, and follower distances from users. Included files are legitimate_id.txt, buyer_id.txt, ffollower_id.txt, follower.txt, followee.txt, legitimate_distance.txt, buyer_distance.txt, and Readme.txt. A minimal loading sketch follows this entry.
- Dataset
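A minimal loading sketch for the files listed above, assuming each *_id.txt file holds one user ID per line and that the relationship and distance files are whitespace-separated pairs; Readme.txt is the authoritative description and the real layouts may differ.

```python
# Illustrative only: the file layouts below are assumptions; consult Readme.txt
# for the authoritative description of each file.
from pathlib import Path

def read_ids(path):
    """Read one user ID per line (assumed format)."""
    return {line.strip() for line in Path(path).read_text().splitlines() if line.strip()}

def read_pairs(path):
    """Read whitespace-separated pairs, e.g. (user, follower) or (user, distance)."""
    pairs = []
    for line in Path(path).read_text().splitlines():
        parts = line.split()
        if len(parts) >= 2:
            pairs.append((parts[0], parts[1]))
    return pairs

legitimate = read_ids("legitimate_id.txt")
buyers = read_ids("buyer_id.txt")
fake_followers = read_ids("ffollower_id.txt")

# Hypothetical use: mean follower distance over the labelled buyers.
buyer_distances = read_pairs("buyer_distance.txt")
mean_distance = sum(float(d) for _, d in buyer_distances) / max(len(buyer_distances), 1)
print(f"{len(buyers)} buyers, mean follower distance {mean_distance:.2f}")
```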
- uiHRDC (universal indexes for Highly Repetitive Document Collections) is a replication framework licensed under the GNU Lesser General Public License v2.1 (GNU LGPL). It includes all the elements required to reproduce the main experiments of the paper [1], including datasets, query patterns, source code and scripts. The general structure of the uiHRDC repository includes: i) a directory benchmark, which contains a LaTeX-formatted report and a script that collects the data files produced by the experiments and generates a PDF report with the most relevant figures; ii) a directory data, which includes the text collections (7z compressed) and the query patterns; iii) directories indexes and self-indexes, which contain the source code for each indexing alternative and scripts to run all the experiments for each technique (each experiment builds the compressed index of interest using a builder program, then performs locate and extract operations over that index using the corresponding searcher program, and writes the relevant data to a results-data file); and iv) a script doAll.sh that drives the whole process: decompressing the source collections, compiling the sources for each index and running its experiments, and finally generating the report. A sketch of invoking doAll.sh from Python follows this entry. [1] F. Claude, A. Fariña, M. A. Martínez-Prieto, and G. Navarro. Universal Indexes for Highly Repetitive Document Collections. Information Systems, 61:1–23, 2016.
- Dataset
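A minimal driver sketch, assuming a local clone of the repository in a directory named uiHRDC and that doAll.sh can be launched from the repository root without arguments; the log path is a hypothetical choice.

```python
# Run the repository's own doAll.sh driver and capture its output.
# The clone location and log file name are assumptions for illustration.
import subprocess
from pathlib import Path

repo_root = Path("uiHRDC")           # assumed local clone location
log_path = repo_root / "doAll.log"   # hypothetical log destination

with open(log_path, "w") as log:
    # doAll.sh decompresses the collections, builds every index,
    # runs the locate/extract experiments and generates the PDF report.
    result = subprocess.run(
        ["bash", "doAll.sh"],
        cwd=repo_root,
        stdout=log,
        stderr=subprocess.STDOUT,
    )
print("doAll.sh exit code:", result.returncode)
```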
- JSON Datasets for Exploratory OLAP. These datasets have been used to evaluate the EXODuS approach: EXploratory OLAP over Document Stores. The games dataset was collected by Sports Reference LLC. It contains around 32K nested documents representing NBA games in the period 1985-2013. Each document represents a game between two teams with at least 11 players each and contains 47 attributes, 40 of which are numeric and represent team and player results. The DBLP dataset contains 2M documents scraped from DBLP in XML format and converted into JSON. Documents are flat and represent eight kinds of publications, including conference proceedings, journal articles, books, theses, etc. The third portion of the dataset represents author pages, which contain about half as many fields as the other kinds. Documents therefore have shared attributes such as title, author, type and year, and unshared ones such as journal and booktitle. The Twitter dataset contains 2M tweets scraped from the Twitter API. Each document represents a tweet message and its metadata, which contains some nested objects: a user object that represents the author of the tweet, a place object that gives its location, and a retweet object if it is a reply. The dataset is heterogeneous and mixes tweets with documents returned by an API call for tweet deletions. The sources of the datasets are listed in the Related links section. An illustrative document-flattening sketch follows this entry.
- Dataset
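To make the nested document shape concrete, here is a hedged sketch of what a tweet-like document might look like and how dotted attribute paths could be extracted for exploratory OLAP; the field names are illustrative stand-ins, not the datasets' actual schema.

```python
# Illustrative document only: field names are assumptions, not the real schema.
sample_tweet = {
    "id": 1,
    "text": "example tweet",
    "user": {"id": 42, "name": "alice"},           # nested author object
    "place": {"country": "US", "city": "Boston"},  # nested location object
    "retweeted_status": None,                      # present only for replies/retweets
}

def flatten(doc, prefix=""):
    """Recursively flatten a nested JSON document into dotted attribute paths."""
    out = {}
    for key, value in doc.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, path + "."))
        else:
            out[path] = value
    return out

print(sorted(flatten(sample_tweet)))
# ['id', 'place.city', 'place.country', 'retweeted_status', 'text', 'user.id', 'user.name']
```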
- HESML_vs_SML: scalability and performance benchmarks between the HESML V1R2 and SML 0.9 semantic measures libraries. This dataset introduces HESML_vs_SML_test.jar, a companion reproducibility Java console program for the work introduced by Lastra-Díaz and García-Serrano [1]. This latter work introduces the Half-Edge Semantic Measures Library (HESML) and carries out an experimental survey comparing HESML V1R2, the Semantic Measures Library (SML) 0.9 [2] and the WNetSS [4] semantic measures libraries. The HESML_vs_SML_test.jar program runs the set of performance and scalability benchmarks detailed in [1] and generates the figures and tables of results reported in that work, which are also enclosed as complementary files of this dataset (see files below). A launch sketch follows this entry. Licensing note: the HESML_vs_SML_test.jar program is based on the HESML V1R2 [3], SML 0.9 [2] and WNetSS [4] semantic measures libraries, and it includes these libraries in its distribution, as well as WordNet 3.0 [6] and the SimLex665 [5] dataset. Thus, if you use this dataset, you should also cite the works related to these resources. References: [1] Lastra-Díaz, J. J., & García-Serrano, A. (2016). HESML: a scalable ontology-based semantic similarity measures library with a set of reproducible experiments and a replication dataset. To appear in Information Systems Journal. [2] Harispe, S., Ranwez, S., Janaqi, S., & Montmain, J. (2014). The Semantic Measures Library: Assessing Semantic Similarity from Knowledge Representation Analysis. In E. Métais, M. Roche, & M. Teisseire (Eds.), Proc. of the 19th International Conference on Applications of Natural Language to Information Systems (NLDB 2014) (Vol. 8455, pp. 254–257). Montpellier, France: Springer. http://dx.doi.org/10.1007/978-3-319-07983-7_37 [3] Lastra-Díaz, J. J., & García-Serrano, A. (2016). HESML V1R2 Java software library of ontology-based semantic similarity measures and information content models. Mendeley Data, v2. https://doi.org/10.17632/t87s78dg78.2 [4] Ben Aouicha, M., Taieb, M. A. H., & Ben Hamadou, A. (2016). SISR: System for integrating semantic relatedness and similarity measures. Soft Computing, 1–25. http://dx.doi.org/10.1007/s00500-016-2438-x [5] Hill, F., Reichart, R., & Korhonen, A. (2015). SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation. Computational Linguistics, 41(4), 665–695. http://dx.doi.org/10.1162/COLI_a_00237 [6] Miller, G. A. (1995). WordNet: A Lexical Database for English. Communications of the ACM, 38(11), 39–41. http://dx.doi.org/10.1145/219717.219748
- Dataset
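A minimal launch sketch, assuming the jar sits in the working directory and runs without command-line arguments; the dataset's own documentation is authoritative on both points.

```python
# Assumption: HESML_vs_SML_test.jar is in the current directory and needs no
# command-line arguments; check the dataset documentation before relying on this.
import subprocess

result = subprocess.run(
    ["java", "-jar", "HESML_vs_SML_test.jar"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("benchmark run failed:", result.stderr)
```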
- WordNet-based word similarity reproducible experiments based on HESML V1R1 and ReproZip. This dataset is provided as supplementary material of the paper by Lastra-Díaz, J. J., & García-Serrano, A. (2016). HESML: a scalable ontology-based semantic similarity measures library with a set of reproducible experiments and a replication dataset. Information Systems. This dataset contains a ReproZip reproducible experiment file, called "HESMLv1r1_reproducible_exps.rpz", which allows the experimental surveys on word similarity on WordNet introduced in the three papers below to be reproduced exactly; a replay sketch follows this entry. [1] Lastra-Díaz, J. J., & García-Serrano, A. (2015). A novel family of IC-based similarity measures with a detailed experimental survey on WordNet. Engineering Applications of Artificial Intelligence Journal, 46, 140–153. http://dx.doi.org/10.1016/j.engappai.2015.09.006 [2] Lastra-Díaz, J. J., & García-Serrano, A. (2015). A new family of information content models with an experimental survey on WordNet. Knowledge-Based Systems, 89, 509–526. http://dx.doi.org/10.1016/j.knosys.2015.08.019 [3] Lastra-Díaz, J. J., & García-Serrano, A. (2016). A refinement of the well-founded Information Content models with a very detailed experimental survey on WordNet (No. TR-2016-01). NLP and IR Research Group. ETSI Informática. Universidad Nacional de Educación a Distancia (UNED). http://e-spacio.uned.es/fez/view/bibliuned:DptoLSI-ETSI-Informes-Jlastra-refinement
- Dataset
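One way to replay the packaged experiment is with the reprounzip tool; the sketch below wraps the usual Docker unpacker commands in Python, assuming reprounzip and the reprounzip-docker plugin are installed and that the experiment's default run is wanted. The unpack directory name is a hypothetical choice.

```python
# Assumes reprounzip plus the reprounzip-docker plugin are installed;
# see the ReproZip documentation for other unpackers (vagrant, directory, chroot).
import subprocess

RPZ_FILE = "HESMLv1r1_reproducible_exps.rpz"
TARGET_DIR = "hesml_experiment"  # hypothetical unpack directory

# Unpack the experiment into a Docker-based environment, then run it.
subprocess.run(["reprounzip", "docker", "setup", RPZ_FILE, TARGET_DIR], check=True)
subprocess.run(["reprounzip", "docker", "run", TARGET_DIR], check=True)
```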
- Reproducible experiments on dynamic resource allocation in cloud data centers. In Wolke et al. we compare the efficiency of different resource allocation strategies experimentally, focusing on dynamic environments where virtual machines need to be allocated to and deallocated from servers over time. In this companion paper, we describe the simulation framework and how to run simulations to replicate experiments or run new experiments within the framework. An illustrative placement sketch follows this entry.
- Dataset
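The allocation problem behind these experiments can be pictured with a toy first-fit placement loop; this is an illustrative baseline only, not the strategies or the simulation framework evaluated in the paper.

```python
# Toy first-fit placement of VM demands onto fixed-capacity servers.
# Purely illustrative; not the framework or the strategies from the paper.
def first_fit(vm_demands, server_capacity):
    """Place each VM on the first server with enough residual capacity,
    opening a new server when none fits. Returns the per-server loads."""
    servers = []
    for demand in vm_demands:
        for i, load in enumerate(servers):
            if load + demand <= server_capacity:
                servers[i] = load + demand
                break
        else:
            servers.append(demand)  # no server fits: open a new one
    return servers

loads = first_fit([0.5, 0.7, 0.2, 0.4, 0.9, 0.1], server_capacity=1.0)
print(len(loads), "servers used, loads:", loads)
```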
- Results: More than bin packing: Dynamic resource allocation strategies in cloud data centers. Data produced by the simulations and experiments that were used in our paper "More than bin packing: Dynamic resource allocation strategies in cloud data centers" (Information Systems).
- Dataset
- Replication Data for: Data pre-processing pipeline generation for AutoETL. Data pre-processing plays a key role in a data analytics process (e.g., applying a classification algorithm on a predictive task). It encompasses a broad range of activities that span from correcting errors to selecting the most relevant features for the analysis phase. There is no clear evidence, nor defined rules, on how pre-processing transformations impact the final results of the analysis. The problem is exacerbated when transformations are combined into pre-processing pipeline prototypes. Data scientists cannot easily foresee the impact of pipeline prototypes and hence require a method to discriminate between them and find the most relevant ones (e.g., with the highest positive impact) for the study at hand. Once found, these prototypes can be instantiated and optimized, e.g., using Bayesian Optimization. In this work, we study the impact of transformations when chained together into prototypes, and the impact of transformations when instantiated via various operators. We develop and scrutinize a generic method for generating pre-processing pipelines, as a step towards AutoETL. We use rules that enable the construction of prototypes (i.e., define the order of transformations) and rules that guide the instantiation of the transformations inside the prototypes (i.e., define the operator for each transformation). Optimizing our effective pipeline prototypes yields, compared to an exhaustive search, 90% of the predictive accuracy in the median at a time cost that is 24 times smaller. An illustrative prototype-instantiation sketch follows this entry.
- Dataset
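A hedged sketch of the core idea of separating a prototype (the order of transformations) from its instantiation (the concrete operator per transformation), using scikit-learn operators as stand-ins; the actual rules and operator sets are those defined in the paper's material, not the ones shown here.

```python
# Illustrative only: the prototype order and the candidate operators below are
# stand-ins, not the rules or the search space used in the paper.
from itertools import product
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.feature_selection import SelectKBest

# A "prototype" fixes the order of transformations...
prototype = ["impute", "rescale", "select"]

# ...and instantiation picks one concrete operator per transformation.
operators = {
    "impute": [SimpleImputer(strategy="mean"), SimpleImputer(strategy="median")],
    "rescale": [StandardScaler(), MinMaxScaler()],
    "select": [SelectKBest(k=5)],
}

# Enumerate every instantiation of the prototype as a scikit-learn Pipeline;
# each candidate could then be tuned further, e.g. with Bayesian Optimization.
pipelines = [
    Pipeline(list(zip(prototype, choice)))
    for choice in product(*(operators[step] for step in prototype))
]
print(len(pipelines), "candidate pipelines")  # 2 * 2 * 1 = 4
```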