Dataset of Malicious and Benign Webpages

Published: 02-05-2020| Version 2 | DOI: 10.17632/gdx3pkwp47.2


The dataset contains extracted attributes from websites that can be used for classification of webpages as malicious or benign. It also includes raw page content, including JavaScript code, that can be used as unstructured data in deep learning or for extracting further attributes. The data was collected by crawling the Internet using MalCrawler [1]. The labels were verified using the Google Safe Browsing API [2]. Attributes were selected based on their relevance [3]. The dataset attributes are as follows:

- 'url' - the URL of the webpage.
- 'ip_add' - the IP address of the webpage.
- 'geo_loc' - the geographic location where the webpage is hosted.
- 'url_len' - the length of the URL.
- 'js_len' - the length of the JavaScript code on the webpage.
- 'js_obf_len' - the length of the obfuscated JavaScript code.
- 'tld' - the top-level domain of the webpage.
- 'who_is' - whether the WHOIS domain information is complete or not.
- 'https' - whether the site uses HTTPS or HTTP.
- 'content' - the raw webpage content, including JavaScript code.
- 'label' - the class label: benign or malicious.

Python code for extracting the attributes listed above is attached. A visualisation of this dataset, along with its Python code, is also attached; the visualisation can be viewed online on Kaggle [5].
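As a minimal sketch of the schema, the attributes above can be held in a pandas DataFrame. The single row below is made up purely for illustration (the field values, such as the 'good' label and 'complete' WHOIS status, are assumptions, not taken from the published files); in practice the data would be loaded from the dataset's CSV files with `pandas.read_csv`.

```python
import pandas as pd

# One illustrative (fabricated) row matching the attribute list above.
# Real rows come from the published dataset files.
df = pd.DataFrame([
    {
        "url": "http://example.com/index.html",
        "ip_add": "93.184.216.34",
        "geo_loc": "United States",
        "url_len": 29,
        "js_len": 0,
        "js_obf_len": 0,
        "tld": "com",
        "who_is": "complete",   # assumed value format
        "https": "no",          # assumed value format
        "content": "<html>...</html>",
        "label": "good",        # assumed value format
    }
])

# Sanity-check that all eleven listed attributes are present
expected = ["url", "ip_add", "geo_loc", "url_len", "js_len",
            "js_obf_len", "tld", "who_is", "https", "content", "label"]
assert list(df.columns) == expected
```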


Steps to reproduce

1. Crawl the Internet using MalCrawler [1].
2. Clean the data using customised Python code.
3. Extract the URL, URL length, and HTTPS status using customised Python code. Extract the TLD attribute using the tld library.
4. Compute the geographic location using the GeoIP database [3].
5. Add the WHOIS domain information using the WHOIS API [4].
6. Extract the JavaScript from the web content. Compute the length of the JavaScript code and of any obfuscated JavaScript code present.
7. Assign class labels for malicious/benign webpages using the Google Safe Browsing API [2].
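The URL- and JavaScript-based steps above (steps 3 and 6) can be sketched as a single extraction function. This is a simplified illustration using only the standard library: it guesses the TLD from the hostname rather than using the tld package the pipeline relies on, and it measures only inline `<script>` bodies, without the obfuscation detection needed for 'js_obf_len'.

```python
import re
from urllib.parse import urlparse

def extract_features(url: str, page_content: str) -> dict:
    """Sketch of steps 3 and 6: URL-derived and JavaScript-derived attributes."""
    parsed = urlparse(url)
    features = {
        # Step 3: URL, its length, and HTTPS status
        "url": url,
        "url_len": len(url),
        "https": "yes" if parsed.scheme == "https" else "no",
        # Crude TLD guess: last dot-separated label of the hostname
        # (the actual pipeline uses the tld library instead)
        "tld": (parsed.hostname or "").rsplit(".", 1)[-1],
    }
    # Step 6: collect inline <script> bodies and total their length
    scripts = re.findall(r"<script[^>]*>(.*?)</script>",
                         page_content, flags=re.IGNORECASE | re.DOTALL)
    features["js_len"] = sum(len(s) for s in scripts)
    return features

feats = extract_features(
    "https://example.com/page",
    "<html><script>var a = 1;</script></html>",
)
# feats -> {'url': ..., 'url_len': 24, 'https': 'yes', 'tld': 'com', 'js_len': 10}
```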