In this work, we present a dataset collected from English-language Wikipedia and from Twitter users. In contrast to the corpora used in existing studies, we drew on Wikipedia, which contains a large number of articles covering a wide range of topics. We preprocessed the text by lemmatizing, converting slang into standard English, and removing stop words. The articles in the English-language Wikipedia were then converted to plain text: the wiki markup was parsed and a plain-text version of each article was produced. Using Tweepy to access the Twitter API, we collected data from Twitter users: we crawled the follower list of each user account and then retrieved the most recent tweets of each user in the study.
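The preprocessing pipeline described above can be sketched as follows. The paper does not specify which slang dictionary, stop-word list, or lemmatizer was used, so the tiny `SLANG` and `STOP_WORDS` tables below are placeholders, and `lemmatize` is a toy suffix-stripping stand-in for a real lemmatizer (e.g. NLTK's `WordNetLemmatizer`):

```python
import re

# Illustrative resources only -- the paper does not name the actual slang
# dictionary or stop-word list; these tiny samples are placeholders.
SLANG = {"u": "you", "gr8": "great", "pls": "please"}
STOP_WORDS = {"the", "a", "an", "is", "are", "to", "of"}

def lemmatize(token: str) -> str:
    """Toy suffix stripper standing in for a real lemmatizer."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text: str) -> list[str]:
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    tokens = [SLANG.get(t, t) for t in tokens]           # slang -> common English
    tokens = [t for t in tokens if t not in STOP_WORDS]  # remove stop words
    return [lemmatize(t) for t in tokens]                # lemmatize
```

In practice each step would be backed by full-sized resources, but the order shown (normalize slang, filter stop words, then lemmatize) matches the pipeline described in the text.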
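The conversion of Wiki markup to plain text can be illustrated with a minimal regex-based stripper. This is a sketch only: the paper does not name the tool it used, and a production pipeline would rely on a full parser (e.g. `mwparserfromhell`'s `strip_code`) rather than the handful of patterns handled here:

```python
import re

def wiki_to_text(markup: str) -> str:
    """Strip a few common wiki-markup constructs to recover plain text."""
    text = re.sub(r"\{\{[^{}]*\}\}", "", markup)                   # templates
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)  # links -> label
    text = re.sub(r"'{2,}", "", text)                              # bold/italic markers
    text = re.sub(r"==+\s*([^=]+?)\s*==+", r"\1", text)            # section headings
    text = re.sub(r"<[^>]+>", "", text)                            # inline HTML tags
    return re.sub(r"[ \t]+", " ", text).strip()                    # collapse spaces
```

Applied to each article, this returns a text version of the item with links reduced to their display labels and templates removed, as described in the text.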
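The Twitter collection step (crawl each account's follower list, then fetch each user's most recent tweets) can be sketched as the loop below. With real credentials this would go through a Tweepy client authenticated against the Twitter API; the `StubAPI` class and its sample data are inventions for illustration, standing in for Tweepy's follower-listing and `user_timeline` endpoints so the flow is runnable without network access:

```python
class StubAPI:
    """Placeholder for an authenticated Tweepy client; the data is invented."""
    _followers = {"alice": ["bob", "carol"], "bob": [], "carol": ["alice"]}
    _tweets = {"alice": ["t1", "t2", "t3"], "bob": ["t4"], "carol": []}

    def follower_ids(self, screen_name):
        return self._followers[screen_name]

    def user_timeline(self, screen_name, count):
        return self._tweets[screen_name][:count]

def collect(api, seed_users, max_tweets=2):
    """Crawl each seed user's follower list, then fetch the most recent
    tweets of every user encountered, as described in the text."""
    users = set(seed_users)
    for user in seed_users:
        users.update(api.follower_ids(user))  # crawl the follower list
    # Fetch the most recent tweets of each user in the study.
    return {u: api.user_timeline(u, count=max_tweets) for u in sorted(users)}
```

A real run would also need pagination and rate-limit handling, which Tweepy provides via its cursor helpers; those concerns are omitted here to keep the collection logic visible.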