An Experiment on Network Density and Sequential Learning: Data and Code

Published: 30-04-2021 | Version 1 | DOI: 10.17632/cxfvnn4kpd.1
Krishna Dasaratha


Data and code for the article "An Experiment on Network Density and Sequential Learning." We conduct a sequential social-learning experiment in which each subject guesses a hidden state based on a private signal and the guesses of a subset of their predecessors. A network determines which predecessors are observable, and we compare subjects' accuracy on sparse versus dense networks. Accuracy gains from social learning are twice as large on sparse networks as on dense networks. Models of naive inference, in which agents ignore the correlation between observations, predict this comparative static in network density, while the finding is difficult to reconcile with rational-learning models.


Steps to reproduce

NaiveCalculations.m produces the numerical values in Figure 1 and Table 4.
LowerBoundingRational.m produces Table 3.
code_density_experiment.R reproduces all other tables and figures in the article.
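A minimal command-line sketch of the reproduction steps above. It assumes MATLAB (R2019a or later, for the `-batch` flag) and Rscript are on your PATH, and that the scripts are run from the directory containing the data files; adjust paths to your local setup.

```shell
# Run the MATLAB scripts non-interactively.
matlab -batch "NaiveCalculations"       # numerical values in Figure 1 and Table 4
matlab -batch "LowerBoundingRational"   # Table 3

# Run the R script for all remaining tables and figures.
Rscript code_density_experiment.R
```

Older MATLAB releases can substitute `matlab -nodisplay -r "NaiveCalculations; exit"` for the `-batch` invocation.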