Causal Feature Importance Dataset for Urban Traffic Level of Service Across Four U.S. Metropolitan Areas

Published: 3 April 2026 | Version 2 | DOI: 10.17632/tbdn8yhs83.2
Contributor:
Omid Mansorihanis

Description

This dataset supports the study 'What Really Drives Urban Traffic Congestion? A Causal Feature Importance Analysis Across Four Major U.S. Metropolitan Areas', submitted to the Journal of Transport Geography. The dataset contains the processed analytical feature matrix for 134,530 census blocks across Chicago, Houston, Los Angeles, and New York City. Each block record includes 46 causally upstream predictor features spanning six thematic categories (Built Environment, Accessibility/Network, Safety & Environment, Demographics, Mode Choice, and Land Use), along with the outcome variables (Congestion Index and a three-class Level of Service classification). All features were derived from five primary sources: TomTom GPS probe traffic data (2024 AM peak), US Census Bureau Decennial 2020 and ACS 5-Year 2019-2023 estimates, city Open Data Portal building footprints and parcel land use records, OpenStreetMap and city street network centerlines, and EPA AirNow PM2.5 monitoring together with state DOT crash records. Twenty-three circular and endogenous variables were excluded prior to model training, as described in the accompanying manuscript. The Python analysis code (scikit-learn, XGBoost, LightGBM) for model training, evaluation, and feature importance extraction is included.

Files

Steps to reproduce

1. Load the four city CSV files (Chicago, Houston, Los Angeles, New York) containing the 134,530-block feature matrix with the 46 causal predictor variables and the outcome variables (Congestion Index, LOS_3Class).
2. Run the preprocessing script: remove the 333 blocks with missing targets, apply median imputation (fitted on the training data only), label-encode the classification target (Good=0, Medium=1, Poor=2), and perform a stratified 80/20 train-test split with random_state=42.
3. Train the models (Linear Regression, Logistic Regression, Ridge Regression, Random Forest, Gradient Boosting, XGBoost, and LightGBM) on both the regression (Congestion Index target) and classification (LOS_3Class target) tasks using the hyperparameters specified in Table 4 of the manuscript.
4. Evaluate on the held-out test set using R², RMSE, and MAE for regression, and accuracy, precision, recall, and F1 for classification.
5. Extract Gini/MDI feature importance scores from the trained Gradient Boosting and Random Forest models.
6. Repeat steps 2-5 separately on each city's subset to produce the individual city models.

All code is provided in the included Python notebooks and runs in Google Colab with scikit-learn, xgboost, and lightgbm.
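The preprocessing, training, and importance-extraction steps above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the `feat_*` columns and the 500-row frame stand in for the actual 46 predictors and 134,530 blocks, and a single scikit-learn GradientBoostingClassifier stands in for the full model set.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score

# Synthetic placeholder data; in practice, load the four city CSVs instead.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame(rng.normal(size=(n, 5)),
                  columns=[f"feat_{i}" for i in range(5)])
df.loc[rng.choice(n, 20, replace=False), "feat_0"] = np.nan  # simulate gaps
df["LOS_3Class"] = rng.choice(["Good", "Medium", "Poor"], size=n)

# Step 2: drop rows with a missing target, encode Good=0/Medium=1/Poor=2,
# and do a stratified 80/20 split with random_state=42.
df = df.dropna(subset=["LOS_3Class"])
y = df["LOS_3Class"].map({"Good": 0, "Medium": 1, "Poor": 2})
X = df.drop(columns=["LOS_3Class"])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Median imputation fitted on the training split only, to avoid leakage.
imp = SimpleImputer(strategy="median")
X_tr_i = imp.fit_transform(X_tr)
X_te_i = imp.transform(X_te)

# Steps 3-4: fit one ensemble model and evaluate on the held-out test set.
model = GradientBoostingClassifier(random_state=42)
model.fit(X_tr_i, y_tr)
pred = model.predict(X_te_i)
print("accuracy:", accuracy_score(y_te, pred))
print("macro F1:", f1_score(y_te, pred, average="macro"))

# Step 5: Gini/MDI feature importances from the fitted ensemble.
importance = pd.Series(model.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False))
```

The same loop would be run per city (step 6) by filtering the frame before the split; the regression task is analogous with a regressor and R²/RMSE/MAE metrics.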

Institutions

Categories

Urban Planning, Transportation Engineering, GIS Database, Applied Machine Learning

Licence