Cognitive decline after subthalamic deep brain stimulation in Parkinson’s disease is preceded by pre-surgery executive deficit: An observational longitudinal study

Published: 18 October 2021 | Version 1 | DOI: 10.17632/dmf94j4shn.1
Contributors:

Description

Bayesian multilevel models of long-term post-surgery cognitive decline after subthalamic nucleus deep brain stimulation (STN DBS) in Parkinson's disease (PD). The files include: (A) the models reported in the main text ("models with weakly informative priors" folder), (B) the models used for the sensitivity analysis reported in the supplemental materials ("models with flat priors" folder), and (C) R scripts to run a prior predictive check and to generate Tables 5 and 6 from the main text ("R scripts" folder).

The study investigated which pre-surgery cognitive profile characteristics predict post-surgery cognitive decline in PD patients treated with STN DBS. Cognitive decline is a severe non-motor complication in PD that can significantly reduce the benefits of STN DBS treatment. In the present study, we demonstrated that a pre-surgery executive deficit predicts faster post-surgery decline on a dementia screening measure (the Mattis Dementia Rating Scale, DRS-2).

The sample comprised a respectable 106 patients followed for 3-11 years after STN DBS surgery. The majority of patients had at least three assessments, which made it easier for us to fit multilevel models and enjoy some of their cool properties, such as partial pooling.

Since the study was a series of retrospective clinical observations, it is not ethically acceptable for us to publish the patients' raw data. However, you, dear reader, can still inspect all the Bayesian models that were fitted, explore their specifications and prior predictions, or even use our posteriors as priors for your own research! Kind regards, Josef Mana
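To give a concrete sense of what such a multilevel model looks like, below is a minimal brms sketch of a "time-only" model of DRS-2 scores over time since surgery. The variable names (drs, time_y, id), the simulated stand-in data, and the specific weakly informative priors are illustrative assumptions on my part, not the exact specification stored in the published model files.

library(brms)

# Simulated stand-in data (the real patient data cannot be shared):
# one row per patient per assessment, in long format.
set.seed(2021)
dat <- data.frame(
  id     = rep(1:20, each = 4),
  time_y = rep(c(0, 1, 3, 5), times = 20)
)
dat$drs <- 140 - 0.8 * dat$time_y + rnorm(nrow(dat), mean = 0, sd = 3)

# "Time-only" model: population-level change over time plus patient-specific
# intercepts and slopes, which is where partial pooling comes from.
fit_time_only <- brm(
  drs ~ 1 + time_y + (1 + time_y | id),
  data   = dat,
  family = gaussian(),
  prior  = c(
    prior(normal(140, 10), class = "Intercept"),  # DRS-2 total score tops out at 144
    prior(normal(0, 2),    class = "b"),          # plausible yearly change in points
    prior(exponential(1),  class = "sd"),
    prior(exponential(1),  class = "sigma")
  ),
  chains = 4, cores = 4, seed = 2021
)

summary(fit_time_only)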

Files

Steps to reproduce

Use the "mana_dbs_cognition_prior_predictive_check.R" script to derive a prior predictive check for the "time-only" model. Use the "mana_dbs_cognition_tab5_tab6.R" script to generate Table 5 and Table 6 from the main text.

I think the most instructive outcome of this study is that it shows how respecting some of the theoretical considerations behind cognitive functions (in this study, the psychometric distinction between observed task scores and derived latent function scores) can go a long way towards enhancing our predictions for practical purposes. Try this (a minimal R sketch of these steps follows at the end of this section):

1) Go to the "models with weakly informative priors" folder.
2) Load the "time-only" model as m0, the "cognitive functions" model as m1, and the "cognitive tasks" model as m2 into your R environment (with the brms package loaded).
3) Type loo_compare(m0, m1, m2) and you will see that m1 (the "cognitive functions" model) easily outperforms the other two.

Since I could not provide the raw data, you cannot use these files to calculate stacking weights of the models (see dx.doi.org/10.1214/17-BA1091 for more information about Bayesian stacking). I did not have space in the article to report these weights, but I calculated them nevertheless, observe:

* weight of the "time-only" model = 0.170
* weight of the "cognitive functions" model = 0.713
* weight of the "cognitive tasks" model = 0.117

It seems to me that a reasonable preprocessing of raw cognitive scores can lead to better predictive results, and that is on top of the fact that it makes interpretation more straightforward. The way I approached this problem in the article was very simple and flawed in many regards (e.g., a formal theory of the functions extracted by factor analysis is missing). However, I believe it is a step forward. Too often I still see researchers in the neuropsychological and neurological literature analyzing their cognitive data on raw task scores and interpreting the results as reflecting the latent cognitive functions. We can do better! Warm regards, Josef Mana
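To make the comparison above concrete, here is a minimal R sketch of the workflow. It assumes the fitted brmsfit objects are stored as .rds files with the (guessed) file names below and that the LOO criterion was added to them before saving; adjust the paths to whatever the files in the folder are actually called.

library(brms)

# Load the three fitted models (file names are illustrative assumptions).
m0 <- readRDS("models with weakly informative priors/time_only.rds")
m1 <- readRDS("models with weakly informative priors/cognitive_functions.rds")
m2 <- readRDS("models with weakly informative priors/cognitive_tasks.rds")

# Compare approximate leave-one-out predictive performance; the model with
# the highest expected log predictive density (elpd) is listed first.
# (If the loo criterion is not stored in the objects, it would have to be
# added with add_criterion(), which requires the original data.)
loo_compare(m0, m1, m2)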

Categories

Parkinson's Disease, Cognitive Impairment, Deep Brain Stimulation, Statistical Modeling, Clinical Neuroscience, Neuropsychology

Licence