BACKGROUND: The aim of this study is to validate a previously published consensus-based quality indicator set for the management of patients with traumatic brain injury (TBI) at intensive care units (ICUs) in Europe and to study its potential for quality measurement and improvement. METHODS: Our analysis was based on 2006 adult patients admitted to 54 ICUs between 2014 and 2018, enrolled in the CENTER-TBI study. Indicator scores were calculated as percentage adherence for structure and process indicators and as event rates or median scores for outcome indicators. Feasibility was quantified by the completeness of the variables. Discriminability was determined by the between-centre variation, estimated with a random-effects regression model adjusted for case-mix severity and quantified by the median odds ratio (MOR). Statistical uncertainty of outcome indicators was determined by the median number of events per centre, using a cut-off of 10. RESULTS: A total of 26/42 indicators could be calculated from the CENTER-TBI database. Most quality indicators proved feasible to obtain, with more than 70% completeness. Sub-optimal adherence was found for most quality indicators, ranging from 26% to 93% for structure indicators and from 20% to 99% for process indicators. Significant (p < 0.001) between-centre variation was found in seven process and five outcome indicators, with MORs ranging from 1.51 to 4.14. Statistical uncertainty of outcome indicators was generally high; five out of seven had fewer than 10 events per centre. CONCLUSIONS: Overall, nine structure indicators and five process indicators, but none of the outcome indicators, showed potential for quality improvement purposes for TBI patients in the ICU. Future research should focus on implementation efforts and continuous re-evaluation of quality indicators. TRIAL REGISTRATION: The core study was registered with ClinicalTrials.gov, number NCT02210221, registered on August 06, 2014, with Resource Identification Portal (RRID: SCR_015582).
Data Types:
  • Document
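The discriminability analysis above hinges on the median odds ratio (MOR), which translates the between-centre variance of a random-intercept logistic model onto an odds-ratio scale via MOR = exp(sqrt(2·σ²)·Φ⁻¹(0.75)). The sketch below illustrates that mapping only; it is not the study's code, and the variance values are hypothetical.

```python
# Illustrative sketch, not the study's code: turning the between-centre variance of a
# random-intercept logistic model into a median odds ratio (MOR).
# MOR = exp(sqrt(2 * sigma2) * z_0.75), where sigma2 is the random-intercept variance
# and z_0.75 is the 75th percentile of the standard normal distribution.
import math

from scipy.stats import norm


def median_odds_ratio(between_centre_variance: float) -> float:
    """Median odds ratio for a given between-centre (random-intercept) variance."""
    z75 = norm.ppf(0.75)  # approximately 0.6745
    return math.exp(math.sqrt(2.0 * between_centre_variance) * z75)


if __name__ == "__main__":
    # Hypothetical variances chosen only to show the mapping: an MOR of 1 means
    # no between-centre variation, larger values mean stronger centre effects.
    for sigma2 in (0.0, 0.2, 0.75, 2.2):
        print(f"sigma^2 = {sigma2:.2f} -> MOR = {median_odds_ratio(sigma2):.2f}")
```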
Biological stock centres collect, care for and distribute living organisms for scientific research. In the 1990s, several of the world's largest Drosophila (fruit fly) stock centres were closed or threatened with closure. This paper reflects on why this happened, and uses the visibility of these endings to examine how stock centre collections are managed, who maintains them and how they are kept valuable and accessible to biologists. One stock centre came under threat because of challenges in caring for flies and monitoring the integrity of stocks. Another was criticized for keeping too many 'archival' stocks, an episode that reveals what it can mean for a living scientific collection to remain 'relevant' to a research community. That centre also struggled with the administrative and documentary practices that have proved crucial for sustaining a collection's meaning, value and availability. All of the stock centres in this story faced challenges of how to pay for care and curation, engaging with a problem that has been discussed by biologists and their funders since the 1940s: what are the best models for stock provision, and how could these models be changed?
Data Types:
  • Document
This study is a reconsideration of the place and significance of the natural light of human reason in attaining knowledge of God according to the thought of Henri de Lubac. In the Anglophone theological literature of the later twentieth century, a notable consensus has emerged that de Lubac's celebrated text, Surnaturel (1946), implicitly disallows any status for natural knowledge of God in advance of the activity of created grace. The present thesis argues that de Lubac's clarification of the 'pure nature' debate was related to an even more pervasive concern with 'natural theology', and that, given de Lubac's central theological vision and the actual evidence within his total œuvre, the seeming consensus on the implications of Surnaturel may not stand. This study seeks to clarify and reconceive the natural theology task which de Lubac undertook, arguing that it was this project, even beyond his interest in the nature-grace relation, which was his abiding life's interest; the metaphysical attitude displayed here organizes the interior of his wider theological œuvre. De Lubac's natural theology has been profoundly misunderstood since the time of its release, and this misunderstanding may be seen to be the most proximate reason for de Lubac's censure by the Jesuits in 1950, which in turn perpetuated the continuing misconception. The present work of clarification proceeds by close analysis and reassessment of de Lubac's early writings and his works on natural knowledge of God (Connaissance and Chemins), as well as by reference to the hitherto unseen historical archive of the Jesuits in Rome, in order to chart the full trajectory of de Lubac's thought and to show that he may not be said to collapse the classical Thomist distinction between natural theology and revelation. The study concludes with an assessment of the success of de Lubac's thought in view of contemporary philosophy of religion debates, and questions the extent to which his natural theology might be relevant today. A dissertation submitted for the degree of Doctor of Philosophy in the Faculty of Divinity.
Data Types:
  • Document
Penalized likelihood approaches are widely used for high-dimensional regression. Although many methods have been proposed and the associated theory is now well developed, the relative efficacy of different approaches in finite-sample settings, as encountered in practice, remains incompletely understood. There is therefore a need for empirical investigations in this area that can offer practical insight and guidance to users. In this paper, we present a large-scale comparison of penalized regression methods. We distinguish between three related goals: prediction, variable selection and variable ranking. Our results span more than 2300 data-generating scenarios, including both synthetic and semisynthetic data (real covariates and simulated responses), allowing us to systematically consider the influence of various factors (sample size, dimensionality, sparsity, signal strength and multicollinearity). We consider several widely used approaches (Lasso, Adaptive Lasso, Elastic Net, Ridge Regression, SCAD, the Dantzig Selector and Stability Selection). We find considerable variation in performance between methods. Our results support a "no panacea" view, with no unambiguous winner across all scenarios or goals, even in this restricted setting where all data align well with the assumptions underlying the methods. The study allows us to make some recommendations as to which approaches may be most (or least) suitable given the goal and some data characteristics. Our empirical results complement existing theory and provide a resource to compare methods across a range of scenarios and metrics.
Data Types:
  • Document
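As a toy illustration of the kind of comparison described above, the sketch below fits three of the listed methods (Lasso, Ridge and Elastic Net, via scikit-learn) to a synthetic sparse linear model and reports prediction error and variable selection. The settings are arbitrary assumptions; this is not the paper's benchmarking code.

```python
# Minimal sketch: comparing a few penalized regression methods on synthetic sparse data,
# looking at both prediction error and variable selection. Settings are illustrative only.
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV, ElasticNetCV
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p, k = 200, 500, 10                      # samples, dimensions, true non-zero coefficients
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = rng.uniform(1.0, 2.0, size=k)    # sparse signal
y = X @ beta + rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Lasso": LassoCV(cv=5),
    "Ridge": RidgeCV(alphas=np.logspace(-3, 3, 13)),
    "Elastic Net": ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    selected = np.flatnonzero(np.abs(model.coef_) > 1e-8)
    true_pos = np.isin(selected, np.arange(k)).sum()
    print(f"{name:12s} test MSE = {mse:6.2f}  "
          f"selected = {selected.size:3d}  true positives = {true_pos}/{k}")
```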
Summary: Hypogonadotropic hypogonadism is characterised by insufficient secretion of pituitary gonadotropins resulting in delayed puberty, anovulation and azoospermia. When hypogonadotropic hypogonadism occurs in the absence of structural or functional lesions of the hypothalamus or pituitary gland, the hypogonadism is defined as idiopathic hypogonadotropic hypogonadism (IHH). This is a rare genetic disorder caused by a defect in the secretion of gonadotropin-releasing hormone (GNRH) by the hypothalamus or a defect in the action of GNRH on the pituitary gland. Up to 50% of IHH cases have identifiable pathogenic variants in the currently known genes. Pathogenic variants in the GNRHR gene encoding the GNRH receptor are a relatively common cause of normosmic IHH, but reports of pathogenic variants in GNRH1 encoding GNRH are exceedingly rare. We present a case of two siblings born to consanguineous parents who were found to have normosmic idiopathic hypogonadotropic hypogonadism due to homozygosity of a novel loss-of-function variant in GNRH1. Case 1 is a male who presented at the age of 17 years with delayed puberty and under-virilised genitalia. Case 2 is a female who presented at the age of 16 years with delayed puberty and primary amenorrhea. Learning points: IHH is a genetically heterogeneous disorder which can be caused by pathogenic variants affecting proteins involved in pulsatile gonadotropin-releasing hormone release, action, or both. Currently known genetic defects account for up to 50% of all IHH cases. GNRH1 pathogenic variants are a rare cause of normosmic IHH. IHH is associated with a wide spectrum of clinical manifestations. IHH can be challenging to diagnose, particularly when attempting to differentiate it from constitutional delay of puberty. Early diagnosis and gonadotrophin therapy can prevent negative physical sequelae and mitigate psychological distress with the restoration of puberty and fertility in affected individuals.
Data Types:
  • Document
Simulation can offer researchers access to events that can otherwise not be directly observed, and in a safe and controlled environment. How simulation can be used to study the improvement of quality and safety in healthcare, however, remains underexplored. We offer an overview of simulation-based research (SBR) in this context. Building on theory and examples, we show how SBR can be deployed and which study designs it may support. We discuss the challenges of simulation for healthcare improvement research and how they can be tackled. We conclude that using simulation in the study of healthcare improvement is a promising approach that could usefully complement established research methods.
Data Types:
  • Document
Crisis, pandemic, intellectual property, licensing, patent pledge, compulsory licensing, incumbents, new entrants, COVID-19
Data Types:
  • Document
The information gained by making a measurement, termed the Kullback-Leibler divergence, assesses how much more precisely the true quantity is known after the measurement was made (the posterior probability distribution) than before (the prior probability distribution). It provides an upper bound for the contribution that an observation can make to the total likelihood score in likelihood-based crystallographic algorithms. This makes information gain a natural criterion for deciding which data can legitimately be omitted from likelihood calculations. Many existing methods use an approximation for the effects of measurement error that breaks down for very weak and poorly measured data. For such methods a different (higher) information threshold is appropriate compared with methods that account well for even large measurement errors. Concerns are raised about a current trend to deposit data that have been corrected for anisotropy, sharpened and pruned without including the original unaltered measurements. If not checked, this trend will have serious consequences for the reuse of deposited data by those who hope to repeat calculations using improved new methods.
Data Types:
  • Document
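To make the information-gain criterion concrete: when both prior and posterior are one-dimensional Gaussians, the Kullback-Leibler divergence has a closed form. The sketch below is a generic illustration under assumed numbers and a hypothetical threshold, not the article's crystallographic implementation; it shows how a well-measured observation contributes far more information than a very weak one.

```python
# Illustrative sketch only: information gain of a measurement expressed as the
# Kullback-Leibler divergence from prior to posterior, for the simple case where
# both are one-dimensional Gaussians. Numbers and threshold are hypothetical.
import math


def kl_gain_bits(mu_prior: float, sigma_prior: float,
                 mu_post: float, sigma_post: float) -> float:
    """KL(posterior || prior) for two Gaussians, in bits."""
    nats = (math.log(sigma_prior / sigma_post)
            + (sigma_post**2 + (mu_post - mu_prior)**2) / (2.0 * sigma_prior**2)
            - 0.5)
    return nats / math.log(2.0)


if __name__ == "__main__":
    # A well-measured observation narrows the distribution a lot ...
    strong = kl_gain_bits(mu_prior=0.0, sigma_prior=1.0, mu_post=0.3, sigma_post=0.2)
    # ... while a very weak, poorly measured one barely changes it.
    weak = kl_gain_bits(mu_prior=0.0, sigma_prior=1.0, mu_post=0.02, sigma_post=0.98)
    threshold_bits = 0.01  # hypothetical cut-off for "contributes usable information"
    for label, gain in (("strong", strong), ("weak", weak)):
        verdict = "keep" if gain > threshold_bits else "could be omitted"
        print(f"{label}: information gain = {gain:.4f} bits -> {verdict}")
```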
Imputation is a powerful statistical method that is distinct from the predictive modelling techniques more commonly used in drug discovery. Imputation uses sparse experimental data in an incomplete dataset to predict missing values by leveraging correlations between experimental assays. This contrasts with quantitative structure–activity relationship methods that use only descriptor – assay correlations. We summarize three recent imputation strategies – heterogeneous deep imputation, assay profile methods and matrix factorization – and compare these with quantitative structure–activity relationship methods, including deep learning, in drug discovery settings. We comment on the value added by imputation methods when used in an ongoing project and find that imputation produces stronger models, earlier in the project, over activity and absorption, distribution, metabolism and elimination end points.
Data Types:
  • Document
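For one of the strategies mentioned above, matrix factorization, the core idea fits in a few lines: fit low-rank factors only to the observed entries of a sparse compound × assay matrix and read imputed values off the reconstruction. The sketch below uses synthetic data and arbitrary hyperparameters; it illustrates the general technique rather than any of the reviewed tools.

```python
# Minimal sketch of matrix-factorization imputation: approximate a sparse
# compound x assay matrix by low-rank factors fitted only on observed entries,
# then read missing values off the reconstruction. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_compounds, n_assays, rank = 100, 12, 4

# Synthetic low-rank "true" activity matrix with 70% of entries hidden.
truth = rng.standard_normal((n_compounds, rank)) @ rng.standard_normal((rank, n_assays))
observed = rng.random((n_compounds, n_assays)) < 0.3
Y = np.where(observed, truth, 0.0)              # zeros are placeholders for missing cells

# Fit factors U, V by gradient descent on squared error over observed entries only.
U = 0.1 * rng.standard_normal((n_compounds, rank))
V = 0.1 * rng.standard_normal((n_assays, rank))
lr, lam = 0.01, 0.1                             # learning rate and L2 penalty
for _ in range(2000):
    R = np.where(observed, U @ V.T - Y, 0.0)    # residuals on observed cells only
    U -= lr * (R @ V + lam * U)
    V -= lr * (R.T @ U + lam * V)

imputed = U @ V.T
rmse = np.sqrt(np.mean((imputed[~observed] - truth[~observed]) ** 2))
print(f"RMSE on held-out (missing) entries: {rmse:.3f}")
```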
Aim: The increased morbidity and mortality due to type 2 diabetes can be partly attributed to its delayed diagnosis. In developing countries, the cost and unavailability of conventional screening methods can be a setback. Use of random blood glucose (RBG) may be beneficial for testing large numbers of people at low cost and in a short time to identify persons at risk of developing diabetes. In this analysis, we aim to derive the values of RBG corresponding to the cut-off values of glycosylated hemoglobin (HbA1c) used to define prediabetes and diabetes. Methods: Based on their risk profile of developing diabetes, a total of 2835 individuals were screened for a large diabetes prevention study. They were subjected to HbA1c testing to diagnose prediabetes and diabetes. Random capillary blood glucose was also measured. Correlation of RBG with HbA1c was computed using a multiple linear regression equation. The optimal cut-off values for RBG corresponding to HbA1c values of 5.7% (39 mmol/mol) and ≥ 6.5% (48 mmol/mol) were computed using the receiver operating characteristic (ROC) curve. Diagnostic accuracy was assessed from the area under the curve (AUC) and by using Youden's index. Results: RBG showed significant correlation with HbA1c (r = 0.40, p < 0.0001). Using the ROC analysis, an RBG cut-off value of 140.5 mg/dl (7.8 mmol/L) corresponding to an HbA1c value of 6.5% (48 mmol/mol) was derived. A cut-off value could not be derived for HbA1c of 5.7% (39 mmol/mol) since the specificity and sensitivity for identifying prediabetes were low. Conclusion: Use of a capillary RBG value was found to be a simple procedure. The derived RBG cut-off value will aid in identifying people with undiagnosed diabetes. This preliminary screening will reduce the number of people who need to undergo more cumbersome and invasive diagnostic testing.
Data Types:
  • Document
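As a generic illustration of the ROC and Youden's-index procedure described above, the sketch below derives an optimal cut-off from simulated values; the group sizes and distributions are arbitrary assumptions, not the study's data.

```python
# Illustrative sketch of deriving a screening cut-off from an ROC curve using
# Youden's index (J = sensitivity + specificity - 1). The data here are simulated;
# they are not the study's measurements.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical random blood glucose values (mg/dl): non-diabetic vs diabetic groups.
rbg_negative = rng.normal(loc=115, scale=20, size=700)
rbg_positive = rng.normal(loc=165, scale=35, size=300)
rbg = np.concatenate([rbg_negative, rbg_positive])
has_diabetes = np.concatenate([np.zeros(700, dtype=int), np.ones(300, dtype=int)])

fpr, tpr, thresholds = roc_curve(has_diabetes, rbg)
auc = roc_auc_score(has_diabetes, rbg)

youden_j = tpr - fpr                         # sensitivity + specificity - 1
best = int(np.argmax(youden_j))
print(f"AUC = {auc:.2f}")
print(f"Optimal cut-off = {thresholds[best]:.1f} mg/dl "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```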