8956 results
  • The harnessing of steam and electricity in the mid-nineteenth century created a new world of possibilities in business, politics, and public life. In no realm was this transformation more momentous than in communications, an activity commonly understood at this time to embrace not only the trans-local circulation of information, but also the long-distance transportation of people and goods (Mattelart 1996, 2000). For the first time in world history, merchants could convey large quantities of goods overseas on a regular schedule and exchange information at a speed greater than a ship could sail. New organizations sprang up to take advantage of this "communications revolution," as this transformation has come to be known (John 1994). Some were public agencies; others were private firms. Each was shaped not only by the harnessing of new energy sources, but also by the institutional rules of the game. These rules defined the relationship of the state and the market, or what economic historians call the political economy. This chapter surveys this transformation, which we have come to view with fresh eyes since the commercialization of the Internet in the 1990s. It features case studies of two well-documented global communications organizations that originated in the nineteenth century - undersea cable companies and news agencies - supplemented with a brief discussion of other important global communications organizations: radio, telephony, and the mail. We have not surveyed film, a topic addressed by Peter Miskell's chapter in this Handbook.
    Data Types:
    • Document
  • Although the majority of scientists agree that we are facing unprecedented climate crises, higher education's engagement with environmental and sustainability problems is lacking. While the role of human behavior in climate change has been well established by science, these insights have yet to be adequately applied by citizens, exacerbating the consequent economic and social problems (such as inequity and poverty). In response to the imminent danger of climate change, calls have come for citizens to be mindful of their actions in order to reverse the trajectory of environmental and sustainability decline. In particular, policymakers have deemed higher education classrooms a promising site for equipping future generations of citizens to engage with sustainability. Formal teaching and learning surrounding sustainability-related subject matter, or Education for Sustainability (EfS), is the process of developing students' knowledge, attitudes, and behaviors toward sustainability. However, EfS is not being incorporated into the higher education curriculum with either the quantity or the quality necessary to steer society toward social change. The purpose of this dissertation study was therefore to explore the amount and effectiveness of EfS at an institution of higher education, and to analyze whether EfS was related to students' sustainability learning outcomes. Data collection took place at Michigan State University, a large public four-year institution. Students were surveyed at both the beginning and the end of the fall 2017 semester to measure changes over one academic semester. Guided by the frameworks of opportunity to learn, cognitively responsive teaching, teaching for sustainability, and transformative sustainability learning outcomes, data were analyzed with logistic and ordinary least squares regression and Structural Equation Modeling (SEM); a minimal sketch of the pre/post regression approach appears after this entry. Results showed that approximately two-thirds of participants reported having had the opportunity to learn about sustainability. On average, neither cognitively responsive teaching nor teaching-for-sustainability pedagogical approaches were employed to teach sustainability. Interestingly, though, when instructors surfaced students' prior knowledge about sustainability while teaching the subject, students' sustainability behaviors increased over the course of the semester. As such, this study illustrates the importance of drawing on students' prior knowledge when teaching them about sustainability in higher education.
    Data Types:
    • Document
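A hedged illustration of the pre/post regression style described in the entry above: the Python sketch below regresses a post-semester sustainability behavior score on its pre-semester baseline plus an indicator for whether the instructor surfaced students' prior knowledge. The column names, effect sizes, and data are all invented for illustration; the dissertation's actual instruments, logistic models, and SEM specification are not reproduced here.

```python
# Hypothetical sketch of a pre/post OLS analysis; data and names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    # pre-semester sustainability behavior on a 1-5 scale (assumption)
    "pre_behavior": rng.normal(3.0, 0.5, n),
    # 1 if the instructor surfaced students' prior knowledge (assumption)
    "prior_knowledge_surfaced": rng.integers(0, 2, n),
})
# Simulate the reported pattern: surfacing prior knowledge raises post scores.
df["post_behavior"] = (0.2 + 0.9 * df["pre_behavior"]
                       + 0.15 * df["prior_knowledge_surfaced"]
                       + rng.normal(0, 0.3, n))

# OLS of post-semester behavior on the baseline plus the teaching indicator;
# controlling for the baseline makes the indicator's coefficient read as an
# association with change over the semester.
model = smf.ols("post_behavior ~ pre_behavior + prior_knowledge_surfaced",
                data=df).fit()
print(model.summary())
```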
  • This dissertation examines how debates about racial susceptibility to asthma changed from 1880 to 1990 alongside a growing disparity in Black and white asthma morbidity. At the turn of the century, doctors believed asthma was exclusive to whites, attributing it to the stresses of urban life on their delicate constitutions. But by 1960, the first year the CDC's National Center for Health Statistics collected asthma data, the rate of asthma mortality in Blacks was nearly twice that in whites. After neglecting asthma in Black communities for sixty years, doctors scrambled to articulate its manifestation in urban communities of color. In those decades, the urban American landscape evolved dramatically: the Great Migration brought six million African Americans to Northern cities, but segregation and other racist policies created Black metropolises laden with dilapidated public housing, high rates of unemployment, and environmental toxins from garbage dumps, waste plants, train tracks, and bus depots. Growing racial tensions and expanded funding opportunities during the Civil Rights Movement spurred an overwhelming production of research on asthma and race, with explanations ranging from meteorological episodes, environmental pollution, and indoor allergens to biological, genetic, and even psychological factors. Although research focused primarily on psychosomatic, environmental, and genetic causes, Black activists and community leaders used asthma data to mobilize for social equality in housing, neighborhoods, health, and education. At the intersection of the history of medicine, social history, and environmental justice history, this dissertation examines the changing debates on racial susceptibility to asthma, the effects of the Great Migration and segregation on Black health, and both the tensions and the alliances between doctors, patients, and activists battling asthma in Black urban communities.
    Data Types:
    • Document
  • Both small molecules and antibodies are powerful tools for research into biological mechanisms and for therapeutics. The discovery of such molecules proceeds from two opposite starting points: specific targets and phenotypic screens. The first part of this thesis focuses on drug development starting from a specific target; the second part focuses on the identification of ferroptosis biomarkers through a phenotypic screen. The specific target highlighted in the first part is KRAS (Kirsten rat sarcoma viral oncogene homolog), the most commonly mutated oncogene in human pancreatic, colorectal, and lung cancers. The high prevalence of KRAS mutations and its prominent role in many cancers make it a potentially attractive drug target; however, it has been difficult to design small molecule inhibitors of mutant K-Ras proteins. Here, we identified a putative small molecule binding site on K-RasG12D, which we have termed the P110 site (for its adjacency to proline 110), using computational analyses of the protein structure. Using a combination of computational and biochemical approaches, we then found evidence that one compound, the K-Ras Allosteric Ligand KAL-21404358, binds the P110 site of K-RasG12D. The phenotypic screen used in the second part of this thesis focuses on ferroptosis, a form of regulated cell death driven by the iron-dependent accumulation of polyunsaturated-fatty-acid-containing phospholipids (PUFA-PLs). Currently, there is no way to selectively stain ferroptotic cells in tissue sections to characterize relevant models and diseases. To circumvent this problem, we immunized mice with membranes from diffuse large B cell lymphoma (DLBCL) cells treated with piperazine erastin (PE) and screened the resulting monoclonal antibodies. The results suggest that, for the first time, cells undergoing ferroptosis can be detected in human tissue sections. In summary, these two projects illustrate how molecular screening and design, starting from either a specific target or a phenotypic screen, aid in drug and biomarker development.
    Data Types:
    • Document
  • How do technologies that remove warfighters from the front lines affect the frequency and intensity of military confrontations between states? Many scholars and policymakers fear that weapons that reduce the risks and costs of war – in blood and treasure – will lead states to resort to force more frequently during crises, destabilizing the international security environment. These concerns have featured prominently in debates surrounding the proliferation and use of remote warfighting technologies, such as drones. This project sets out to evaluate whether and how drones affect crisis escalation. Specifically, do drones allow decisionmakers to deploy military forces more frequently during interstate crises? Once deployed, how do these systems affect escalation dynamics? I argue that drones can help control escalation, raising questions about scholarly theories that suggest the world is more dangerous and less stable when technology makes conflict cheaper and less risky. At the core of this project is a theory of technology-enabled escalation control. The central argument is that technologies like drones that remove friendly forces from the battlefield may lead states to use force more frequently, but decrease the likelihood of escalation when used in lieu of inhabited platforms. More specifically, these technologies lower the political barriers to initiating military operations during crises, primarily by eliminating the risk of friendly force casualties and the associated domestic political consequences for launching military operations. At the same time, removing personnel from harm's way may reduce demand for escalatory reprisals after remotely operated systems are lost to hostile action. Drones can also help to mitigate escalatory spirals by collecting intelligence that overcomes the information asymmetries that often contribute to armed conflict, helping facilitate more measured decision-making and tailored targeting of enemy forces. By more fully considering how technology affects escalatory dynamics after the initial use of force, technology-enabled escalation control theory advances our understanding of the link between technology and conflict. I test the theory using a multi-method approach that combines case studies with original experiments embedded in surveys fielded on public and military samples. The dissertation also introduces a new method for international relations research: experimental manipulations embedded in wargames with military participants. In Chapters 1 and 2, I define the concept of crisis escalation and review the literature that examines the effect of technology on escalation and conflict dynamics. I then introduce the theory of technology-enabled escalation control and outline the four mechanisms that undergird it – increased initiation, tempered/tailored targeting, restrained retaliation, and amplified aggression. Each of these hypothesized mechanisms describes a way in which emerging technologies can prevent crises from escalating into broader or more intense conflicts. Chapter 3 describes each component of the multi-method research design that I use to test the theory in Chapters 4 through 7. Chapter 4 uses experiments embedded in surveys and wargames to assess whether and how drones allow states to more frequently initiate military operations. Chapter 5 tests whether drones enable decisionmakers to control escalation by restraining retaliation after attacks on a state's drones.
Chapters 6 and 7 test the theory in the context of U.S. drone use during the Cold War and Israeli drone use from the 1960s through the late 2010s. The findings of these empirical tests provide strong support for technology-enabled escalation control. In Chapter 8, I conclude with a summary of the analysis and test the generalizability of the theory beyond state use of drones. I find that tenets of technology-enabled escalation control explain the escalation dynamics associated with U.S. cyber operations against North Korea and with Hezbollah's use of drones against Israel and during the Syrian Civil War. The chapter also maps out pathways for future research and identifies policy implications. My findings suggest that the growing proliferation of drones will increase the frequency of military confrontations during crises, yet these confrontations are unlikely to escalate. Even though drones may help control escalation, clearer doctrine, rules of engagement, and international agreements governing their use will help to further avoid crisis escalation and conflict.
    Data Types:
    • Document
  • This dissertation investigates the origin of the architectural typology of the Renaissance palace as it emerged in Florence between the end of the fourteenth and the beginning of the fifteenth centuries. This was a period characterized by a dramatic shift in domestic architecture, mirroring a parallel transformation of Florentine society under the political regime of the Albizi oligarchy. This study fills a clear gap in existing scholarship, comprehensively addressing the private palatial architecture built in Florence in the sixty years before the construction of Palazzo Medici in 1446. Three palaces and their family archives have been studied for the first time: Palazzo Alessandri (built in the 1370s), Palazzo da Uzzano-Capponi (built circa 1411), and Palazzo Busini-Bardi (built before 1425). Their patrons, all pairs of brothers, used the size and urban prominence of their new residences to assert their political and social dominance over the city. They eliminated all commercial functions from their palaces and organized the space around a central courtyard with loggias, multiplying the dedicated rooms for the different public and private functions of the household. These palaces represent a period of transition in domestic architecture that inaugurated a new and successful domestic typology, one that changed little over at least the following three centuries. Built in a period of rising individuality, these private buildings, together with the ones that followed, helped establish the modern concepts of the apartment and of family privacy.
    Data Types:
    • Document
  • With Moore's law grinding to a halt, accelerators are one of the ways that new silicon can improve performance, and they are already a key component in modern datacenters. Accelerators are integrated circuits that implement parts of an application with the objective of higher energy efficiency compared to execution on a standard general purpose CPU. For any particular workload there are many possible accelerator designs, spanning a wide range of performance and of costs such as area or power. Exploring these design choices, called Design Space Exploration (DSE), is a crucial step in finding the most efficient accelerator design, the one that most reduces total cost of ownership. This work aims to improve the design space exploration phase for accelerators and to avoid pitfalls in the process. This dissertation supports the thesis that early design choices – including the level of specialization – are critical for accelerator development and therefore require benchmarks reflective of production workloads. We present three studies that support this thesis. First, we show how to benchmark datacenter applications by creating a benchmark for large video sharing infrastructures. Then, we present two studies focused on accelerators for analytical query processing: the first analyzes the impact of Network-on-Chip specialization, while the second analyzes the impact of the level of specialization. The first part of this dissertation introduces vbench: a video transcoding benchmark tailored to the growing video-as-a-service market. Video transcoding is not accurately represented in current computer architecture benchmarks such as SPEC or PARSEC. Despite posing a large computational burden for cloud video providers, such as YouTube and Facebook, it is not included in cloud benchmarks such as CloudSuite. Using vbench, we found that the microarchitectural profile of video transcoding is highly dependent on the input video, that SIMD extensions provide limited benefits, and that commercial hardware transcoders impose tradeoffs that are not ideal for cloud video providers. Our benchmark should spur architectural innovations for this critical workload. This work shows how to benchmark a real-world warehouse-scale application and the possible pitfalls of mischaracterization. When considering accelerators for the different, but no less important, application of analytical query processing, design space exploration plays a critical role. We analyzed the Q100, a class of accelerators for this application domain, using TPC-H as the reference benchmark. We found that not only do the hardware computational blocks have to be tailored to the requirements of the application, but the Network on Chip (NoC) can be specialized as well. We developed an algorithm that produces more effective Q100 designs by tailoring the NoC to the communication requirements of the system. Our algorithm produces designs that are Pareto-optimal with respect to standard NoC topologies (a minimal sketch of the Pareto-filtering step appears after this entry). This shows that NoC specialization is highly effective for accelerators and should be an integral part of design space exploration for large accelerator designs. The third part of this dissertation analyzes the impact of the level of specialization, e.g. using an ASIC or a Coarse Grain Reconfigurable Architecture (CGRA) implementation, on accelerator performance. We developed a CGRA architecture capable of executing SQL query plans.
We compare this architecture against the Q100, an ASIC that targets the same class of workloads. Despite being less specialized, this programmable architecture shows performance comparable to the Q100 given the same area and power budget. Resource usage explains this counterintuitive result: a well-programmed, homogeneous array of resources can harness silicon for the workload at hand more effectively. This suggests that a balanced accelerator research portfolio must include alternative programmable architectures – and their software stacks.
    Data Types:
    • Document
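The Q100 study above reports NoC designs that are Pareto-optimal with respect to standard topologies. As a hedged illustration of the Pareto-filtering step common to such design space exploration flows, the Python sketch below keeps only the non-dominated points among invented design candidates with two costs, runtime and area; it is not the dissertation's NoC-tailoring algorithm, and the design names and numbers are assumptions.

```python
# Minimal Pareto-front filter over hypothetical accelerator design points.
from dataclasses import dataclass

@dataclass
class Design:
    name: str
    runtime_ms: float  # lower is better
    area_mm2: float    # lower is better

def pareto_front(designs):
    """Keep designs that no other design beats on both runtime and area."""
    front = []
    for d in designs:
        dominated = any(
            o.runtime_ms <= d.runtime_ms and o.area_mm2 <= d.area_mm2
            and (o.runtime_ms < d.runtime_ms or o.area_mm2 < d.area_mm2)
            for o in designs
        )
        if not dominated:
            front.append(d)
    return front

# Invented candidates: standard topologies plus a hypothetical tailored NoC.
candidates = [
    Design("mesh-4x4", 12.0, 8.0),
    Design("ring-16", 15.0, 5.0),
    Design("tailored-noc", 10.0, 6.0),
    Design("crossbar", 11.0, 14.0),
]
for d in pareto_front(candidates):
    print(d)  # prints ring-16 and tailored-noc; the others are dominated
```

In a real DSE loop this filter runs over thousands of simulated or modeled design points, and the surviving front is what gets compared against the standard topologies.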
  • Large-scale neuroimaging data are becoming increasingly available, providing a rich source with which to study neurological conditions. In this thesis, I demonstrate the utility of large-scale neuroimaging as it applies to Alzheimer's disease (AD) and normal aging, using univariate parametric mapping, regional analysis, and advanced machine learning. Specifically, this thesis covers: 1) validation and extension of prior studies using large-scale datasets; 2) AD diagnosis and normal aging evaluation empowered by large-scale datasets and advanced deep learning algorithms; and 3) enhancement of cerebral blood volume (CBV) fMRI utility with a retrospective CBV-fMRI technique. First, I demonstrated the utility of large-scale datasets for validating and extending prior studies using univariate analytics. I presented a study localizing AD-vulnerable regions more reliably and with better anatomical resolution using data from more than 350 subjects. Following a similar approach, I investigated the structural characteristics of healthy APOE ε4 homozygous subjects screened from a large-scale community-based study. To study the neuroimaging signatures of normal aging, we performed a large-scale joint CBV-fMRI and structural MRI study covering ages from the 20s through the 70s, and a structural MRI study of normal aging covering the full age span, with the older group screened from a large-scale clinic-based study to ensure no evidence of AD, using both longitudinal follow-up and cerebrospinal fluid (CSF) biomarker evidence. Second, I performed deep learning neuroimaging studies for AD diagnosis and normal aging evaluation, and investigated the regionality associated with each task. I developed an AD diagnosis method using a 3D convolutional neural network model trained and evaluated on ~4,600 structural MRI scans (a minimal 3D CNN sketch appears after this entry) and further investigated a series of novel regionality analyses. I then extensively studied the utility of the structural MRI summary measure derived from the deep learning model for prodromal AD detection. This study constitutes a general analytic framework, which was followed to evaluate normal aging by performing deep learning-based age estimation in a cognitively normal population using more than 6,000 scans. The deep learning neuroimaging models classified AD and estimated age with high accuracy, and also revealed regional patterns conforming to neuropathophysiology. The deep learning-derived MRI measure demonstrated potential clinical utility, outperforming other AD pathology measures and biomarkers. In addition, I explored the utility of deep learning on positron emission tomography (PET) data for AD diagnosis and regionality analyses, further demonstrating the broad utility and generalizability of the method. Finally, I introduced a technique enabling retrospective CBV generation from clinical contrast-enhanced scans. The derivation of meaningful functional measures from such clinical scans is only possible through calibration to a reference, which was built from the largest collection of research CBV-fMRI scans from our lab. This method was validated in an epilepsy study and demonstrated the potential to enhance the utility of CBV-fMRI by enriching the CBV-fMRI dataset. The technique is also applicable to AD and normal aging studies, and potentially enables deep learning-based analytic approaches to be applied to CBV-fMRI with pipelines similar to those used in structural MRI.
Collectively, this thesis demonstrates how mechanistic and diagnostic information on brain disorders can be extracted from large-scale neuroimaging data, using both classical statistical methods and advanced machine learning.
    Data Types:
    • Document
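The AD-diagnosis work above trains a 3D convolutional neural network on structural MRI volumes. The PyTorch sketch below shows the general shape of such a classifier; the layer sizes, 96-voxel input cube, and two-class output are assumptions for illustration, not the dissertation's actual architecture or training setup.

```python
# Minimal 3D CNN classifier sketch for volumetric MRI input (assumed shapes).
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (N, 64, 1, 1, 1)
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (N, 1, D, H, W) preprocessed MRI volume
        return self.classifier(self.features(x).flatten(1))

model = Simple3DCNN()
dummy = torch.randn(2, 1, 96, 96, 96)  # two fake scans
print(model(dummy).shape)  # torch.Size([2, 2]): AD vs. control logits
```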
  • This online study evaluated the efficacy of an e-health avatar/cartoon video with women of color who had been living with type 2 diabetes for a minimum of 2 years. After considerable dropout, the sample declined from n=149, as 50.3% (n=75) did not complete the survey. After eliminating those who did not watch “all” or “most” of the video, the sample declined further. The final sample (n=64) was 31.3% (n=20) U.S.-born, 100% (n=64) female, 79.7% (n=51) Black, and 12.5% (n=8) Asian, with a mean age of 49.28 (min=22, max=79, SD=13.24). Using backwards stepwise regression, higher post-video global self-efficacy to perform the AADE7 Self-Care Behaviors™ was significantly predicted by a higher level of coping self-efficacy for stopping unpleasant emotions and thoughts (B=0.131, p=.001) and higher age (B=0.026, p=.002), with R2=.331 (adjusted R2=.298; 29.8% of the variance explained). However, less emphasis should be placed on findings from this controversial regression approach, given the small sample size. Instead, because this is an online evaluation of a brief online video intervention, the important results are the pre- versus post-video paired t-tests (a minimal sketch appears after this entry). These suggested that engagement in the brief online intervention of watching the new video was associated with a significant increase in type 2 diabetes self-management knowledge for performing the AADE7 Self-Care Behaviors™, and significant increases in stage of change, self-efficacy, and motivation to perform the 7 diabetes self-management behaviors. Finally, the mixed methods data underscored the value of the study's quantitative findings: 89.1% (n=57) would recommend the video to other women of color living with type 2 diabetes. Reflecting how the video intervention was a true innovation in integrating a brief form of motivational interviewing with relapse prevention, sample emergent themes included: the video was motivational; and the video covered relapse prevention and problem solving using a menu of options. There is value in ensuring exposure to an e-health avatar video on the AADE7 Self-Care Behaviors™ that also integrates the evidence-based approaches of motivational interviewing and relapse prevention, in order to meet the health education needs of those diagnosed with type 2 diabetes.
    Data Types:
    • Document
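The evaluation above rests on pre- versus post-video paired t-tests. As a hedged illustration, the SciPy sketch below runs a paired t-test on invented pre/post self-efficacy ratings for n=64 participants; the scale, scores, and effect size are assumptions, not the study's data.

```python
# Hypothetical paired t-test on simulated pre/post ratings (n=64, as above).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 64

# Invented 1-5 self-efficacy ratings before and after watching the video.
pre = rng.normal(3.2, 0.6, n)
post = pre + rng.normal(0.4, 0.5, n)  # simulate a post-video increase

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"mean change = {np.mean(post - pre):.2f}")
```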
  • Single-particle cryogenic electron microscopy (cryo-EM) has become a powerful mainstay of high-resolution structural biology thanks to advances in hardware, software, and sample preparation technology. In my thesis, I used this technique to unravel the function of various challenging biological macromolecules. My first focus was bacterial ribosome biogenesis: understanding how bacteria assemble their ribosomes. Ribosomes are the factories of the cell, responsible for manufacturing all proteins. Ribosomes themselves are huge, with the bacterial version made of 52 proteins and 4566 RNA nucleotides. How these components assemble has long been a mystery. Early groundbreaking work sketched out a biogenesis pathway using purified components in vitro – but under non-physiological conditions. We sought to understand how the bacterial ribosome – specifically the large 50S subunit – is built inside the cell. To achieve this, we engineered a conditional knock-out bacterial strain lacking one specific ribosomal protein (L17). This caused the cells to accumulate incomplete intermediates along the 50S biogenesis pathway, which we purified and examined with mass spectrometry and single-particle cryo-EM. Two major hurdles arose in this project. First, the biogenesis intermediates exhibited a preferred orientation when vitrified for cryo-EM analysis: instead of showing the many different views required for reconstruction of the 3D structure, the intermediates adopted only one view on the cryo-EM grid. To overcome this problem, we engineered a method to induce additional views in the microscope by tilting the stage. Using another test protein that also exhibited preferred orientation (hemagglutinin), we optimized and characterized this new tilt methodology and showed that it is generally applicable to overcoming preferred orientation, regardless of specimen type. We also created a software tool, called 3DFSC (3dfsc.salk.edu), for other microscopists to calculate the degree of directional anisotropy in their structures due to preferred orientation (a minimal sketch of the underlying Fourier shell correlation appears after this entry). This tilt strategy finally enabled the structural elucidation of our 50S intermediates. The second challenge in the project was the large amount of heterogeneity present in the sample. Through hierarchical 3D classification schemes using the latest software tools, we obtained 14 different 50S intermediate structures, all from imaging a single cryo-EM grid. By analyzing the missing components of each intermediate, and corroborating these observations with mass spectrometry data, we outlined the first in vivo 50S assembly pathway and showed that ribosome assembly occurs stepwise and along parallel pathways. My second focus was on pushing the resolution limits of single-particle cryo-EM using adeno-associated virus (AAV) serotype 2 homogeneous virus-like particles (VLPs) that lack DNA. Exploiting several technical advances to improve resolution, including the use of gold grids, per-particle CTF refinement, and correction for Ewald sphere curvature, we obtained a 1.86 Å resolution reconstruction of the AAV2L336C variant VLP, the highest-resolution icosahedral virus reconstruction solved by single-particle cryo-EM to date. Using our structure, we demonstrated the improvements from Ewald sphere curvature correction and shed light on the mechanistic basis for why the L336C mutation results in defects in genome packaging and infectivity compared to WT viral particles.
My third focus was understanding small membrane proteins involved in infectious diseases. Membrane proteins are challenging to work with because, unlike soluble proteins, they must be extracted from the lipid bilayer for study. Infectious diseases impose a huge burden on society, with the top three infectious agents accounting for 2.7 million deaths in 2016. The third most deadly infectious disease is malaria, a mosquito-borne parasitic disease that kills 450,000 people annually. One drug used early on for treating malaria was chloroquine, but its usefulness waned due to the development of resistance. Chloroquine resistance is mediated by the chloroquine resistance transporter (PfCRT). Although PfCRT is small (49 kDa) for single-particle cryo-EM, we solved its structure by using fragment antibody technology to add mass and help with image alignment and 3D reconstruction. The 3.2 Å structure resembles other drug metabolite transporters, and the chloroquine resistance mutations map to a ring around the central cavity, suggesting this central pore as the drug binding site. Tuberculosis (TB) is the deadliest of the three, above malaria and HIV/AIDS, responsible for 1.3 million deaths. In TB, a common antibiotic target is the bacterium's cell wall synthesis machinery. One family of such enzymes is the arabinosyltransferases, which synthesize the critical arabinose sugars. Using single-particle cryo-EM, we solved two high-resolution structures of one such essential enzyme, AftD. Because of the protein's low yield, a picoliter automated sample dispensing robot was crucial for the initial cryo-EM analysis. We then performed mutagenesis studies in M. smegmatis, a TB model organism, which uncovered the critical amino acid residues in the active site and determined that a bound acyl carrier protein is likely involved in allosteric inhibition of AftD's active site. Another member of the family, EmbB, is the target of a widely used frontline TB drug called ethambutol. We have solved high-resolution structures of the apo and putative drug-bound states of EmbB, allowing us to map out, for the first time, both the active site and the drug-resistance mutations of this crucial enzyme. The atomic structures of the functional pockets of mycobacterial AftD and malarial PfCRT will hopefully enable structure-based drug design to improve existing drugs or even develop new treatments against these infectious maladies. In conclusion, the continual and breathtaking improvements in single-particle cryo-EM methodology have been instrumental in enabling the elucidation of the aforementioned biological macromolecules – from ribosome biogenesis intermediates and the AAV2 vehicle to the Plasmodium drug resistance transporter and mycobacterial glycosyltransferases – structures that help explain biological function.
    Data Types:
    • Document
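The 3DFSC tool mentioned in the entry above extends the standard Fourier shell correlation (FSC) into a directional measure of anisotropy. As a hedged illustration of the underlying quantity only, the NumPy sketch below computes a plain global FSC between two half-maps; this is the textbook formulation, not the 3DFSC implementation, and the volumes are synthetic.

```python
# Global FSC between two half-maps: shell-wise normalized cross-correlation
# of their Fourier transforms (textbook formulation; volumes are synthetic).
import numpy as np

def fsc(map1, map2, n_shells=32):
    f1, f2 = np.fft.fftn(map1), np.fft.fftn(map2)
    n = map1.shape[0]
    freq = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    radius = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.linspace(0.0, 0.5, n_shells + 1)  # up to the axis Nyquist
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = (radius >= lo) & (radius < hi)
        num = np.sum(f1[shell] * np.conj(f2[shell])).real
        den = np.sqrt(np.sum(np.abs(f1[shell])**2) * np.sum(np.abs(f2[shell])**2))
        curve.append(num / den if den > 0 else 0.0)
    return np.array(curve)

# Two noisy copies of the same synthetic volume stay correlated across shells.
base = np.random.default_rng(2).normal(size=(64, 64, 64))
half1 = base + np.random.default_rng(3).normal(scale=0.5, size=base.shape)
half2 = base + np.random.default_rng(4).normal(scale=0.5, size=base.shape)
print(np.round(fsc(half1, half2)[:8], 3))
```

3DFSC's directional version restricts the same correlation to cones of Fourier-space directions, which is how preferred orientation shows up as resolution anisotropy.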