The increased adoption of electronic medical record (EMR) systems and the emergence of clinical data warehouses that integrate data from diverse sources have energized clinical research and prompted the biomedical informatics community to envision and implement efficient, effective tools to facilitate the conduct of research. Data warehousing, a valuable platform for providing clinical data for secondary use, is one such tool, traditionally built on relational database models. Though relational models have proved solid in data management applications across industries, the complexity and variety of clinical data require an agile technical environment that responds to evolving research data needs. A property graph model's data connectedness, data exploration, and visualization capabilities make it a strong candidate for representing and managing clinical knowledge. This study uses acute kidney injury (AKI), an important and often overlooked disease process, to represent clinical data extracted from an institutional data warehouse in a graph model. The resulting AKI graph model, which consists of entities (nodes) connected through meaningful relationships (edges), provides easy access for exploring and viewing query results in either graphical or tabular format. The AKI model, conceptually a data lake, is horizontally scalable and can integrate with other graph-based clinical domains of knowledge. Moreover, the AKI graph schema provides the right structure for a Bayesian network, which supports a Bayesian inference model to estimate AKI patients' outcome probabilities, and also suggests a Markov chain transition model to predict non-AKI patients' probabilities of requiring dialysis within a 48-hour window.
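The Markov chain idea can be sketched in a few lines of Python. In this illustration the state names, toy sequences, and 24-hour time step are all assumptions for demonstration, not taken from the AKI model itself: a transition matrix is estimated from observed state sequences, and the probability of reaching a dialysis state within two transitions (roughly a 48-hour window) falls out of squaring that matrix.

```python
import numpy as np

# Hypothetical states and sequences (assumptions for illustration only).
STATES = ["no_aki", "aki_stage1", "aki_stage2", "dialysis"]
IDX = {s: i for i, s in enumerate(STATES)}

def estimate_transitions(sequences):
    """Count observed state-to-state transitions and row-normalize them."""
    counts = np.zeros((len(STATES), len(STATES)))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[IDX[a], IDX[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # avoid dividing by zero for unvisited states
    return counts / row_sums

# Toy patient-state sequences, one state per (assumed) 24-hour interval.
sequences = [
    ["no_aki", "no_aki", "aki_stage1", "aki_stage2"],
    ["no_aki", "aki_stage1", "aki_stage1", "dialysis"],
    ["no_aki", "no_aki", "no_aki", "no_aki"],
]
P = estimate_transitions(sequences)

# Probability of reaching "dialysis" within two 24-hour steps (~48 hours),
# starting from "no_aki": the (no_aki, dialysis) entry of P squared.
p48 = np.linalg.matrix_power(P, 2)[IDX["no_aki"], IDX["dialysis"]]
```

In the actual model, the transition counts would come from patient-state relationships stored in the graph rather than hand-written sequences.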
Continuous pharmaceutical manufacturing offers advantages in cost, efficiency, and acceleration of process development, particularly in a time of increasing research, development, and production costs. The science-based approach to process development promoted by the Quality by Design paradigm requires incorporating the effect of variability in material properties and process conditions on product properties. This has increased the focus on developing detailed, complex models that capture phenomena across several scales, which in turn increases computational expense. These limitations are exacerbated when several such models are integrated to simulate a continuous manufacturing process. This dissertation explores several modeling methods for the development of hybrid models of particulate processes. A milling operation is used as a case study for hybrid model development that supports the Quality by Design approach while addressing computational limitations. Several unit operation models are integrated to simulate a wet granulation continuous manufacturing process, yielding a computationally expensive model with many variables. This dissertation also establishes efficient methodologies for using these high-dimensional, computationally expensive integrated process models to obtain the space in which the process needs to operate, thus supporting continuous pharmaceutical process development.
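One common way to tame a computationally expensive process model is sketched below under stated assumptions: the `expensive_model` function is a stand-in quadratic, not the dissertation's milling or granulation model. The idea is to fit a cheap surrogate to a handful of expensive evaluations and then screen a dense grid of candidate settings for the region that meets a quality specification.

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly simulation: quality attribute vs. process setting."""
    return 1.0 - 3.0 * (x - 0.6) ** 2

# Sample the "expensive" model sparsely and fit a quadratic surrogate.
x_train = np.linspace(0.0, 1.0, 6)
y_train = expensive_model(x_train)
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=2))

# Screen a dense grid with the cheap surrogate: the operating space is
# where the predicted quality attribute meets the specification (>= 0.8).
x_grid = np.linspace(0.0, 1.0, 1001)
feasible = x_grid[surrogate(x_grid) >= 0.8]
```

In practice the surrogate would be higher-dimensional (e.g., Gaussian process or response surface models) and the feasibility check would involve multiple quality attributes, but the screening pattern is the same.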
Drawing from a multidisciplinary approach, I outline the importance of visual art in a democracy, specifically in the United States. Art, unlike propaganda, allows the public to discuss political agendas through a visual medium. Art can be used as a tool to articulate the public's political wants and needs, thereby acting as an agent in a democratic government. Although the relationship between art and democracy predates 1945, this work focuses solely on art in the modern and postmodern era, a period that experienced great political activism and emerging art forms. The Abstract Expressionism and Pop Art movements of the 1950s drew direct inspiration from the American economy, both domestically and internationally. In both instances the government utilized art inspired by the people to support political agendas. The art of the 1960s sparked discussion surrounding civil rights issues. America faced inequality not only of race but also of sex. Artists challenged these social norms through art, which led to changes in law and policy eliminating discrimination based on sex and race. On a larger scale, art tests American democracy in the national and international arenas. Artists can use the visual medium to send a message to the government, and the government has the option to respond. Visual art is necessary for a democracy to function rightly, as it intends to influence, critique, and propel civic agendas and priorities by and for the general public. Democracy is not meant to stay static. On the contrary, for a democracy to function, it must continue to grow and adapt to the needs of the people. Art is a powerful tool that enables a democracy to do so.
In this research work, silicon microchannels are studied through computational analysis of their heat transfer and fluid flow characteristics. Different designs of silicon microchannels were modeled and simulated in ANSYS FLUENT, evaluating thermal distributions for various boundary conditions. The operating parameters were inlet velocity, inlet temperature, and geometric configuration, under a constant surface heat flux condition. Microchannel cooling enhances heat transfer coefficients, thus allowing a high power capacity. For a high heat-dissipating system, liquids provide better efficiency and capacity than air as a coolant; hence water is used as the working medium in the microchannels. Rectangular geometry is preferred for the microchannel design because it suits the fabrication of silicon substrates. For efficient design, the geometric configurations considered in the modeling are varied from 100 x 50 µm to 500 x 200 µm, with microchannel lengths ranging between 1 mm and 4.5 mm. The configurations considered were straight, U-shaped, and serpentine microchannels. Straight microchannels exhibited the best fluid flow characteristics. U-shaped microchannels had an increased pressure drop in the channels but showed better heat transfer characteristics than straight microchannels. Serpentine microchannels were the most effective in terms of heat transfer characteristics. The straight microchannel showed a good balance of heat transfer and fluid flow characteristics; hence variations of it were examined for improved cooling performance. Based on the analysis, enhanced heat transfer rates come at the cost of a massive pressure drop.
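For a rough sense of why smaller channels help, the back-of-the-envelope sketch below (assumed property values and a constant-Nusselt-number laminar approximation, not results from the ANSYS FLUENT simulations) compares heat transfer coefficients for the smallest and largest rectangular cross-sections mentioned above.

```python
# Assumed values for illustration: water as coolant, fully developed
# laminar flow, constant-heat-flux Nusselt number (circular-duct value).
K_WATER = 0.6  # thermal conductivity of water, W/(m·K)
NU = 4.36      # Nu for fully developed laminar flow, constant heat flux

def hydraulic_diameter(width_um, height_um):
    """D_h = 4A/P for a rectangular cross-section, returned in meters."""
    w, h = width_um * 1e-6, height_um * 1e-6
    return 4.0 * (w * h) / (2.0 * (w + h))

def heat_transfer_coeff(width_um, height_um):
    """h = Nu * k / D_h: smaller channels give a larger coefficient."""
    return NU * K_WATER / hydraulic_diameter(width_um, height_um)

h_small = heat_transfer_coeff(100, 50)   # 100 x 50 um channel
h_large = heat_transfer_coeff(500, 200)  # 500 x 200 um channel
```

Because h scales as 1/D_h in this approximation, the 100 x 50 µm channel yields a heat transfer coefficient several times larger than the 500 x 200 µm channel, consistent with the trend the simulations explore.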
The developmental sequence of speech motor control has yet to be directly examined in the emergence of spoken language. Contemporary accounts of the emergence of spoken language traditionally address speech motor control as part of the maturational process. The present study investigates the developmental sequence of speech motor control in the transition from babble to word productions. Speech motor control of the jaw, lips, and tongue was observed longitudinally from nine to 16 months of age in five English-speaking children. Predictions of speech motor control were evaluated for spontaneous vocalizations from the production of babble to referential words. Results confirmed that speech sound productions in babble and words at the onset of spoken language are controlled with the child's available motor skills. As predicted, the jaw was the first of the three articulators to show independent graded control in the emergence of word productions. Lip control was observed second, as the child began producing referential words. At 16 months there was no evidence of independent tongue control in the production of babble, words, or referential words. These findings indicate that speech production at the onset of spoken language is enabled by the motor control available to the child. The results of this study add a variable to be considered in theoretical perspectives that attempt to explain the onset of spoken language. Early developmental milestones of the speech motor system have yet to be identified in the emergence of spoken language. The results of this study identify the motor milestones for the jaw and the lips at the onset of word productions. These findings provide a first step in the investigation of speech motor control and a basis for investigating therapeutic approaches that consider these skills.
Background: Stroke is a leading cause of long-term disability in adults. Functional use of the upper limb, specifically the hand, is essential for independent living. Despite important research efforts, many individuals do not regain long-term upper limb function after sustaining a stroke. Collectively, the work presented here addresses key issues in stroke rehabilitation for the upper limb, namely: evaluation of a novel training protocol for persons with severe impairment, determining the effects of a higher dose of upper limb training initiated in the acute and early sub-acute period post-stroke, and assessing the validity and effectiveness of two influential prediction models for stroke. Methods: All studies were initiated within the first month post-stroke to take advantage of the unique neuroplasticity occurring at that time and were conducted on an inpatient rehabilitation unit. The first study was a longitudinal study that included five individuals with severe hand paresis post-stroke. This study evaluated the feasibility and outcomes of a priming method that utilized mirror visual feedback and contralateral passive range of motion combined with a force modulation task in persons with severe hand impairment. The outcomes included the Upper Extremity Fugl-Meyer Assessment (UEFMA), the Action Research Arm Test (ARAT), maximum pinch force, and bilateral maps of cortical reorganization via transcranial magnetic stimulation (TMS). The second study was a non-randomized, two-arm intervention study that evaluated the benefits of eight additional hours of intensive upper limb training in individuals with moderate arm paresis. There were seven subjects in the Virtual Reality (VR)/robotic treatment group and six in the control group. Outcomes included the Wolf Motor Function Test, the UEFMA, wrist active range of motion (AROM), and maximum pinch force, as well as bilateral maps of cortical organization using TMS.
Lastly, the third study evaluated the validity and methodology of two influential prediction models for stroke: the Proportional Recovery Rule and the Predicted Recovery Potential (PREP2) algorithm. Results: For the first study, results showed the feasibility of performing this training so early after stroke, as well as clinically significant long-term gains on all clinical measures in this group. However, without a control group it was not possible to determine how much of these gains came from the additional training rather than from biological recovery combined with the usual care the participants were concurrently receiving. The second study showed the feasibility of performing intense hand-focused upper limb training and multiple clinical and neurophysiologic tests within the first month post-stroke. Importantly, it also showed that an extra eight hours of intensive VR/robotic-based upper limb training led to significantly greater long-term gains on impairment measures compared to usual care. For the third study, trends showed that additional training initiated within one month post-lesion may allow for greater-than-predicted proportional recovery in persons with functional corticospinal tracts. The study results also showed that further evaluation of the method used to determine the presence of motor evoked potentials (an indicator of corticospinal tract function) for the PREP2 algorithm is justified. Conclusion: Although preliminary in nature, the results presented here may be useful for the future development of effective upper limb training protocols for rehabilitation in the acute and early sub-acute periods for persons at all levels of impairment post-stroke.
This dissertation presents a distributed multi-user MIMO Wi-Fi architecture, referred to as D-MIMO, that boosts network throughput performance compared to state-of-the-art Wi-Fi access points with co-located antennas. D-MIMO, at a high level, is a technique by which a set of wireless access points are synchronized and grouped together to jointly serve multiple users simultaneously. The cooperation between the access points reduces intra-network interference and hence improves spatial reuse of channels. We study D-MIMO Wi-Fi networks in four broad sections: (i) by prescribing lightweight and effective solutions to the problems of channel access and multi-user MIMO user selection in D-MIMO Wi-Fi, (ii) through experimental evaluations of the proposed solutions on a D-MIMO Wi-Fi network implemented in an indoor testbed using software defined radio platforms, (iii) by constructing a deep reinforcement learning framework to address dynamic resource management in D-MIMO Wi-Fi networks, and (iv) by investigating the benefits that the D-MIMO architecture brings to dense Wi-Fi networks operating in mmWave (60 GHz) bands. These components form the original contributions of this dissertation. Designing a D-MIMO Wi-Fi network invites us to revisit fundamental Wi-Fi concepts such as carrier-sense multiple access, which governs medium/channel access among Wi-Fi access points. We propose a medium access protocol for D-MIMO that assimilates channel sensing observations from different access points to resolve channel contention among D-MIMO groups. We also propose a novel way of using channel reciprocity and the network topology to select downlink multi-user (MU) MIMO recipients without requesting any form of channel state information feedback from the users during the selection phase. The proposed solutions are lightweight, require no modifications at the user equipment, and hence will work with legacy 802.11ac devices.
We compare the performance of the D-MIMO configuration to that of baseline dense Wi-Fi deployments (access points with co-located antennas), operating in 5 GHz bands, through extensive network simulations. We observe an improvement of 3.5x in median and 191% in mean user throughput, as well as a reduction of 61% in channel access delay, with D-MIMO. Next, we present an implementation of a distributed MIMO Wi-Fi group---using software defined radio platforms---in an indoor experimental testbed. The implemented setup consists of four two-antenna Wi-Fi access points (synchronized in time and phase using a GPS-disciplined clock reference system) and twenty two-antenna users, and is compliant with the 802.11ac very high throughput framework. We use this setup as a proof of concept of the proposed lightweight MU-MIMO user selection algorithm. Through extensive experimental evaluations, we demonstrate that the proposed algorithm outperforms a simple random user selection strategy, achieving an improvement of up to 60% in median and 43% in mean group throughput. Furthermore, the proposed user selection algorithm performs close to optimality: the difference in performance between the proposed algorithm and optimal user selection is a mere 13%. As the third installment of this dissertation, we address two dynamic resource management problems germane to D-MIMO Wi-Fi networks: (i) channel assignment of D-MIMO groups, and (ii) deciding how to cluster access points into D-MIMO groups, in order to maximize user throughput. These problems are known to be NP-hard, and only heuristic solutions exist in the literature; we explore the potential of harnessing principles from deep reinforcement learning (DRL) to address these challenges.
We construct a DRL framework through which a learning agent interacts with a D-MIMO Wi-Fi network, learns about the network environment, and converges to policies that effectively address the aforementioned challenges. Through extensive simulations and on-line training on D-MIMO Wi-Fi networks, we demonstrate the efficacy of DRL agents in achieving an improvement of 20% in user throughput compared to heuristic solutions, particularly when network conditions are dynamic. This work also showcases the effectiveness of DRL agents in meeting multiple network objectives simultaneously, for instance, maximizing user throughput as well as fairness of the throughput distribution among users. In the final part of this dissertation, we consider dense Wi-Fi networks operating in mmWave (60 GHz) bands and use the D-MIMO architecture to improve user throughput in these networks compared to baseline arrangements. Rigorous network simulation results reveal an enhancement of 395% in average user throughput and a reduction of 75% in channel access delay with D-MIMO compared to baseline. We observe an interesting behavior wherein a user achieves very high modulation and coding scheme indices more often with the baseline configuration than with D-MIMO, especially when the user is located close to an access point (AP). This behavior can be ascribed to two causes: (i) a higher probability of line-of-sight on the short-distance AP-user link (which favors baseline), and (ii) a ramification of the use of zero-forcing precoding to cancel inter-user interference in D-MIMO. This observation motivates the design of future networks as amalgams of both baseline and D-MIMO arrangements.
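The zero-forcing precoding mentioned above can be sketched in a few lines. This is a minimal, idealized example with an assumed random channel and receiver noise ignored, not the testbed implementation: with channel matrix H (users by antennas), the precoder W = H^H (H H^H)^{-1} drives the effective channel H W to the identity, so each user sees only its own data stream.

```python
import numpy as np

# Assumed setup: 4 single-stream users served by 8 distributed antennas,
# with an illustrative random Rayleigh-like channel (not measured data).
rng = np.random.default_rng(0)
n_users, n_antennas = 4, 8

H = (rng.standard_normal((n_users, n_antennas))
     + 1j * rng.standard_normal((n_users, n_antennas))) / np.sqrt(2)

# Zero-forcing precoder: the right pseudo-inverse of H.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

effective = H @ W  # ~ identity matrix: inter-user interference cancelled

# Each user receives only its own symbol (noise is ignored in this sketch).
symbols = rng.standard_normal(n_users) + 1j * rng.standard_normal(n_users)
received = effective @ symbols
```

The interference cancellation comes at a power cost: inverting a poorly conditioned channel inflates the norm of W, which is one reason a nearby user can see higher modulation and coding scheme indices under the baseline configuration.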
Over the decades, practitioners and researchers alike have increasingly focused on how organization members can effectively share knowledge in an effort to create and maintain knowledge-intensive services. The growing interest in knowledge sharing is due in part to the increased digitalization and specialization of work practices. For example, the advance of computer-aided design, 3D printing, programming languages, financial regulation, and algorithmic stock trading places an increasing requirement on organization members to keep up with changes in their environment. Rapid technological and regulatory changes drastically reshape how knowledge-intensive services must be approached. Organization members are unable to independently develop the expertise needed to create, maintain, and deliver complex services on their own. Knowledge sharing allows organization members to rely on others to provide services. Effective knowledge sharing increases organization members' performance and, in turn, benefits organizations. However, organization members face challenges that hinder knowledge sharing. Organization members become experts by repeatedly engaging in their area of expertise, and repeated engagement in one area limits their ability to develop expertise in other areas. The way organization members approach problems, the solutions they see, and the way they communicate are shaped and grounded by this repeated engagement. Organization members with different expertise have unique vocabularies, interpretations, and work practices. This dissertation examines how awareness of differences and the development of common ground between organization members can ease knowledge sharing. In doing so, it tests whether awareness of differences alone is sufficient for knowledge sharing, compared to the existence of common ground between organization members.
A mixed methods approach, blending social network analysis with observations and interviews, is used to answer the primary research question and hypotheses. Observations, interviews, and social network data are used to map the communicative relationships between organization members and to identify the statistical likelihood of their co-occurrence in three organizations. The observations and interviews are analyzed using a grounded theory approach and content analysis, while the social network survey data are analyzed using descriptive statistics, quadratic assignment procedures, and exponential random graph modeling. In aggregate, this dissertation examines the types of communication and relational mechanisms that ease knowledge sharing between organization members.
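The quadratic assignment procedure (QAP) mentioned above can be illustrated with a minimal sketch. The toy networks and the simple correlation-based QAP test below are assumptions for illustration, not drawn from the dissertation's data: the rows and columns of one network matrix are permuted together, which preserves its structure, to build a null distribution for the observed tie correlation.

```python
import numpy as np

rng = np.random.default_rng(42)

def offdiag(m):
    """Flatten the off-diagonal entries (the ties) of a square matrix."""
    mask = ~np.eye(m.shape[0], dtype=bool)
    return m[mask]

def qap_correlation(x, y, n_perm=1000):
    """Observed tie correlation plus a QAP permutation p-value."""
    observed = np.corrcoef(offdiag(x), offdiag(y))[0, 1]
    n = x.shape[0]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        x_perm = x[np.ix_(p, p)]  # permute rows and columns together
        if abs(np.corrcoef(offdiag(x_perm), offdiag(y))[0, 1]) >= abs(observed):
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Toy 6-node directed networks: a communication network, and an advice
# network identical to it, so their tie patterns should correlate strongly.
comm = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3), (1, 4)]:
    comm[i, j] = 1.0
advice = comm.copy()

obs, pval = qap_correlation(comm, advice)
```

Permuting rows and columns jointly, rather than shuffling ties independently, is what lets QAP respect the dependence structure inherent in network data.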
Most information today forgoes a solely physical medium and resides in a digital format; however, that information may be altered or lost over time. There is a need to create a library of relevant information that will persist for generations and be accessible to anyone globally. Possible formats for data storage include microfilm, magnetic disks, solid state drives, DNA data storage, optical data storage, holographic data storage, and cloud computing. Each offers solutions for longevity and for physical and data integrity, but the proposed route utilizes cloud computing because of its growing market and increasing use in business and education. The data contained within the library needs to maintain a high level of authenticity and integrity, which requires a system of error-correcting codes to ensure the data is unaltered during storage or transfer. I discuss the basic architecture of data centers, the cost of powering them, the space required as data is added, and global load balancing. Regarding the data itself, I address containment and solutions for environmental and human accidents, severe weather, law and copyright issues, hacking, and, in extreme cases, an electromagnetic pulse from a nuclear explosion. The main data centers should be duplicated in distant and secure locations in both developing and established areas. Data added to the library must be reviewed for relevance and accuracy to prevent the addition of unnecessary or inaccurate data. Finally, I propose best practices, compiled from current information, for creating a long-term data retention library, along with possible future solutions.
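As one concrete example of the error-correcting codes such a library would rely on, the sketch below uses a standard Hamming(7,4) code (chosen here for illustration; it is an assumption, not the proposal's actual coding scheme) to encode four data bits into seven, corrupt one bit in storage, and recover the original data from the syndrome.

```python
import numpy as np

# Systematic Hamming(7,4): generator G = [I | P], parity check H = [P^T | I],
# all arithmetic mod 2. Any single-bit error is detectable and correctable.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data_bits):
    """Map 4 data bits to a 7-bit codeword."""
    return data_bits @ G % 2

def decode(received):
    """Correct a single-bit error via the syndrome, then strip parity bits."""
    syndrome = H @ received % 2
    if syndrome.any():
        # A nonzero syndrome equals the column of H at the error position.
        for pos in range(7):
            if np.array_equal(H[:, pos], syndrome):
                received = received.copy()
                received[pos] ^= 1
                break
    return received[:4]  # systematic code: the first 4 bits are the data

data = np.array([1, 0, 1, 1])
codeword = encode(data)
corrupted = codeword.copy()
corrupted[2] ^= 1            # single bit flip during storage or transfer
recovered = decode(corrupted)
```

Production storage systems use far stronger codes (e.g., Reed-Solomon erasure coding across data centers), but the principle is the same: redundancy structured so that corruption can be detected and reversed.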