16 results for qubit oscillator frequency
Contributors: Benedikt Zoefel, Rufin VanRullen
Phase entrainment of neural oscillations, the brain's adjustment to rhythmic stimulation, is a central component in recent theories of speech comprehension: the alignment between brain oscillations and speech sound improves speech intelligibility. However, phase entrainment to everyday speech sound could also be explained by oscillations passively following the low-level periodicities (e.g., in sound amplitude and spectral content) of auditory stimulation—and not by an adjustment to the speech rhythm per se. Recently, using novel speech/noise mixture stimuli, we have shown that behavioral performance can entrain to speech sound even when high-level features (including phonetic information) are not accompanied by fluctuations in sound amplitude and spectral content. In the present study, we report that neural phase entrainment might underlie our behavioral findings. We observed phase-locking between electroencephalogram (EEG) and speech sound in response not only to original (unprocessed) speech but also to our constructed “high-level” speech/noise mixture stimuli. Phase entrainment to original speech and speech/noise sound did not differ in the degree of entrainment, but rather in the actual phase difference between EEG signal and sound. Phase entrainment was not abolished when speech/noise stimuli were presented in reverse (which disrupts semantic processing), indicating that acoustic (rather than linguistic) high-level features play a major role in the observed neural entrainment.
Our results provide further evidence for phase entrainment as a potential mechanism underlying speech processing and segmentation, and for the involvement of high-level processes in the adjustment to the rhythm of speech.
The influence of music and music therapy on pain-induced neuronal oscillations measured by magnetencephalography
Contributors: Michael Hauck, Susanne Metzner, Fiona Rohlffs, Jürgen Lorenz, Andreas K. Engel
Modern forms of music therapy are clinically established for various therapeutic or rehabilitative goals, especially in the treatment of chronic pain. However, little is known about the neuronal mechanisms that underlie pain modulation by music. Therefore, we attempted to characterize the effects of music therapy on pain perception by comparing the effects of 2 different therapeutic concepts, referred to as receptive and entrainment methods, on cortical activity recorded by magnetencephalography in combination with laser heat pain. Listening to preferred music within the receptive method yielded a significant reduction of pain ratings associated with a significant power reduction of delta-band activity in the cingulate gyrus, which suggests that participants displaced their focus of attention away from the pain stimulus. On the other hand, listening to self-composed “pain music” and “healing music” within the entrainment method exerted major effects on gamma-band activity in primary and secondary somatosensory cortices. Pain music, in contrast to healing music, increased pain ratings in parallel with an increase in gamma-band activity in somatosensory brain structures. In conclusion, our data suggest that the 2 music therapy approaches operationalized in this study seem to modulate pain perception through at least 2 different mechanisms, involving changes of activity in the delta and gamma bands at different stages of the pain processing system.
Contributors: Benjamin D. Charlton, Roland Frey, Allan J. McKinnon, Guido Fritsch, W. Tecumseh Fitch, David Reby
During the breeding season, male koalas produce ‘bellow’ vocalisations that are characterised by a continuous series of inhalation and exhalation sections, and an extremely low fundamental frequency (the main acoustic correlate of perceived pitch). Remarkably, the fundamental frequency (F0) of bellow inhalation sections averages 27.1 Hz (range: 9.8–61.5 Hz), which is 20 times lower than would be expected for an animal weighing 8 kg and more typical of an animal the size of an elephant (Supplemental figure S1A). Here, we demonstrate that koalas use a novel vocal organ to produce their unusually low-pitched mating calls.
Contributors: Sandra Pauletto, Andy Hunt
In this paper we present two experiments on implementing interaction in sonification displays: the first focuses on recorded data (interactive navigation) and the second on data gathered in real time (auditory feedback).
Contributors: Gerold Baier, Thomas Hermann, Ulrich Stephani
Oscillator frequency is 200 Hz for channel T4 and 300 Hz for channel F4. The objective is to introduce a sound synthesis tool for human EEG rhythms that is applicable in real time.
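The per-channel oscillator mapping described above (e.g., 200 Hz for channel T4 and 300 Hz for channel F4) can be sketched as an amplitude-modulation sonification: each EEG channel drives the loudness of its own fixed-frequency sine oscillator. This is a minimal illustrative sketch under assumed parameters (audio sample rate, rectified-signal envelope), not the authors' actual synthesis tool.

```python
import numpy as np

SR = 44100  # assumed audio sample rate (Hz)

def sonify_eeg(eeg, eeg_rate, freqs):
    """Amplitude-modulate one sine oscillator per EEG channel.

    eeg      : array of shape (n_channels, n_samples), EEG signals
    eeg_rate : EEG sampling rate in Hz
    freqs    : oscillator frequency (Hz) per channel index
    Returns a mono audio signal (sum over channels), peak-normalised.
    """
    n_ch, n_eeg = eeg.shape
    duration = n_eeg / eeg_rate
    t = np.arange(int(duration * SR)) / SR
    audio = np.zeros_like(t)
    for ch in range(n_ch):
        # Upsample the rectified EEG trace to audio rate by linear
        # interpolation; it acts as the oscillator's amplitude envelope.
        env = np.interp(t, np.arange(n_eeg) / eeg_rate, np.abs(eeg[ch]))
        audio += env * np.sin(2 * np.pi * freqs[ch] * t)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

# Two mock EEG channels (T4 -> 200 Hz, F4 -> 300 Hz), 1 s at 256 Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal((2, 256))
audio = sonify_eeg(eeg, 256, {0: 200.0, 1: 300.0})
```

Because the oscillator frequencies are fixed and only amplitudes follow the data, each channel remains separable by ear, which is the usual motivation for this style of mapping.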
Contributors: Masaki Ieda, Ji-Dong Fu, Paul Delgado-Olguin, Vasanth Vedantham, Yohei Hayashi, Benoit G. Bruneau, Deepak Srivastava
The reprogramming of fibroblasts to induced pluripotent stem cells (iPSCs) raises the possibility that a somatic cell could be reprogrammed to an alternative differentiated fate without first becoming a stem/progenitor cell. A large pool of fibroblasts exists in the postnatal heart, yet no single “master regulator” of direct cardiac reprogramming has been identified. Here, we report that a combination of three developmental transcription factors (i.e., Gata4, Mef2c, and Tbx5) rapidly and efficiently reprogrammed postnatal cardiac or dermal fibroblasts directly into differentiated cardiomyocyte-like cells. Induced cardiomyocytes expressed cardiac-specific markers, had a global gene expression profile similar to cardiomyocytes, and contracted spontaneously. Fibroblasts transplanted into mouse hearts one day after transduction of the three factors also differentiated into cardiomyocyte-like cells. We believe these findings demonstrate that functional cardiomyocytes can be directly reprogrammed from differentiated somatic cells by defined factors. Reprogramming of endogenous or explanted fibroblasts might provide a source of cardiomyocytes for regenerative approaches.
Contributors: Marianne Latinus, Phil McAleer, Patricia E.G. Bestelmeyer, Pascal Belin
Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice—a key requirement for social interactions. The human brain contains temporal voice areas (TVA) involved in an acoustic-based representation of voice identity [2–6], but the underlying coding mechanisms remain unknown. Indirect evidence suggests that identity representation in these areas could rely on a norm-based coding mechanism [4, 7–11]. Here, we show by using fMRI that voice identity is coded in the TVA as a function of acoustical distance to two internal voice prototypes (one male, one female)—approximated here by averaging a large number of same-gender voices by using morphing. Voices more distant from their prototype are perceived as more distinctive and elicit greater neuronal activity in voice-sensitive cortex than closer voices—a phenomenon not merely explained by neuronal adaptation [13, 14]. Moreover, explicit manipulations of distance-to-mean by morphing voices toward (or away from) their prototype elicit reduced (or enhanced) neuronal activity. These results indicate that voice-sensitive cortex integrates relevant acoustical features into a complex representation referenced to idealized male and female voice prototypes. More generally, they shed light on remarkable similarities in cerebral representations of facial and vocal identity.
Article - Massive Genomic Rearrangement Acquired in a Single Catastrophic Event during Cancer Development
Contributors: Philip J. Stephens, Chris D. Greenman, Beiyuan Fu, Fengtang Yang, Graham R. Bignell, Laura J. Mudie, Erin D. Pleasance, King Wai Lau, David Beare, Lucy A. Stebbings
Cancer is driven by somatically acquired point mutations and chromosomal rearrangements, conventionally thought to accumulate gradually over time. Using next-generation sequencing, we characterize a phenomenon, which we term chromothripsis, whereby tens to hundreds of genomic rearrangements occur in a one-off cellular crisis. Rearrangements involving one or a few chromosomes crisscross back and forth across involved regions, generating frequent oscillations between two copy number states. These genomic hallmarks are highly improbable if rearrangements accumulate over time and instead imply that nearly all occur during a single cellular catastrophe. The stamp of chromothripsis can be seen in at least 2%–3% of all cancers, across many subtypes, and is present in ∼25% of bone cancers. We find that one, or indeed more than one, cancer-causing lesion can emerge out of the genomic crisis. This phenomenon has important implications for the origins of genomic remodeling and temporal emergence of cancer.
Research paper - Evolutionary conservation and neuronal mechanisms of auditory perceptual restoration
Contributors: Christopher I. Petkov, Mitchell L. Sutter
Auditory perceptual ‘restoration’ occurs when the auditory system restores an occluded or masked sound of interest. Behavioral work on auditory restoration in humans began over 50 years ago, using it to model a noisy environmental scene with competing sounds. It has become clear that not only humans experience auditory restoration: restoration has been broadly conserved in many species. Behavioral studies in humans and animals provide a necessary foundation to link the insights being obtained from human EEG and fMRI to those from animal neurophysiology. The aggregate of data resulting from multiple approaches across species has begun to clarify the neuronal bases of auditory restoration. Different types of neural responses supporting restoration have been found, supportive of multiple mechanisms working within a species. Yet a general principle has emerged that responses correlated with restoration mimic the response that would have been given to the uninterrupted sound of interest. Using the same technology to study different species will help us to better harness animal models of ‘auditory scene analysis’ to clarify the conserved neural mechanisms shaping the perceptual organization of sound and to advance strategies to improve hearing in natural environmental settings.