Contributors: Carlos García Benito, Marta Alcolea, Carlos Mazo
To evaluate the musical performance of the most complete Gravettian aerophone recovered at the site of Isturitz, two experimental replicas were produced from the ulna of a griffon vulture (Gyps fulvus), each with its holes made by a different technique (boring and scraping). The operational chain involved in their manufacture is a sequence of single, simple actions that does not require specialized tools. Their acoustics were assessed by producing sound in three different ways on both replicas.
Contributors: Josef Bartels, Rachel Rodenbach, Katherine Ciesinski, Robert Gramling, Kevin Fiscella, Ronald Epstein
Silences in doctor-patient communication can be “connectional” and communicative, in contrast to silences that indicate awkwardness or distraction. Musical and lexical analyses can identify and characterize connectional silences in consultations between oncologists and patients.
This article considers what happens when sound is understood as affect. It begins by recounting a minor event in which sound moved my body. I use this as a starting point for defining sonic affect as the vibrational movement of bodies of all kinds, moving away from anthropocentric notions of sound based on human perception. The vibration of bodies can be understood as a ‘base layer’ of sound, which may activate or accrue layers of feeling, significance and meaning, but which is not reducible to them. Developing this conceptualisation of sonic affect, I argue that: (i) there are repeating affective tendencies of sound, but these unfold differently in context; (ii) sonic affect exercises power over bodies, sometimes by combining with meaning; and (iii) sound propagates affect through space in distinctive ways, some of which I discuss. These arguments are grounded in numerous examples, reflecting the variety of both sound and affect.
Contributors: David J. Morris, Kurt Steinmetzger, John Tøndering
The modulation of auditory event-related potentials (ERP) by attention generally results in larger amplitudes when stimuli are attended. We measured the P1-N1-P2 acoustic change complex elicited with synthetic overt (second formant change, ΔF2 = 1000 Hz) and subtle (ΔF2 = 100 Hz) diphthongs, while subjects (i) attended to the auditory stimuli, (ii) ignored the auditory stimuli and watched a film, and (iii) diverted their attention to a visual discrimination task. Responses elicited by diphthongs in which F2 rose differed from those in which it fell, which precluded their combined analysis. Multivariate analysis of ERP components from the rising F2 changes showed main effects of attention on P2 amplitude and latency, and on N1-P2 amplitude. P2 amplitude decreased by 40% between the attend and ignore conditions, and by 60% between the attend and divert conditions. The effect of diphthong magnitude was significant for components from a broader temporal window, which included P1 latency and N1 amplitude. N1 latency did not vary between attention conditions, a finding that may be related to stimulation with a continuous vowel. These data show that a discernible P1-N1-P2 response can be observed to subtle vowel quality transitions, even when the attention of a subject is diverted to an unrelated visual task.
How early blindness reorganizes the brain circuitry that supports auditory motion processing remains controversial. We used fMRI to characterize brain responses to in-depth, laterally moving, and static sounds in early blind and sighted individuals. Whole-brain univariate analyses revealed that the right posterior middle temporal gyrus and superior occipital gyrus selectively responded to both in-depth and laterally moving sounds only in the blind. These regions overlapped with regions selective for visual motion (hMT+/V5 and V3A) that were independently localized in the sighted. In the early blind, the right planum temporale showed enhanced functional connectivity with right occipito-temporal regions during auditory motion processing and a concomitant reduced functional connectivity with parietal and frontal regions. Whole-brain searchlight multivariate analyses demonstrated higher auditory motion decoding in the right posterior middle temporal gyrus in the blind compared to the sighted, while decoding accuracy was enhanced in the auditory cortex bilaterally in the sighted compared to the blind. Analyses targeting individually defined visual area hMT+/V5 however indicated that auditory motion information could be reliably decoded within this area even in the sighted group. Taken together, the present findings demonstrate that early visual deprivation triggers a large-scale imbalance between auditory and “visual” brain regions that typically support the processing of motion information.
Contributors: Avelyne S. Villain, Marie S.A. Fernandez, Colette Bouchut, Hédi A. Soula, Clémentine Vignal
The coordination of behaviours between mates is a central aspect of the biology of monogamous pair bonding in birds. This coordination may rely on intrapair acoustic communication, which is surprisingly poorly understood. Here we examined the impact of an increased level of background noise on intrapair acoustic communication at the nest in the zebra finch, Taeniopygia guttata. We monitored how partners adapted their acoustic interactions in response to a playback of wind noise inside the nestbox during incubation. Both zebra finch parents incubate and use coordinated call duets when they meet at the nest. The incubating parent can vocalize to its partner either outside the nestbox (sentinel duets) or inside the nestbox (relief and visit duets), depending on the context of the meeting. Pairs use these duets to communicate about predation threats (sentinel duets), incubation duties (relief duets) and other nesting activities (visit duets). Each of these duets probably represents a critical component of pair coordination. In response to the noise playback, partners called less often and more rapidly during visit and relief duets. Female and male calls alternated more regularly and precisely during relief duets. Mates increased the number of visit duets and their spatial proximity during sentinel duets. Furthermore, both females and males produced louder, higher-frequency and less broadband calls. Taken together, our results show that birds use several strategies to adjust to noise during incubation, underlining the importance of effective intrapair communication for breeding pairs.
Action-theoretic views of language posit that the recognition of others’ intentions is key to successful interpersonal communication. Yet, speakers do not always code their intentions literally, raising the question of which mechanisms enable interlocutors to exchange communicative intents. The present study investigated whether and how prosody—the vocal tone—contributes to the identification of “unspoken” intentions. Single (non-)words were spoken with six intonations representing different speech acts—as carriers of communicative intentions. This corpus was acoustically analyzed (Experiment 1), and behaviorally evaluated in two experiments (Experiments 2 and 3). The combined results show characteristic prosodic feature configurations for different intentions that were reliably recognized by listeners. Interestingly, identification of intentions was not contingent on context (single words), lexical information (non-words), and recognition of the speaker’s emotion (valence and arousal). Overall, the data demonstrate that speakers’ intentions are represented in the prosodic signal which can, thus, determine the success of interpersonal communication.
Contributors: Zhenghui Liu, Fan Zhang, Jing Wang, Hongxia Wang, Jiwu Huang
A content authentication and tamper recovery scheme for digital speech signals is proposed. In this paper, a new compression method for speech signals based on the discrete cosine transform is described, and the compressed signals obtained are used for tamper recovery. A block-based, large-capacity embedding method is developed for embedding the compressed signals. In the proposed scheme, the watermark is generated from the frame number and the compressed signal. If the watermarked speech is attacked, the attacked frames can be located by their frame numbers and reconstructed using the compressed signal. Theoretical analysis and experimental results demonstrate that the scheme not only improves the security of the watermarking system but also locates attacked frames precisely and reconstructs them.
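The core idea of such a scheme (a truncated DCT of each frame serving as a compact self-description from which an attacked frame can later be approximated) can be sketched as follows. This is a minimal illustration of DCT-based frame compression and reconstruction only, not the authors' implementation; the function names, frame length, and number of retained coefficients are assumptions for the example.

```python
import numpy as np
from scipy.fft import dct, idct

def compress_frame(frame, keep=64):
    # Keep only the first `keep` DCT-II coefficients as the compressed signal.
    coeffs = dct(frame, norm="ortho")
    return coeffs[:keep]

def reconstruct_frame(compressed, frame_len):
    # Zero-pad the truncated coefficients and invert the DCT
    # to approximate the original frame.
    coeffs = np.zeros(frame_len)
    coeffs[: len(compressed)] = compressed
    return idct(coeffs, norm="ortho")

# Toy example: one 400-sample frame of a 200 Hz tone at 8 kHz.
fs, frame_len = 8000, 400
t = np.arange(frame_len) / fs
frame = np.sin(2 * np.pi * 200 * t)

compressed = compress_frame(frame, keep=64)
recovered = reconstruct_frame(compressed, frame_len)
```

In the scheme described above, the compressed coefficients of each frame, together with that frame's number, would form the watermark embedded elsewhere in the signal, so that a frame flagged as tampered can be both located and approximately restored.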
An effective CO2 supply system for photoautotrophic microalgae cultivation, a spraying absorption tower combined with an outdoor open raceway pond (ORWP), is developed in this paper. The microalgae yield, productivity and CO2 fixation efficiency were investigated and compared with those of the bubbling method. The maximum biomass yield and productivity reached 0.927 g L−1 and 0.114 g L−1 day−1, respectively. The CO2 fixation efficiency of the microalgae with the spraying tower reached 50%, whereas it was only 11.17% with the bubbling method. Pure CO2 can be used in the spraying absorption tower, at a flow rate only about one third of that required in bubbling cultivation. These results show that this new method of quantifiably controlled CO2 supply can meet the requirements of large-scale microalgae cultivation.