Higher-Order Musical Temporal Structure in Birdsong

Published: 20 January 2021 | Version 2 | DOI: 10.17632/pkrvf77by8.2


Supporting data for the manuscript "Higher-Order Musical Temporal Structure in Birdsong" (Bilger et al. 2021). Further explanatory information can be found in the Methods section of the manuscript. "Bird Song Musicality Recordings (Bilger et al. 2021)" includes all stimuli audio files used in the described psychophysical survey. "Bird Song Musicality Survey Responses (Bilger et al. 2021)" includes the processed survey dataset used in the final statistical analysis.


Steps to reproduce

High-quality digital audio recordings of bird songs with minimal background noise were edited in Adobe Audition CC (Adobe Systems, San Jose, CA, USA). First, each note or syllable was split into a separate audio file. Each file began at the exact onset of the sound and ended just before the onset of the next note or syllable (i.e., gaps between syllables were grouped with the preceding syllable). Editing was done visually, following ten Cate and Okanoya (2012). Edited audio files therefore varied in length with the note or syllable, from ~6 ms to longer than 1 s. Some recordings included multiple songs; in these cases, notes/syllables were randomized within individual songs and the periods of silence between songs were preserved. Two seconds of relative silence from the original source recording were added before the beginning and after the end of each song to normalize the presentation of stimuli. Envelopes of approximately 0.2 s were applied to each note/syllable file to reduce boundary artifacts upon recombination. Audition's spectral editing tool was used to decrease background noise, normalize the recordings, and remove unwanted sonic artifacts (other bird vocalizations, environmental noise, etc.).

After each bird song was edited into its component notes/syllables, the edited audio files were recombined into two versions: one in the original temporal order and another in random temporal order. The new recordings were then reviewed once more and converted to mp3 files for upload to the online survey platform.

A psychophysical survey for human subjects was created using the online platform Qualtrics. Subjects were presented with original-sequence and temporally randomized recordings from the twenty bird species under review.
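The enveloping and recombination steps described above were carried out manually in Adobe Audition, but the same logic can be sketched programmatically. The following is a minimal illustrative sketch (not the authors' actual pipeline): it assumes each syllable is already available as a NumPy array of samples, applies a raised-cosine onset/offset ramp, optionally shuffles syllable order, and pads the result with silence on both sides.

```python
import numpy as np

def apply_envelope(clip, sr, ramp_s=0.2):
    """Apply a raised-cosine onset/offset ramp to reduce boundary clicks.

    ramp_s follows the ~0.2 s envelope reported in the text; for clips
    shorter than twice the ramp, the ramp is capped at half the clip length.
    """
    n = min(int(ramp_s * sr), len(clip) // 2)
    ramp = 0.5 * (1 - np.cos(np.linspace(0, np.pi, n)))  # rises 0 -> 1
    out = clip.astype(float).copy()
    out[:n] *= ramp            # fade in
    out[-n:] *= ramp[::-1]     # fade out
    return out

def recombine(syllables, sr, rng=None, pad_s=2.0):
    """Concatenate enveloped syllable clips into one stimulus.

    Pass an rng (e.g. np.random.default_rng()) to produce the
    randomized-order version; omit it for the original order.
    pad_s seconds of silence are added before and after the song.
    """
    clips = [apply_envelope(s, sr) for s in syllables]
    if rng is not None:
        rng.shuffle(clips)
    pad = np.zeros(int(pad_s * sr))
    return np.concatenate([pad, *clips, pad])
```

In practice the silence padding came from the original source recording rather than digital zeros, and syllables from multi-song recordings were shuffled only within their own song; the sketch omits those details for brevity.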
Two recordings of human music (excerpts of solo fiddle and banjo performances) were manipulated in the same manner and presented alongside the bird song recordings as a control for subject inattentiveness or perverse responding. The order of species/human-music presentation was randomized, as was the order of natural vs. manipulated stimuli within each species. Subjects started playback of each recording themselves, and a timer prevented advancing to the next sample until the subject had had time to listen to the entire recording. All subjects provided demographic information, including gender; whether they were hearing impaired; whether they had experience identifying wild birds by song; whether they had ever owned pet birds; and how much prior musical experience they had. All subjects were 18 years of age or older. Subjects were recruited through Amazon's crowdsourcing marketplace Mechanical Turk and paid a small fee for their participation. We accepted only subjects who had had at least 90% of their previous MTurk tasks approved, and precautions were taken to prevent individuals from taking the survey more than once.
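The two-level randomization of presentation order was implemented inside Qualtrics; a small sketch of the equivalent logic (illustrative only, with hypothetical function and label names) is:

```python
import random

def build_presentation_order(stimuli, seed=None):
    """Shuffle the order of stimuli (bird species plus human-music
    controls), and within each stimulus shuffle whether the natural
    or the temporally randomized version is heard first.

    Illustrative sketch; the survey itself handled this in Qualtrics.
    """
    rng = random.Random(seed)
    order = list(stimuli)
    rng.shuffle(order)                      # randomize stimulus order
    trials = []
    for name in order:
        pair = [(name, "original"), (name, "randomized")]
        rng.shuffle(pair)                   # randomize within-stimulus order
        trials.extend(pair)
    return trials
```

Each subject thus hears both versions of every stimulus, but neither the stimulus order nor the natural-vs-manipulated order is predictable.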


Institutions: Yale University


Categories: Survey, Audio Recording