Contributors: Bergelson, Elika, National Institutes of Health (NIH)
Abstract These files are part of our longitudinal study, the Study of Environmental Effects on Developing Linguistic Skills (SEEDLingS). This volume includes only the recordings taken at 6 months of age. The recordings in this volume were analyzed, alongside eyetracking data, for the Bergelson & Aslin citation above. (The code and eyetracking data for the paper will be shared via the GitHub link below once the PNAS embargo is lifted.)

The broader project is described below: SEEDLingS is a project exploring how infants' early linguistic and environmental input plays a role in their learning. We focus on understanding how babies learn words between 6 and 18 months of age from the visual, social, and linguistic world around them. By looking at the complex environment that babies are exposed to, from their perspective, we can attempt to decode how the developing mind interprets and organizes the objects and words it faces. SEEDLingS is unique in that it combines well-controlled lab studies that assess what words infants know with in-the-home audio and video recordings of what words infants hear, and what they see when they hear those words.

Video and audio recordings were generated in the home every month, from 6 to 17 months of age, for a set of 44 infants. The goal of this study is to assess infants' language growth over this period, particularly in the word-learning domain. Every two months, infants came into the lab for an eye-tracking study to test their word comprehension (and, for older infants, their word production). This volume includes the audio and video recordings from the 6-month home visits. Corresponding test dates for each audio and video recording are included as a supplementary spreadsheet, which can be accessed in the materials folder of this volume.

The day-long audio recordings were generated using child-perspective LENA recorders (LENA Research Foundation, Boulder, Colorado, United States) worn by the infant. Each shared audio recording derives from a single LENA recording, converted from LENA's proprietary algorithmic output (.its) to CHA format for annotation. The hour-long video recordings show a composite view of infants' typical lives with 1-4 camera feeds. In the standard setup, infants are equipped with 2 headcams and a centrally placed camcorder that captures the entire room. The precise arrangement and number of cameras varies per video, as a function of whether the child would wear the hat with the cameras and whether any camera's files became corrupted during recording.

Shared files have been scrubbed of certain personal information (e.g., full names, addresses); this leads to some silent periods on the audio track and some blacked-out periods on the video track. Only sections of the files that human listeners have verified to contain no highly personal content (or from which such information has been scrubbed) are shared here. If you notice anything that you believe we may have missed in terms of personal information, please contact us as soon as possible so we can rectify the issue.

Infants in this sample are from the upstate New York area. The sample is generally middle class, with a range of incomes and an above-average maternal education level. The sample is predominantly white. All infants heard majority English at home (>75%) and had no known vision or hearing issues at birth.
Please contact Elika Bergelson directly at email@example.com to discuss further aspects of the sample design, annotation, and analysis. These data were collected at the University of Rochester and are currently being analyzed at Duke University. Further details of the project are available on our website, wiki, and GitHub repo, linked below.
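The abstract above notes that each shared audio file is converted from LENA's .its output to CHA format for annotation. As a rough, hypothetical illustration of what working with .its files involves (not the SEEDLingS pipeline itself), the sketch below parses an .its file, which is XML, and tallies per-speaker segment time; the Segment element and its spkr/startTime/endTime attributes follow the commonly documented .its layout, and the file name is a placeholder.

    import xml.etree.ElementTree as ET

    def parse_lena_seconds(value):
        # .its timestamps are written like "PT123.45S"; strip the wrapper.
        return float(value.removeprefix("PT").rstrip("S"))

    def speaker_time(its_path):
        """Tally total segment duration (s) per LENA speaker code."""
        totals = {}
        tree = ET.parse(its_path)
        # Segment elements carry spkr/startTime/endTime attributes in
        # standard .its exports; adjust if your schema differs.
        for seg in tree.getroot().iter("Segment"):
            spkr = seg.get("spkr", "UNK")
            dur = (parse_lena_seconds(seg.get("endTime"))
                   - parse_lena_seconds(seg.get("startTime")))
            totals[spkr] = totals.get(spkr, 0.0) + dur
        return totals

    if __name__ == "__main__":
        # "recording.its" is a placeholder path, not a SEEDLingS file name.
        for spkr, secs in sorted(speaker_time("recording.its").items()):
            print(f"{spkr}: {secs / 60:.1f} min")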
Excerpt volume: Illustrations of Siegler's "Overlapping Waves Model" of strategy choice from studies of motor development
Contributors: Adolph, Karen, National Science Foundation (NSF), National Institute of Child Health and Human Development (NICHD)
Abstract Siegler's "Overlapping Waves Model" proposes that strategies enter children's repertoires at different times and change in frequency over development as behavior becomes more adaptive and functional. Here, we illustrate the Overlapping Waves Model with video clips from several studies of infant and child motor development. (See links for the specific papers relevant to the excerpts.)

Illustration #1: Infants' use of various strategies for descending impossibly steep slopes (backing feet first, sliding head first, sitting, crawling, walking, avoiding descent). Longitudinal observations show that strategies entered infants' repertoires at various ages and changed in frequency as infants became more accurate at detecting affordances for descent over weeks of crawling and walking. See links for Illustration #1-Longitudinal. Cross-sectional observations show that individual infants use multiple strategies within a single session. See links for Illustration #1-Cross-sectional.

Illustration #2: Walking infants use multiple strategies for descending drop-offs (backing feet first, sitting, crawling, walking, avoiding descent). See links for Illustration #2-Drop-offs.

Illustration #3: 4-year-olds use multiple strategies for pounding a peg with a hammer (radial grip, ulnar grip, two-handed grip, etc.). See links for Illustration #3-Hammering.
Contributors: Bahrick, Lorraine E.
Abstract IPEP Description: The Intersensory Processing Efficiency Protocol (IPEP) is a novel protocol designed to assess fine-grained individual differences in the speed and accuracy of intersensory processing for audiovisual social and nonsocial events. It is appropriate for infants (starting at 3 months of age) as well as children and adults. The IPEP is an audiovisual search task requiring participants to visually locate a sound-synchronized target event amidst 5 asynchronous distractor events. Visual attention is typically monitored using an eyetracker. The protocol indexes intersensory processing through 1) accuracy in selection (frequency of fixating the target), 2) accuracy in matching (duration of looking to the target), and 3) speed in selection (latency to fixate the target), for social and nonsocial events (see highlight video).

IPEP Method: The protocol consists of 48 8-s trials composed of 4 blocks (2 social, 2 nonsocial), alternating between social and nonsocial blocks of 12 trials each. On each trial, participants view a 3 x 2 grid of 6 dynamic visual events along with a natural soundtrack synchronized with one (target) event. The social events consist of 6 faces of women, each reciting a different story with positive affect. The nonsocial events consist of objects striking a surface in an erratic temporal pattern, creating percussive sounds. On each trial, the natural soundtrack synchronized with the target face or object is played for 8 s. Trials are preceded by a 3-s attention-getter (a looming and contracting smiley face).

IPEP Measures: The proportion of total trials on which the target was fixated (PTTF; accuracy in selection), the proportion of total looking time to the target (PTLT; accuracy in matching), and the latency to fixate the target event (RT; speed in selection) are derived from eyetracking (offline coding schemes are being developed). The highlight video below presents 3 exemplar trials from each condition (social, then nonsocial) in the IPEP, with the attention-getter preceding each trial. Note: the precision of audiovisual synchrony viewed in these examples may vary depending on internet connection type and available bandwidth and may not reflect the actual temporal synchronies. The full protocol is played using custom-designed Matlab software that interfaces with eye-tracking software (Tobii Studio).
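For readers who want a concrete sense of how the three IPEP indices relate, the sketch below derives PTTF, PTLT, and RT from hypothetical per-trial fixation summaries. This is an illustrative reconstruction from the definitions above, not the lab's Matlab/Tobii pipeline; the trial structure, field names, and values are assumptions.

    # Hypothetical per-trial eyetracking summaries: for each 8-s IPEP trial,
    # the latency (s) of the first fixation on the target (None = never
    # fixated), total looking time (s) to the target, and total looking
    # time (s) to all six events.
    trials = [
        {"rt": 1.2, "target_look": 3.5, "total_look": 6.0},
        {"rt": None, "target_look": 0.0, "total_look": 5.2},
        {"rt": 0.8, "target_look": 4.1, "total_look": 6.4},
    ]

    def ipep_indices(trials):
        """Compute the three IPEP measures defined in the abstract."""
        fixated = [t for t in trials if t["rt"] is not None]
        # PTTF: accuracy in selection = proportion of trials with a target fixation.
        pttf = len(fixated) / len(trials)
        # PTLT: accuracy in matching = proportion of looking time spent on target.
        ptlt = (sum(t["target_look"] for t in trials)
                / sum(t["total_look"] for t in trials))
        # RT: speed in selection = mean latency to first target fixation.
        rt = sum(t["rt"] for t in fixated) / len(fixated)
        return {"PTTF": pttf, "PTLT": ptlt, "RT": rt}

    print(ipep_indices(trials))

In practice these indices would be computed separately for the social and nonsocial blocks, yielding one set of measures per event type.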
Contributors: Kibbe, Melissa, Kaldy, Zsuzsa
Abstract What drives infants' attention in complex visual scenes? Early models of infant attention suggested that the degree to which different visual features are detectable determines their attentional priority. Here, we tested this by asking whether two targets, defined by different features but each equally salient when evaluated independently, would drive attention equally when pitted head-to-head. In Experiment 1, we presented infants with arrays of Gabor patches in which a target region varied from the background either in color (hue saturation) or in spatial frequency (cycles per degree). Using a forced-choice preferential-looking method, we measured how readily infants fixated the target as its featural difference from the background was parametrically increased. In Experiment 2, we used these psychometric preference functions to choose values of color and spatial frequency that were equally salient (preferred), and pitted them against each other within the same display. We reasoned that if salience is transitive, the stimuli should be iso-salient and infants should therefore show no systematic preference for either stimulus. On the contrary, we found that infants consistently preferred the color-defined stimulus. This suggests that computing visual salience in more complex scenes needs to include factors above and beyond local salience values.
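The Experiment 2 logic (fit a psychometric preference function per feature dimension, then invert it to find feature values producing an equal preference level) can be sketched as below. The logistic form, the scipy-based fit, the 0.75 criterion, and all data values are assumptions for illustration, not the authors' analysis.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k):
        # Preference rises from chance (0.5) toward 1 as the target's
        # featural difference from the background grows.
        return 0.5 + 0.5 / (1.0 + np.exp(-k * (x - x0)))

    def iso_salient_value(x, pref, target_pref=0.75):
        """Fit a psychometric function and invert it at target_pref."""
        (x0, k), _ = curve_fit(logistic, x, pref, p0=[np.median(x), 1.0])
        # Closed-form inverse of the logistic above at level target_pref.
        p = (target_pref - 0.5) / 0.5
        return x0 - np.log(1.0 / p - 1.0) / k

    # Made-up preference data for two feature dimensions (illustrative only).
    sat_levels = np.array([0.1, 0.2, 0.4, 0.8])   # color saturation steps
    sat_pref   = np.array([0.52, 0.60, 0.74, 0.88])
    sf_levels  = np.array([0.5, 1.0, 2.0, 4.0])   # cycles/degree difference
    sf_pref    = np.array([0.51, 0.58, 0.70, 0.85])

    print("iso-salient saturation:", iso_salient_value(sat_levels, sat_pref))
    print("iso-salient spatial frequency:", iso_salient_value(sf_levels, sf_pref))

If salience were transitive, pitting the two returned feature values against each other should yield no systematic preference; the reported color preference is what makes the result interesting.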
Synthetic contrast- and spatial-frequency-normalized grayscale patterns representing 2D wallpaper groups
Contributors: Norcia, Anthony M., Gilmore, Rick O., Liu, Yanxi, National Science Foundation (NSF)
Contributors: Bahrick, Lorraine E.
Abstract MAAP Description: The Multimodal Attention Assessment Protocol (MAAP) is a novel procedure designed to assess individual differences in multiple components of attention to dynamic, audiovisual social and nonsocial events within a single session, derived from standard visual preference procedures and gap-overlap tasks. It is appropriate for infants and children starting at 3 months of age. The MAAP integrates into a single test measures of three fundamental "building blocks" of attention that support the typical development of social and communicative functioning. The protocol indexes 1) duration of looking (attention maintenance), 2) speed of attention shifting, and 3) accuracy of intersensory matching to audiovisual events in the context of high competition (an irrelevant distractor event is present) or low competition (no distractor event present). It thus provides 6 measures of attention—duration, speed, and accuracy under high vs. low competition—for social and nonsocial events. The difference between performance under high- and low-competition conditions reflects the cost of competing stimulation on each measure of attention (see highlight video).

MAAP Method: The protocol consists of 24 13-s trials composed of 2 blocks of 12 trials: one block of social events (women speaking with positive affect) and one block of nonsocial events (objects being dropped into a clear container). On each trial, a 3-s central visual stimulus (dynamic geometric patterns) is followed by two lateral, dynamic video events for 12 s—one in synchrony with an accompanying natural soundtrack and the other out of synchrony. On half of the trials, the central visual event remains on while the lateral events are presented, providing additional competing stimulation (high-competition trials); on the other half, the central visual event disappears as soon as the lateral events appear (low-competition trials). Participants are videotaped and/or coded live by trained observers who are blind to the lateral positions of the events.

MAAP Measures: Duration of looking (proportion of available looking time [PALT] spent fixating either lateral event), speed of attention shifting (reaction time [RT] to shift attention to either lateral event), and accuracy of intersensory matching (proportion of total looking time [PTLT] to the sound-synchronous lateral event) are assessed under high and low competition for social and nonsocial events. The highlight video below presents 2 exemplar trials from each condition in the MAAP: 1) low-competition social, 2) high-competition social, 3) low-competition nonsocial, 4) high-competition nonsocial. Note: the precision of audiovisual synchrony viewed in these examples may vary depending on internet connection type and available bandwidth and may not reflect the actual temporal synchronies. The full protocol is played using custom-designed Matlab software.
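To make the six MAAP measures concrete, the following sketch computes PALT, RT, and PTLT separately for high- and low-competition trials from hypothetical per-trial coding output. This is an illustrative reconstruction from the definitions above, not the lab's Matlab software; the field names and values are assumptions, and the 12-s available looking window follows the lateral-event duration stated in the method.

    from statistics import mean

    TRIAL_S = 12.0  # lateral events play for 12 s: the available looking time

    # Hypothetical coded trials: condition, seconds looking at either lateral
    # event, seconds on the sound-synchronous event, and RT (s) to first shift.
    trials = [
        {"competition": "low",  "lateral_look": 9.6, "sync_look": 6.0, "rt": 0.9},
        {"competition": "low",  "lateral_look": 8.4, "sync_look": 5.1, "rt": 1.1},
        {"competition": "high", "lateral_look": 6.0, "sync_look": 3.0, "rt": 2.0},
        {"competition": "high", "lateral_look": 5.4, "sync_look": 3.2, "rt": 1.7},
    ]

    def maap_measures(trials, condition):
        """Compute the three MAAP measures for one competition condition."""
        subset = [t for t in trials if t["competition"] == condition]
        return {
            # PALT: attention maintenance.
            "PALT": mean(t["lateral_look"] / TRIAL_S for t in subset),
            # RT: speed of shifting to either lateral event.
            "RT": mean(t["rt"] for t in subset),
            # PTLT: accuracy of intersensory matching.
            "PTLT": mean(t["sync_look"] / t["lateral_look"] for t in subset),
        }

    for cond in ("low", "high"):
        print(cond, maap_measures(trials, cond))
    # The high-minus-low difference on each measure indexes the cost of
    # competing stimulation described in the abstract.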
Contributors: Bergelson, Elika, Warlaumont, Anne, Cristia, Alejandrina, Rowland, Caroline, Casillas, Marisa, Rosemberg, Celia, Soderstrom, Melanie, Metze, Florian, Dupoux, Emmanuel, Rasanen, Okko
Abstract This is the 'starter' set for the ACLEW project.
Instructions in Mathematical Equivalence with Gesture and No Gesture Matched for Audio Content, Body Position and Eye Gaze
Contributors: Cook, Susan Wagner, National Science Foundation (NSF)
Abstract These videos are being used as stimuli in ongoing work. They were constructed from live recordings using Final Cut Pro to alter the timing of the videos so that the audio is perfectly aligned across versions.
Contributors: Kukke, Sahana Nalini
Abstract Preliminary assessment of infant grasp development.