Melissa Monti et al.

Several theories have been proposed about the default configuration of the brain's networks underlying unisensory and multisensory processing abilities and the development of multisensory integration during childhood. Recent empirical findings from animal models, together with behavioral data collected from typically developing (TD) children and children with autism spectrum disorder (ASD), are consistent with the idea that in the immature brain, prior to the systematic cross-sensory exposures typically encountered in everyday life, the individual sensory systems interact in a competitive manner. It remains unknown, however, which neural architecture and mechanisms best describe this naïve configuration of the brain. To fill this gap, this study investigates how sensory modalities interact in the young brain by comparing the predictions of two alternative, biologically plausible neurocomputational models against empirical data. The neural substrates responsible for the altered development of multisensory integrative processes observed in children with ASD are also investigated. By linking the framework suggested by empirical data to a plausible neural implementation, our results challenge the classical notion of cross-sensory brain organization at birth, whereby the various sensory pathways do not initially interact. Instead, we propose that direct inhibitory interactions between sensory modalities take place in the immature brain, and that these inhibitory interactions play a crucial role in the altered multisensory perceptual abilities of children with autism.
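The competitive interaction hypothesized above can be illustrated, under strong simplifying assumptions, as two unisensory firing-rate units coupled by mutual inhibition: the stronger input suppresses the weaker channel. All parameter values, names, and the sigmoidal activation below are illustrative choices, not taken from the models compared in the paper.

```python
import numpy as np

def sigmoid(x, slope=4.0, theta=0.5):
    """Static sigmoidal activation, a common choice in firing-rate models."""
    return 1.0 / (1.0 + np.exp(-slope * (x - theta)))

def simulate_competition(input_a, input_v, w_inhib=2.0, tau=10.0,
                         dt=0.1, steps=2000):
    """Two unisensory units whose net drive is the external input minus
    inhibition proportional to the other unit's activity."""
    ya, yv = 0.0, 0.0
    for _ in range(steps):
        drive_a = input_a - w_inhib * yv  # auditory input minus visual inhibition
        drive_v = input_v - w_inhib * ya  # visual input minus auditory inhibition
        ya += dt / tau * (-ya + sigmoid(drive_a))
        yv += dt / tau * (-yv + sigmoid(drive_v))
    return ya, yv

# With unequal inputs, mutual inhibition yields winner-take-all behavior:
ya, yv = simulate_competition(input_a=1.0, input_v=0.6)
```

With these (invented) parameters, the auditory unit settles at a high activity while the visual unit is suppressed toward zero, a toy version of the competitive regime the abstract attributes to the immature brain.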

Cristiano Cuppini et al.

The accurate perception of audiovisual stimuli relies heavily on the spatial and temporal alignment of the sensory cues, with multisensory enhancement occurring only if those cues are presented in spatiotemporal congruency. While the spatial localization and the temporal binding of audiovisual information have each been investigated in depth separately, many of the neural correlates underlying audiovisual interactions under spatiotemporally varying conditions remain unclear. Empirically evaluating the respective contributions of spatial and temporal discrepancies to behavioral responses can be challenging when both vary simultaneously. Here, we sought to investigate the mutual interaction of temporal and spatial offsets in cue presentation on the neural processing of audiovisual cues. To this end, we developed a biologically inspired neurocomputational model that reproduces behavioral evidence of perceptual phenomena observed in audiovisual tasks, i.e., the modality switch effect (temporal domain) and the ventriloquist effect (spatial domain). Tested against the race model, our network also proved able to successfully simulate the multisensory enhancement of reaction times produced by the concurrent presentation of audiovisual cues. Further investigation of the mechanisms implemented in the network upheld the centrality of cross-sensory inhibition in explaining the modality switch effect, and of cross-modal and lateral intra-area connections in regulating spatial localization. Finally, the model predicts an improvement in the temporal detection of stimuli of different modalities with increasing eccentricity between the stimuli, and indicates a plausible reduction in auditory localization bias as the inter-stimulus interval between spatially disparate cues increases.
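The race-model comparison mentioned above rests on the bound P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V): if the audiovisual cumulative distribution exceeds the sum of the unisensory ones at some latency, statistical facilitation alone cannot explain the speed-up. A minimal sketch of this test on synthetic reaction-time data (the distributions below are invented for illustration, not the paper's data) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reaction-time samples (ms); AV trials are made faster
# to mimic multisensory enhancement.
rt_a = rng.normal(320, 40, 5000)    # auditory-alone trials
rt_v = rng.normal(350, 40, 5000)    # visual-alone trials
rt_av = rng.normal(260, 35, 5000)   # audiovisual trials

def ecdf(samples, t):
    """Empirical cumulative distribution P(RT <= t) on a grid of latencies."""
    return np.mean(samples[:, None] <= t, axis=0)

t = np.linspace(150, 500, 200)
violation = ecdf(rt_av, t) - (ecdf(rt_a, t) + ecdf(rt_v, t))

# Any positive value of `violation` breaks the race-model bound,
# indicating genuine multisensory integration in the fast tail.
race_model_violated = bool(violation.max() > 0)
```

Here the fast audiovisual distribution clearly exceeds the summed unisensory distributions at short latencies, so the bound is violated, which is the pattern the abstract reports the network reproduces.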