NEURAL SYSTEMS INVOLVED IN DISCRIMINATION LEARNING
Neural systems thought to be involved in auditory discrimination learning include the cerebellum, the amygdala, the basal forebrain, the hippocampus, the basal ganglia, the ventral tegmental area, and, of course, the auditory pathways from the cochlea to the auditory cortex. Clearly, many neural systems play a role in discrimination learning, but how these systems interact to mediate increases in performance remains unclear. Behavioral, neuroimaging, and electrophysiological evidence suggests that how stimuli are represented in auditory cortex depends on experience, that auditory cortical representations remain plastic in adults, and that changes in how stimuli are represented can be induced in a variety of ways.

Hearing was the first sensory modality to be successfully rehabilitated using electrical neurostimulation. Electrical stimulation of auditory cortex facilitates experience-dependent shifts in auditory sensitivities, as does stimulation of neuromodulatory neurons during the presentation of sounds. How such changes affect perceptual abilities is not yet known. Some researchers have found that experience-induced changes in cortical responses to pure tones may be uncorrelated with changes in frequency discrimination abilities. However, other experiments involving more complex sounds have shown robust correlations between changes in cortical sensitivities and increases in performance.

Clarifying the effects of neurostimulation and behavioral training on the perceptual encoding of acoustic events will help reveal how these techniques can best be used to remediate cortical deficits resulting from neurophysiological damage or dysfunction. Results from ongoing experiments can also contribute to the creation of general theoretical descriptions and computational models of the neural systems involved in learning and memory. Understanding the neural bases of discrimination learning, and how neurostimulation can be used to enhance the construction and maintenance of stimulus representations, are the primary goals of this research project.

COMPUTATIONAL MODELS OF AUDITORY CORTICAL PROCESSING

The chirplet transform retains the advantages offered by time-frequency and wavelet transforms, and additionally provides a natural way to characterize the different types of processing that have been described for different auditory fields (cortical regions with systematically related response sensitivities). Each auditory field can be viewed as a processor that decomposes sounds within a particular subspace of the complete auditory parameter space. In the current model, these fields correspond either to chirplet subspaces or to chirplet spaces generated by sets of functionally relevant basis functions. Chirplet spaces are highly overcomplete (redundant) because a time-frequency plane can be segmented in infinitely many ways; as a result, the same acoustic feature may be encoded multiple times. Such overcomplete encoding corresponds well with the overlapping, parallel signal processing pathways observed in mammalian auditory cortex.
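To make the decomposition concrete, the minimal sketch below (Python/NumPy; the signal, atom parameters, and function names are illustrative inventions, not taken from the model itself) projects a frequency-modulated test signal onto a small, deliberately redundant dictionary of Gaussian chirplet atoms that differ only in sweep rate. The atom whose sweep rate matches the signal's modulation yields the largest coefficient, showing how overlapping atoms can encode the same region of the time-frequency plane more than once.

```python
import numpy as np

def chirplet(t, t_c, f_c, c, sigma):
    """Gaussian chirplet atom: a tone at f_c (Hz) sweeping at c Hz/s,
    localized at time t_c with duration sigma. Parameter values here
    are hypothetical, chosen only for this demonstration."""
    env = np.exp(-0.5 * ((t - t_c) / sigma) ** 2)
    phase = 2 * np.pi * (f_c * (t - t_c) + 0.5 * c * (t - t_c) ** 2)
    atom = env * np.exp(1j * phase)
    return atom / np.linalg.norm(atom)  # unit energy

fs = 8000.0                         # sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)     # 1 s of signal
# Test signal: an upward chirp (500 Hz + 400 Hz/s sweep), loosely
# analogous to a frequency-modulated song unit.
signal = np.cos(2 * np.pi * (500 * t + 0.5 * 400 * t ** 2))

# A small, overcomplete dictionary: atoms covering the same region of
# the time-frequency plane (t_c = 0.5 s, f_c = 700 Hz) with different
# sweep rates.
sweep_rates = [0.0, 200.0, 400.0, 600.0]   # Hz per second
coeffs = {c: abs(np.vdot(chirplet(t, 0.5, 700.0, c, 0.1), signal))
          for c in sweep_rates}
best = max(coeffs, key=coeffs.get)
print(f"best-matching sweep rate: {best} Hz/s")  # 400.0 matches the signal
```

In a full decomposition each auditory field would correspond to one such family of atoms, with different fields spanning different subspaces of center frequency, sweep rate, and duration.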
Self-organizing maps are neural networks with a highly flexible, easily customized architecture. Maps given biologically based response characteristics can emulate both the spatial organization of response properties observed in auditory cortex and the competitive adaptation currently theorized to underlie changes in auditory cortical organization. As noted above, cortical representations of sound can be modified by repeatedly pairing the presentation of a sound with electrical stimulation of neuromodulatory neurons. Stimulation-induced auditory plasticity can be simulated using parameters intrinsic to the self-organizing map, such as the learning rate (which controls the adaptability of map nodes) and the neighborhood function (which controls the excitability of map nodes).
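The following sketch (again Python/NumPy, with invented node counts, frequencies, and rate parameters) illustrates this idea on a one-dimensional tonotopic map. Pairing a tone with stimulation is modeled here simply as a transient increase in the learning rate and neighborhood width during presentations of that tone, which expands the fraction of map nodes tuned near the paired frequency, a crude analogue of stimulation-induced map plasticity.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 50
# One-dimensional map: each node stores a preferred frequency in kHz.
weights = np.sort(rng.uniform(1.0, 20.0, n_nodes))  # rough initial tonotopy

def train(weights, stimuli, lr, sigma, epochs=200):
    """Standard SOM update on one feature: pull the best-matching node
    and its neighbors toward each stimulus frequency."""
    idx = np.arange(len(weights))
    for _ in range(epochs):
        for f in stimuli:
            bmu = np.argmin(np.abs(weights - f))           # best-matching unit
            h = np.exp(-0.5 * ((idx - bmu) / sigma) ** 2)  # neighborhood function
            weights = weights + lr * h * (f - weights)     # scaled by learning rate
    return weights

background = rng.uniform(1.0, 20.0, 30)  # unpaired ambient sounds (kHz)
paired_tone = 9.0                        # tone paired with stimulation (kHz)

def coverage(w):
    """Fraction of nodes tuned within 1 kHz of the paired tone."""
    return np.mean(np.abs(w - paired_tone) < 1.0)

# Baseline exposure: low learning rate, narrow neighborhood.
weights = train(weights, background, lr=0.01, sigma=1.0)
print(f"nodes near paired tone before pairing: {coverage(weights):.2f}")

# Stimulation-paired exposure, modeled as a transiently higher learning
# rate and wider neighborhood while the paired tone is presented.
weights = train(weights, [paired_tone], lr=0.1, sigma=3.0)
print(f"nodes near paired tone after pairing:  {coverage(weights):.2f}")
```

The second call reliably increases the coverage measure, mirroring the expanded cortical representation of a paired frequency reported in stimulation experiments.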
There are numerous ways to computationally model the processes involved in auditory perception and learning. Connectionist models are convenient tools both for testing the adequacy of qualitative explanations of how brains process sound and for generating new hypotheses. Neural network and signal processing models are useful for instantiating process-level models of neural and cognitive functions, but less so for emulating neural circuits. My goal as a modeler is to provide a concise and precise account of how mammals represent acoustic events.

HUMPBACK WHALE BIOACOUSTICS

Humpback whales vary both the acoustic features of song sounds and the sequential structure of their songs over time. Whales within a particular area appear to match their songs to the songs of other whales that they have heard singing. Because humpback whales continuously change the acoustic properties of their songs based on biologically relevant acoustic events that they have experienced (i.e., songs produced by other whales), they must have highly flexible sound production capabilities as well as exceptional auditory learning and memory skills. To emulate spectrotemporally complex sounds that are novel and potentially distorted by propagation, humpback whales must possess an auditory system that can encode novel sounds precisely enough that they can later be reproduced. Few mammals other than humans and cetaceans have such sophisticated auditory processing capabilities.

What is it about humpback whales and humans that allows them to use sounds so flexibly? Surely it is their brains, but the specific neural and cognitive mechanisms that give rise to these unique abilities are not yet known. Comparative studies of auditory learning and plasticity provide a broad perspective from which to answer questions about how mammals process sound. My goal as a bioacoustician is to clarify how whales and dolphins produce, receive, and use sound, and to determine how similar the auditory processing techniques used by cetaceans are to those used by other mammals, including humans.