Visual Thinking in Autism and in AI Systems

NeuroQAM lecture, in collaboration with the Institut des sciences cognitives

Invited speaker:

Maithilee Kunda (https://my.vanderbilt.edu/mkunda/), Department of Electrical Engineering and Computer Science, Vanderbilt University.

Thursday, April 6, 2017, 3:00 p.m.
Université du Québec à Montréal
Local SU-1550
Pavillon Adrien-Pinard (SU)

100, rue Sherbrooke Ouest, Montréal

Abstract:

Despite evidence for the importance of visual mental imagery from many of the cognitive sciences, the field of artificial intelligence (AI) has not yet provided a rigorous computational account of how mental imagery works. Part of the problem comes from confusion in the AI literature between tasks that are presented visually, in an external visual format, versus tasks that are solved visually, using internal visual representations. There is a rich history of AI research in the first category, but the vast majority of these AI systems first convert visual inputs into internal propositional (i.e., abstract/symbolic) representations before solving a task. Fewer systems fall into the second category, but these studies have begun to provide insight into the computational nature of mental imagery and its role in intelligent behavior. I will present a synthesis of AI research into mental imagery over the years, including my recent work in developing AI systems that use purely visual representations to solve problems from standardized intelligence tests, as well as what these systems can tell us about visual mental imagery in typical development and in cognitive conditions such as autism.
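To make the abstract's distinction concrete, here is a minimal illustrative sketch (not the speaker's actual system) of the second category: a toy analogy solver for A:B :: C:? problems that operates directly on pixel arrays, rather than first translating the images into symbolic descriptions. All function names and the set of transforms are assumptions made for this example.

```python
# Illustrative sketch of a "solved visually" approach: the solver never
# builds a symbolic description; it works entirely on pixel arrays.
import numpy as np

def solve_analogy(A, B, C, candidates):
    """Given A:B :: C:?, find the image transform that best maps A to B,
    apply it to C, and return the index of the closest candidate answer."""
    # A small, hypothetical vocabulary of purely visual transforms.
    transforms = {
        "identity": lambda img: img,
        "flip_horizontal": lambda img: np.fliplr(img),
        "flip_vertical": lambda img: np.flipud(img),
        "rotate_180": lambda img: np.rot90(img, 2),
    }
    # Pick the transform whose output most resembles B, by pixel overlap.
    best_name = min(transforms, key=lambda n: np.sum(transforms[n](A) != B))
    predicted = transforms[best_name](C)
    # The answer is the candidate with the smallest pixel-wise difference.
    best_idx = min(range(len(candidates)),
                   key=lambda i: np.sum(candidates[i] != predicted))
    return best_idx, best_name

# Tiny 3x3 binary "images": B is A flipped left-to-right.
A = np.array([[1, 0, 0], [1, 0, 0], [1, 1, 0]])
B = np.fliplr(A)
C = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
candidates = [C, np.fliplr(C), np.flipud(C)]
idx, name = solve_analogy(A, B, C, candidates)
print(idx, name)  # → 1 flip_horizontal
```

A "presented visually, solved propositionally" system would instead extract symbols such as shape, count, and position from the same images and reason over those; the contrast between the two pipelines is the confusion the abstract describes.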