IST COLLOQUIUM 2026

Autism and the Specialized Visuomotor Brain


Erez Freud

Dept. of Psychology, York University, Canada
Speaker Bio: Dr. Erez Freud is an Associate Professor of Psychology at York University and York Research Chair in Visual Cognitive Neuroscience. His research examines how the human brain transforms visual information into purposeful actions, combining motion tracking, neuroimaging, and computational analyses. Through his lab's studies of typical development, aging, and autism, Dr. Freud investigates how perceptual and motor systems become specialized and what happens when this specialization is reduced. His work bridges basic neuroscience with clinical and community applications, advancing our understanding of perception-action relationships across the lifespan.

Date/Time: Thursday, April 16, 12:10-13:10

Venue: Graduate School of Letters, Kyoto University, Bunkomo Basement Multipurpose Space

The human visuomotor system is characterized by a remarkable degree of functional specialization: between pathways dedicated to perception and action, and between hemispheres governing lateralized motor control. But how does this specialization develop, and what conditions does it require? Autism, a neurodevelopmental condition characterized by altered sensory, motor, and social processing, may offer a powerful window into these questions. In this talk, I present converging evidence from three lines of research examining visuomotor behavior in autistic and non-autistic adults. First, using a naturalistic LEGO-building task, we show that autistic individuals exhibit reduced hand lateralization and more idiosyncratic movement trajectories during free object manipulation, revealing that reduced specialization manifests spontaneously, without experimental provocation. Second, using controlled grasping paradigms with visual illusions and stimulus range manipulations, we demonstrate a reduced functional dissociation between perception and action in autism: contextual information that typically influences only perceptual judgments leaks into visuomotor computations, suggesting impaired specialization of the dorsal visual pathway. Third, using a novel dyadic action-prediction task, we show that this reduced specialization extends to the social visuomotor domain, with autistic individuals exhibiting slower, more variable motor responses regardless of their partner's diagnostic identity.
Across all three studies, increased behavioral variability emerges as a consistent signature of reduced specialization. Together, these findings suggest that the visuomotor system is exquisitely sensitive to typical neural maturation and visual experience, and that autism disrupts the developmental trajectory through which specialization normally emerges. This line of research sheds light on the developmental conditions that shape functional specialization in the human visuomotor system.
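As a concrete illustration of the lateralization measure mentioned in the abstract, hand preference is commonly summarized with a laterality index, LI = (R - L) / (R + L). The sketch below is illustrative only; the function name and example counts are not taken from the speaker's studies.

```python
# Illustrative sketch (not the speaker's actual analysis): a standard
# laterality index LI = (R - L) / (R + L), where R and L count right- and
# left-hand actions. LI near +1 or -1 indicates strong lateralization;
# values near 0 indicate reduced lateralization.

def laterality_index(right_hand_actions: int, left_hand_actions: int) -> float:
    """Return LI in [-1, 1]; 0 means no hand preference."""
    total = right_hand_actions + left_hand_actions
    if total == 0:
        raise ValueError("no actions recorded")
    return (right_hand_actions - left_hand_actions) / total

# A strongly right-lateralized participant vs. an unlateralized one.
print(laterality_index(90, 10))  # 0.8
print(laterality_index(55, 45))  # 0.1
```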

Minimalist Sensing for Computer Vision


Jeremy Klotz

Dept. of Computer Science, Columbia University, USA
Speaker Bio: Jeremy Klotz is a fourth-year Ph.D. student in the Computer Science Department at Columbia University, advised by Shree Nayar. He received his BS and MS degrees from CMU in Electrical and Computer Engineering. His research explores visual sensing methods that capture the least information necessary to solve a task. He is supported by an NDSEG fellowship, and his work received the Best Paper Award at ECCV 2024.

Date/Time: Wednesday, April 8, 13:15-14:45 (Joint Talk 1/2)

Venue: Research Building No. 7, Information Lecture Room 3 (Room 104, 1st floor)

Conventional cameras produce high-resolution images using millions of pixels. As a result, they make significantly more measurements than are needed to solve lightweight vision tasks. I will present the minimalist camera, which uses a small number of “freeform pixels” whose shapes are automatically designed to be most information-rich for the task at hand. We show that a minimalist camera can be used to monitor an indoor space with 6 pixels, estimate traffic flow with 8 pixels, and compute robot odometry with 4 pixels. Since a minimalist camera uses a very small number of measurements (one per freeform pixel), it preserves privacy and can be fully powered using just the light falling on it.
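The measurement model behind this idea can be sketched very simply: each freeform pixel acts as a spatial mask over the scene, and its reading is the total light collected under that mask. The sketch below is illustrative only; the random masks, scene, and pixel count stand in for masks that would actually be optimized for a task, which is not shown here.

```python
import numpy as np

# Illustrative sketch, not the actual freeform-pixel design pipeline:
# each "freeform pixel" is modeled as a binary spatial mask over the scene,
# and its reading is the total light collected under that mask. In the real
# system the mask shapes would be optimized jointly with the task.

rng = np.random.default_rng(0)

scene = rng.random((32, 32))            # hypothetical incident light field
masks = rng.random((6, 32, 32)) > 0.5   # six hypothetical freeform pixels

# Each measurement integrates the scene under one mask: y_i = <mask_i, scene>.
measurements = np.array([scene[m].sum() for m in masks])

print(measurements.shape)  # (6,) -- six numbers describe the whole scene
```

The point of the sketch is the extreme compression: a 1024-value scene is reduced to 6 task-relevant measurements.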

Next, I will present an “irradiance camera,” which, for any environmental illumination, measures the irradiance incident on every point on a sphere. We show that this irradiance function can be accurately estimated using just 49 detectors. Since the number of measurements is small, we show that the camera can produce video of the irradiance function while being entirely self-powered. We conclude with our plans to use the camera to compute egomotion, solve lightweight vision tasks, and estimate sky and weather conditions.
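One intuition for why so few detectors can suffice: irradiance from environmental illumination is known to be very smooth over the sphere, so a low-order basis can interpolate it from sparse samples. The sketch below uses a 9-term quadratic basis (equivalent to an order-2 spherical-harmonic expansion on the unit sphere, a standard result); the 49 detector directions and the test irradiance function are made up and are not the speaker's design.

```python
import numpy as np

# Illustrative sketch: irradiance over the sphere is smooth, so a low-order
# basis fits it from few samples. On the unit sphere, an order-2 spherical-
# harmonic expansion is equivalent to a 9-term quadratic in (x, y, z).
# Detector directions and the test irradiance function are hypothetical.

rng = np.random.default_rng(1)

def quadratic_basis(d):
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    return np.stack([np.ones_like(x), x, y, z,
                     x * y, x * z, y * z,
                     x * x - y * y, 3 * z * z - 1], axis=1)

# 49 hypothetical detector directions on the unit sphere.
dirs = rng.normal(size=(49, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# A smooth "irradiance" that truly lies in the basis, for a clean sanity check.
true_coeffs = rng.normal(size=9)
readings = quadratic_basis(dirs) @ true_coeffs

# A least-squares fit from the 49 readings recovers the function everywhere.
fit_coeffs, *_ = np.linalg.lstsq(quadratic_basis(dirs), readings, rcond=None)
print(np.allclose(fit_coeffs, true_coeffs))  # True
```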

Material Perception from Appearance and Touch


Matthew Beveridge

Dept. of Computer Science, Columbia University, USA
Speaker Bio: Matthew Beveridge is a third-year PhD student in the Computer Science Department at Columbia University, advised by Shree Nayar. He received his BS and MEng degrees from MIT in Electrical Engineering and Computer Science. His research focuses on understanding the material properties of our lived environment and developing autonomous systems that leverage this knowledge.

Date/Time: Wednesday, April 8, 13:15-14:45 (Joint Talk 2/2)

Venue: Research Building No. 7, Information Lecture Room 3 (Room 104, 1st floor)

Our ability as humans to recognize materials is critical to every action we take. Using vision alone, we can infer whether an object will be heavy or light, rough or smooth, and even rigid or soft; each of these properties determines how we interact with the object. I will present an approach to material recognition that leverages a taxonomy of materials arranged by shared mechanical properties. Our recognition model explicitly wires hierarchical relationships between materials to achieve higher performance. Due to the hierarchical nature of our approach, we can recognize materials and their properties at different levels of specificity, depending on the context and confidence.
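The "different levels of specificity" idea can be sketched as follows: leaf-level class probabilities are summed up a taxonomy, and the prediction backs off to a coarser node when no leaf is confident enough. The taxonomy, class names, probabilities, and threshold below are all hypothetical, not the speaker's model.

```python
# Illustrative sketch (taxonomy and probabilities are made up, not the
# speaker's model): leaf probabilities are aggregated up a material taxonomy,
# and the prediction stops at the deepest node that is confident enough.

TAXONOMY = {                     # parent -> children (hypothetical)
    "material": ["metal", "polymer"],
    "metal": ["steel", "aluminum"],
    "polymer": ["rubber", "plastic"],
}

def node_probability(node, leaf_probs):
    """Probability of a node = sum of its descendant leaves' probabilities."""
    if node not in TAXONOMY:
        return leaf_probs.get(node, 0.0)
    return sum(node_probability(c, leaf_probs) for c in TAXONOMY[node])

def predict(leaf_probs, threshold=0.5):
    """Descend the taxonomy, stopping at the deepest node above threshold."""
    node = "material"
    while node in TAXONOMY:
        best = max(TAXONOMY[node], key=lambda c: node_probability(c, leaf_probs))
        if node_probability(best, leaf_probs) < threshold:
            break                # not confident enough to go more specific
        node = best
    return node

# Confident all the way down to the leaf:
print(predict({"steel": 0.7, "aluminum": 0.1, "rubber": 0.1, "plastic": 0.1}))   # steel
# Only confident that it is *some* metal:
print(predict({"steel": 0.35, "aluminum": 0.35, "rubber": 0.2, "plastic": 0.1})) # metal
```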

While appearance conveys class-level properties of a material, touch can reveal instance-level properties. In the second part of my talk, I will present how we enable tactile robotic systems to perceive materials in real time. We show that, through simple tactile signals, we can recover the mechanical properties of an object while grasping it and adjust the grip force accordingly. This allows us to use the minimum force required to grasp and lift the object, thereby mitigating the risk of damage. We conclude by showing how our approach can be used to differentiate and sort objects, for example, arranging avocados by their level of ripeness.
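The minimum-force grasp can be sketched as a simple feedback loop: ramp grip force up only while a tactile slip signal persists, converging on roughly the smallest force that holds the object. The function, step size, and toy slip model below are hypothetical, not the speaker's controller.

```python
# Illustrative control loop (signals and thresholds are made up, not the
# speaker's system): increase grip force in small steps until the tactile
# slip signal stops, yielding roughly the minimum force that holds the object.

def stabilize_grasp(slip_detected, force=0.0, step=0.25, max_force=10.0):
    """Increase force until slip_detected(force) is False or max_force is hit."""
    while slip_detected(force) and force < max_force:
        force += step
    return force

# Toy object that stops slipping once the grip force reaches 1.0 N.
final = stabilize_grasp(lambda f: f < 1.0)
print(final)  # 1.0
```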

See the 2025 talks here >>