Date/Time: April 8 (Wed), 13:15-14:45 (Joint Talk 1/2)
Venue: Research Building No. 7, Information Lecture Room 3 (1F, Room 104)
Conventional cameras produce high resolution images using millions of pixels.
As a result, they make significantly more measurements than needed to solve lightweight vision tasks.
I will present the minimalist camera, which uses a small number of “freeform pixels” whose shapes are automatically designed to be most information-rich for the task at hand. We show that a minimalist camera can be used to monitor an indoor space with 6 pixels, estimate traffic flow with 8 pixels, and compute robot odometry with 4 pixels.
Since a minimalist camera uses a very small number of measurements (freeform pixels), it preserves privacy and can be fully powered using just the light falling on it.
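To make the idea of a freeform pixel concrete, here is a minimal sketch: each pixel integrates light over an arbitrarily shaped spatial mask rather than a tiny square. The masks below are random placeholders chosen for illustration; in the actual system the shapes are optimized for the downstream task.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, N_PIXELS = 64, 64, 6          # scene resolution, number of freeform pixels

scene = rng.random((H, W))                    # stand-in for the incident light field
masks = rng.random((N_PIXELS, H, W)) > 0.5    # binary freeform apertures (placeholders)

# Each measurement is the total light passing through one mask.
measurements = (masks * scene).reshape(N_PIXELS, -1).sum(axis=1)
print(measurements.shape)            # (6,)
```

With only six such measurements per frame, the downstream task model sees a 6-dimensional signal instead of a megapixel image, which is what makes privacy preservation and self-powering plausible.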
Next, I will present an “irradiance camera,” which, for any environmental illumination, measures the irradiance incident on every point on a sphere. We show that this irradiance function can be accurately estimated using just 49 detectors.
Since the number of measurements is small, we show that the camera can produce video of the irradiance function while being entirely self-powered.
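The intuition behind recovering a full spherical function from 49 detectors is that irradiance varies smoothly over the sphere, so a low-order basis fit from sparse readings can reconstruct it everywhere. The quadratic direction basis and synthetic readings below are illustrative assumptions, not the camera's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

def basis(d):
    """Degree-2 polynomial basis in the direction vector d = (x, y, z)."""
    x, y, z = d[..., 0], d[..., 1], d[..., 2]
    return np.stack([np.ones_like(x), x, y, z,
                     x * y, x * z, y * z, x * x - y * y, 3 * z * z - 1], axis=-1)

# 49 detector directions spread over the sphere (random here for simplicity).
dirs = rng.normal(size=(49, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Synthetic "true" irradiance: bright toward a light direction, plus ambient.
light = np.array([0.0, 0.0, 1.0])
readings = 0.3 + np.clip(dirs @ light, 0, None)

# Least-squares fit of 9 basis coefficients to the 49 readings.
coeffs, *_ = np.linalg.lstsq(basis(dirs), readings, rcond=None)

# The fitted coefficients let us query the irradiance in any direction.
query = np.array([[0.0, 0.0, 1.0]])
estimate = basis(query) @ coeffs
```

The fit is overdetermined (49 readings, 9 coefficients), so sensor noise is averaged out rather than amplified.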
We conclude with our plans to use the camera to compute egomotion, solve lightweight vision tasks, and estimate sky and weather conditions.
Date/Time: April 8 (Wed), 13:15-14:45 (Joint Talk 2/2)
Venue: Research Building No. 7, Information Lecture Room 3 (1F, Room 104)
Our ability as humans to recognize materials is critical to every action we take.
Using vision alone, we can infer whether an object will be heavy or light, rough or smooth, and even rigid or soft -- each of which determines how we interact with the object. I will present an approach to material recognition that leverages a taxonomy of materials, which is arranged by shared mechanical properties.
Our recognition model explicitly wires hierarchical relationships between materials to achieve higher performance.
Due to the hierarchical nature of our approach, we can recognize materials and their properties at different levels of specificity depending on the context and confidence.
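Confidence-dependent specificity can be sketched as a simple backoff over the taxonomy: when no leaf material is confident enough, report the coarser parent class whose probability (the sum over its children) clears the threshold. The two-level taxonomy and probabilities below are illustrative assumptions.

```python
TAXONOMY = {                       # hypothetical parent -> children mapping
    "metal":   ["steel", "aluminum"],
    "polymer": ["rubber", "foam"],
}

def predict(leaf_probs, threshold=0.6):
    """Return the most specific label whose probability exceeds the threshold."""
    best_leaf = max(leaf_probs, key=leaf_probs.get)
    if leaf_probs[best_leaf] >= threshold:
        return best_leaf
    # Back off: aggregate leaf probabilities up to their parents.
    parent_probs = {p: sum(leaf_probs[c] for c in cs)
                    for p, cs in TAXONOMY.items()}
    best_parent = max(parent_probs, key=parent_probs.get)
    return best_parent if parent_probs[best_parent] >= threshold else "material"

print(predict({"steel": 0.45, "aluminum": 0.40, "rubber": 0.10, "foam": 0.05}))
# -> metal (neither leaf is confident, but the metal branch sums to 0.85)
```

The same mechanism yields a confident fine-grained answer when one leaf dominates, and a safe coarse answer otherwise.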
While appearance conveys class-level properties of a material, touch can reveal instance-level properties. In the second part of my talk, I will present how we enable tactile robotic systems to perceive materials in real time. We show that, through simple tactile signals, we can recover the mechanical properties of an object while grasping it and adjust the force we are using to grasp it.
This allows us to use the minimum force required to grasp and lift the object, thereby mitigating the risk of damage. We conclude by showing how our approach can be used to differentiate and sort objects, for example, arranging avocados by their level of ripeness.
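The minimum-force grasping loop described above can be sketched as a ramp: increase grip force in small steps until the tactile signal no longer reports slip, then stop. The slip model here is a stand-in for a real tactile sensor, and the numbers are arbitrary.

```python
def slipping(force, required=2.5):
    """Stand-in tactile signal: the object slips below the required force."""
    return force < required

def minimal_grasp_force(step=0.25, max_force=10.0):
    """Ramp grip force up just until slip stops (or a safety cap is hit)."""
    force = 0.0
    while slipping(force) and force < max_force:
        force += step            # increase grip just enough to stop the slip
    return force

print(minimal_grasp_force())     # -> 2.5 (smallest tested force with no slip)
```

Because the loop stops at the first non-slipping force, fragile objects such as ripe avocados are never squeezed harder than necessary.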