Multimodal Vision Research Laboratory

talk at Carnegie Mellon University (Pittsburgh, PA)

27 October 2015

I visited Srinivasa Narasimhan at CMU. You should check out his recent work on energy-efficient illumination and programmable automotive headlights. While I was there, I gave the following talk (which was basically a "greatest hits" of one area of my research):

Novel Cues for Geocalibration: Cloudy Days, Rainbows, and More

Every day billions of images are uploaded to the Internet. Together they provide many high-resolution pictures of the world, from panoramic views of natural landscapes to detailed views of what someone had for dinner. This imagery has the potential to drive discoveries in a wide variety of disciplines, from environmental monitoring to cultural anthropology. Significant research progress has been made in automatically extracting information from such imagery. One of the key remaining challenges is that we often don’t know where an image was captured and usually know very little about other geometric properties of the camera, such as orientation and focal length. In other words, most images are not geocalibrated. This talk provides an overview of my work on using novel cues, including partly cloudy days, rainbows, and human faces, to geocalibrate Internet imagery and video.
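As an aside (not part of the talk itself), the observation that most images are not geocalibrated is easy to verify on your own photo collection. The sketch below is a minimal illustration, assuming the Pillow library and a hypothetical `example.jpg`: it simply checks whether an image carries GPS coordinates and a focal length in its EXIF metadata, the two pieces of geocalibration a camera will sometimes record for you.

```python
from PIL import Image

GPS_IFD_TAG = 0x8825       # EXIF pointer to the GPS sub-directory (GPSInfo)
FOCAL_LENGTH_TAG = 0x920A  # EXIF FocalLength tag

def has_geocalibration_metadata(path):
    """Return whether an image file records GPS tags and a focal length in its EXIF data."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(GPS_IFD_TAG)        # empty mapping when no GPS tags are present
    focal_length = exif.get(FOCAL_LENGTH_TAG)  # None when the camera did not record it
    return bool(gps_ifd), focal_length is not None

if __name__ == "__main__":
    has_gps, has_focal = has_geocalibration_metadata("example.jpg")  # hypothetical filename
    print(f"GPS tags: {has_gps}, focal length: {has_focal}")
```

For typical Internet imagery, both checks come back false far more often than not, which is exactly the gap the cues in this talk are meant to fill.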