We develop methods for camera calibration and geometric understanding from images. Our research includes structure-aware methods for direct pose estimation, extensions of absolute pose regression to multiple scenes, and calibration from natural cues such as clouds, rainbows, and horizon lines. Recent work focuses on cross-view pose estimation, stereo matching for depth estimation, and calibration techniques that work under limited or challenging imaging conditions. We also explore calibration methods for webcam networks and time-lapse sequences, enabling accurate geometric understanding from distributed camera systems.
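As a concrete illustration of calibration from natural cues, the sketch below recovers camera roll and pitch from two image points on a detected horizon line under a simple pinhole model. The function name, the point format, and the small-roll approximation for pitch are assumptions made for illustration; this is not the group's specific method.

```python
import math

def camera_angles_from_horizon(p1, p2, f, cx, cy):
    """Estimate camera roll and pitch (radians) from two image points
    (p1, p2) on the horizon line, given focal length f in pixels and
    principal point (cx, cy). Assumes a pinhole camera with square
    pixels; pitch uses a small-roll approximation. Hypothetical sketch."""
    (x1, y1), (x2, y2) = p1, p2
    # Roll: the slope of the horizon in the image equals the camera roll.
    roll = math.atan2(y2 - y1, x2 - x1)
    # Vertical position of the horizon at the principal point's column
    # (linear interpolation along the detected line).
    if x2 != x1:
        t = (cx - x1) / (x2 - x1)
        v = y1 + t * (y2 - y1)
    else:
        v = 0.5 * (y1 + y2)
    # Pitch: image y grows downward, so a horizon above the principal
    # point (v < cy) gives a positive pitch, i.e. the camera looks down.
    pitch = math.atan2(cy - v, f)
    return roll, pitch
```

For example, a level horizon passing 100 pixels above the image center of a camera with a 500-pixel focal length implies zero roll and a downward pitch of atan(100/500) ≈ 11.3 degrees.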