What can you learn about the world by looking at pictures uploaded to social networking websites? We explore the relationship between visual content and geographic location through large-scale analysis of images from social media platforms, including work on dynamic scene appearance and geospatial understanding of user-generated content. Recent projects include PSM, which learns probabilistic embeddings for multi-scale zero-shot soundscape mapping from social media and satellite data; Sat2Cap, which maps fine-grained textual descriptions from satellite images; and interactive event sifting with Bayesian graph neural networks. We also work on revisiting image geolocalization in the deep learning era, learning geo-temporal image features, and understanding natural beauty through large-scale social media analysis.
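At a high level, several of these directions rest on a shared idea: embed overhead imagery and a second modality (captions, audio, or ground-level photos) into a common space with a contrastive objective, so that the resulting embeddings support zero-shot mapping. The sketch below illustrates only that general recipe; the class names, feature dimensions, and simple projection heads are illustrative assumptions, not the architectures used in PSM, Sat2Cap, or any other specific project.

```python
# Minimal sketch of contrastive cross-modal alignment between overhead-image
# features and a second modality (e.g., captions or audio). Illustrative only;
# all module names, dimensions, and heads below are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveAligner(nn.Module):
    def __init__(self, overhead_dim=512, other_dim=768, embed_dim=256):
        super().__init__()
        # Placeholder projection heads; a real system would sit these on top
        # of pretrained image/text/audio backbones rather than raw vectors.
        self.overhead_proj = nn.Linear(overhead_dim, embed_dim)
        self.other_proj = nn.Linear(other_dim, embed_dim)
        self.log_temp = nn.Parameter(torch.tensor(0.07).log())

    def forward(self, overhead_feats, other_feats):
        # Project both modalities into a shared space and L2-normalize.
        z_img = F.normalize(self.overhead_proj(overhead_feats), dim=-1)
        z_oth = F.normalize(self.other_proj(other_feats), dim=-1)
        # Pairwise cosine similarities scaled by a learned temperature.
        logits = z_img @ z_oth.t() / self.log_temp.exp()
        # Matching pairs lie on the diagonal; symmetric cross-entropy loss.
        targets = torch.arange(logits.size(0), device=logits.device)
        loss = 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))
        return loss


if __name__ == "__main__":
    model = ContrastiveAligner()
    overhead = torch.randn(8, 512)   # e.g., satellite-image features
    other = torch.randn(8, 768)      # e.g., caption or audio features
    print(float(model(overhead, other)))
```

Once trained, the same embedding space can be queried in a zero-shot fashion, for example by scoring every location's overhead embedding against the embedding of a free-form text query.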