Our generative AI research pushes the boundaries of how AI synthesizes and represents information across diverse sensors and scales. Recent work includes open-world stereo image generation with unsupervised matching (GenStereo), consistent text-to-360° scene generation (PanoDreamer), and generative-free 3D scene recovery for occlusion removal (DeclutterNeRF). We also develop methods for generating detailed synthetic captions for composed image retrieval, fine-grained satellite image synthesis with structured semantics (VectorSynth), and zero-shot soundscape mapping from satellite imagery (Sat2Sound). Our work ranges from geospatially guided diffusion for mixed-view panorama synthesis to diffusion-guided visual active search in partially observable environments.