Systems and methods are described for co-registering, displaying, and quantifying images from multiple medical imaging modalities, such as CT, MRI, and SPECT. In this novel approach, co-registration and image fusion are based on multiple user-defined Regions of Interest (ROIs), which may be subsets of entire image volumes from multiple modalities, where each ROI may depict data from a different imaging modality. A user-selected ROI of a first image modality may be superposed over, or blended with, the corresponding ROI of a second image modality, and the entire second image may be displayed with the superposed or blended ROI.
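The ROI-level blending described above can be sketched as follows. This is an illustrative assumption of one way to blend a co-registered ROI, not the described system itself; the function name `blend_roi`, the array shapes, and the `alpha` weighting parameter are all hypothetical.

```python
import numpy as np

def blend_roi(base, overlay, roi, alpha=0.5):
    """Blend `overlay` into `base` only inside the boolean ROI mask.

    base, overlay : 2-D arrays of the same shape (two co-registered slices)
    roi           : boolean mask selecting the region of interest
    alpha         : overlay weight inside the ROI (0 = base only, 1 = overlay only)
    """
    fused = base.astype(float).copy()
    fused[roi] = (1.0 - alpha) * fused[roi] + alpha * overlay[roi].astype(float)
    return fused

# Example: blend a SPECT-like patch into a CT-like slice inside a square ROI.
ct = np.full((4, 4), 100.0)      # stand-in for a CT slice
spect = np.full((4, 4), 200.0)   # stand-in for a co-registered SPECT slice
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True

fused = blend_roi(ct, spect, roi, alpha=0.5)
# Inside the ROI the pixels are blended (150.0 here); outside, the second
# image is displayed unchanged (100.0), matching the "entire second image
# with blended ROI" display described above.
```

Setting `alpha=1.0` would reduce blending to pure superposition of the first modality's ROI over the second image.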