A psychophysical evaluation of our CNN-based texture model is now available on bioRxiv. In this study, led by Tom Wallis, we compared our recent parametric model of texture appearance (the CNN model), which uses the features encoded by a deep convolutional neural network (VGG-19), with two other models: the venerable Portilla and Simoncelli model (PS) and an extension of the CNN model in which the power spectrum is additionally matched.
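To make the two ingredients concrete, here is a minimal sketch of the kinds of statistics involved: Gram matrices of convolutional feature maps (as in the CNN model) and the Fourier power spectrum (the additional constraint in the extended model). This is purely illustrative, not the study's implementation; random arrays stand in for VGG-19 activations and texture images.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map,
    normalized by the number of spatial positions."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def power_spectrum(image):
    """Squared magnitude of the 2-D Fourier transform of a grayscale image."""
    return np.abs(np.fft.fft2(image)) ** 2

rng = np.random.default_rng(0)
# Stand-ins: in the actual models these would be VGG-19 activations for the
# original and synthesized textures, and the texture images themselves.
feats_orig = rng.standard_normal((64, 32, 32))
feats_synth = rng.standard_normal((64, 32, 32))
img_orig = rng.standard_normal((128, 128))
img_synth = rng.standard_normal((128, 128))

# Feature-matching term: squared distance between Gram matrices.
gram_loss = np.sum((gram_matrix(feats_orig) - gram_matrix(feats_synth)) ** 2)
# Power-spectrum term added by the extended model.
spectrum_loss = np.sum((power_spectrum(img_orig) - power_spectrum(img_synth)) ** 2)
```

Synthesis then amounts to adjusting the pixels of the candidate image (e.g. by gradient descent) until these statistics match those of the original texture.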
Matching CNN features captured appearance substantially better than the PS model under foveal inspection, and additionally matching the power spectrum improved appearance matching for some textures. Importantly, though, none of the models could produce images indiscriminable from the original for even one of the twelve textures we tested.
Interestingly, under peripheral viewing all models performed very well, but the PS model had a slight advantage over the CNN model.