Welcome

I am a computational neuroscientist based in Tübingen, Germany.

My goal is to understand how neural systems – both biological and artificial – perform visual perception.

Learn more about my research and my publications.


ECCV paper on visualization of invariances in convnets

The final version of our ECCV 2018 paper on visualizing invariances in convolutional neural networks is available. We find that early and mid-level convolutional layers in VGG-19 exhibit various forms of response invariance: near-perfect phase invariance in some units and invariance to local diffeomorphic transformations in others. At the same time, we uncover representational differences with ResNet-50 in its corresponding layers.

Work with Santiago Cadena, Marissa Weis, Leon Gatys and Matthias Bethge.


New paper on the effect of attentional fluctuations on correlated variability in V1

Variability in neuronal responses to identical stimuli is frequently correlated across a population. Attention is thought to reduce these correlations by suppressing noisy inputs shared by the population. However, even with precise control of the visual stimulus, the subject’s attentional state varies across trials. In 2016, we put forward the hypothesis that such fluctuations in attentional state could cause some of the correlated variability observed in cortical areas. To address this question empirically, we designed a novel paradigm that allows us to manipulate the strength of attentional fluctuations.

In the new paper just published in Nature Communications, we recorded from monkeys’ primary visual cortex (V1) while they were performing this task. We found both a pronounced effect of attentional fluctuations on correlated variability at long timescales and attention-dependent reductions in correlations at short timescales. These effects predominate in layers 2/3, as expected from a feedback signal such as attention.


Paper on one-shot segmentation at ICML

Our paper on one-shot segmentation in clutter has been accepted to ICML. In this paper, we tackle a one-shot visual search task: based on a single instruction example (the red Φ in the image below), the goal is to find the same letter in a cluttered image that consists of many letters (left) and segment it. This task is pretty hard for computer vision systems, because the image clutter consists of other letters (i.e. very similar statistics), and the target letters can have arbitrary colors, are drawn by different people, undergo affine transformations, and have not been seen during training.



Welcome Mara & Max

Marissa Weis and Max Günthner have started their Master’s thesis projects on March 1st. Mara will be working on image processing using foveated image representations. Max will be investigating nonlinearities in neural responses in primary visual cortex using techniques to visualize convolutional neural networks.

Review on texture and art with deep neural networks

Our review on “Texture and Art with Deep Neural Networks” (free version) has been published online and will appear in the October issue of Current Opinion in Neurobiology.

In the review, written by Leon Gatys, Matthias Bethge and myself, we discuss recent advances in texture synthesis using Convolutional Neural Networks (CNNs) that were motivated by visual neuroscience and have led to a substantial advance in image synthesis and manipulation in computer vision. We also discuss how these advances can in turn inspire new research in visual perception and computational neuroscience.
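The texture statistic at the heart of this line of work is the Gram matrix of CNN feature maps: pairwise correlations between feature channels, averaged over space, so that spatial layout is discarded while the texture "look" is kept. As a minimal sketch (not the full synthesis pipeline, which additionally optimizes an image to match these statistics across several VGG-19 layers):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a CNN feature map.

    `features` has shape (channels, height, width). The result holds
    the spatially averaged pairwise correlations between feature
    channels -- the summary statistic matched in CNN-based texture
    synthesis. Spatial arrangement is discarded by the averaging.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten spatial dimensions
    return f @ f.T / (h * w)         # (channels, channels), symmetric

# Example with a random "feature map" standing in for a VGG-19 activation
rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 16, 16))
g = gram_matrix(fmap)
print(g.shape)  # (8, 8)
```

Because the statistic is an average over spatial positions, any spatial shuffling of the feature map leaves the Gram matrix unchanged, which is exactly why matched images share texture but not layout.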

Preprint of paper on human texture perception available

Our psychophysical evaluation of the CNN-based texture model is now available on bioRxiv. In the study led by Tom Wallis, we compared our recent parametric model of texture appearance (CNN model), which uses the features encoded by a deep convolutional neural network (VGG-19), with two other models: the venerable Portilla and Simoncelli model (PS) and an extension of the CNN model in which the power spectrum is additionally matched.



More control in style transfer

We just put a preprint on arXiv describing a number of improvements to the style transfer algorithm we developed a while ago. These new features include spatial control, color control and scale control.


Spatial control: applying different styles to different parts of the image (panel b).

Color control: transferring only the style of a painting, but keeping the colors of the original photograph (panel c). You can find additional examples in our blog post on blog.deepart.io.

Scale control: combining small-scale features of one style with large-scale features of another.