I am a scientist based in Tübingen, Germany. Working at the intersection of computational neuroscience, machine learning and computer vision, I would like to understand how neural systems – both biological and artificial – perform visual perception.

Learn more about my research and my publications.


NIPS 2018 paper on domain transfer in recurrent models for large-scale neural prediction on video

Together with Fabian Sinz, I developed a deep recurrent neural network for predicting the activity of thousands of mouse V1 neurons simultaneously recorded with two-photon microscopy, while accounting for confounding factors such as the animal’s gaze position and brain state changes related to running state and pupil dilation. We investigated how well this large-scale model generalizes to stimulus statistics it was not trained on. While our model trained on natural movies correctly predicts some neural tuning properties in responses to artificial noise stimuli, unadapted transfer is not perfect. However, by fine-tuning only the final layer’s weights, the model fully generalizes from movies to noise and maintains high predictive performance in both stimulus domains. Check out the preprint on bioRxiv.
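The transfer idea can be sketched in a few lines of numpy. This is not the paper’s architecture — just a toy illustration of the principle: keep a pretrained feature core frozen (here a stand-in random projection) and refit only the final linear readout on data from the new stimulus domain, using closed-form ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_features(stimuli):
    # Stand-in for a pretrained core network: a fixed random projection
    # followed by a rectifying nonlinearity (weights are never updated).
    W = np.random.default_rng(42).normal(size=(stimuli.shape[1], 64))
    return np.maximum(stimuli @ W, 0.0)

def fit_readout(features, responses, ridge=1.0):
    # Refit only the final linear readout (closed-form ridge regression);
    # this is the "fine-tune just the last layer" step on the new domain.
    F = features
    return np.linalg.solve(F.T @ F + ridge * np.eye(F.shape[1]),
                           F.T @ responses)

# Toy "noise domain" data: 200 stimuli (100-dim) and 10 neurons whose
# responses happen to be a linear readout of the frozen features.
X_noise = rng.normal(size=(200, 100))
Y_noise = frozen_features(X_noise) @ rng.normal(size=(64, 10)) * 0.1

W_readout = fit_readout(frozen_features(X_noise), Y_noise)
pred = frozen_features(X_noise) @ W_readout
print(pred.shape)  # (200, 10)
```

Because only the readout weights change, everything the core learned from natural movies is preserved while the model adapts to the new stimulus statistics.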

New preprint on understanding V1 computation using rotation-equivariant neural networks

I developed an approach to organize and classify neurons in V1 according to their nonlinear computation, ignoring receptive field location and preferred orientation. We use a rotation-equivariant convolutional network to share weights not only across space, but also across orientation. Our preprint describes the approach and some early results we obtained using recordings of around 6,000 neurons in mouse V1.
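The orientation weight-sharing idea can be illustrated in plain numpy (a toy sketch, not the paper’s implementation): one base kernel generates a bank of rotated copies, so a single set of learned weights covers all orientations, and rotating the stimulus together with the kernel leaves the response unchanged.

```python
import numpy as np

def rotated_bank(kernel, n_rot=4):
    # Weight sharing across orientation: one base kernel generates n_rot
    # rotated copies (90-degree steps are exact on a pixel grid).
    return [np.rot90(kernel, k) for k in range(n_rot)]

def corr2d(img, k):
    # Minimal 'valid' 2-D cross-correlation.
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

img = np.zeros((8, 8))
img[:, 4] = 1.0                               # vertical bar stimulus
kernel = np.array([[-1., 2., -1.]] * 3)       # vertical-bar detector

bank = rotated_bank(kernel)
r0 = corr2d(img, bank[0]).max()               # original kernel, original bar
r1 = corr2d(np.rot90(img), bank[1]).max()     # rotated kernel, rotated bar
print(r0, r1)  # 6.0 6.0 -- rotating stimulus and kernel together preserves the response
```

This equivariance is what lets the network pool over the orientation axis and characterize a cell’s nonlinear computation independently of its preferred orientation.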

ECCV paper on visualization of invariances in convnets

The final version of our ECCV 2018 paper on visualizing invariances in convolutional neural networks is available. We find that early and mid-level convolutional layers in VGG-19 exhibit various forms of response invariance: near-perfect phase invariance in some units and invariance to local diffeomorphic transformations in others. At the same time, we uncover representational differences from ResNet-50 in the corresponding layers.
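The phase invariance we report is the defining property of the classical energy model of complex cells. The following numpy toy (not the paper’s visualization method) makes the contrast concrete: a single linear Gabor filter is strongly phase-tuned, while the summed squares of a quadrature pair respond almost identically at every grating phase.

```python
import numpy as np

def gabor(size, freq, phase):
    # Gaussian-windowed cosine grating (a Gabor filter).
    x = np.arange(size) - size // 2
    X, Y = np.meshgrid(x, x)
    envelope = np.exp(-(X**2 + Y**2) / (2 * (size / 6) ** 2))
    return envelope * np.cos(2 * np.pi * freq * X + phase)

def grating(size, freq, phase):
    x = np.arange(size)
    X, _ = np.meshgrid(x, x)
    return np.cos(2 * np.pi * freq * X + phase)

size, freq = 32, 0.15
g0 = gabor(size, freq, 0.0)
g90 = gabor(size, freq, np.pi / 2)   # quadrature pair

phases = np.linspace(0, 2 * np.pi, 16, endpoint=False)
simple = [np.sum(grating(size, freq, p) * g0) for p in phases]          # phase-tuned
energy = [np.sum(grating(size, freq, p) * g0) ** 2
          + np.sum(grating(size, freq, p) * g90) ** 2 for p in phases]  # ~phase-invariant

cv_energy = np.std(energy) / np.mean(energy)
print(cv_energy)  # small: the energy unit barely varies with phase
```

In the paper we find VGG-19 units whose invariances go well beyond this textbook case, including local diffeomorphic transformations.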

Work with Santiago Cadena, Marissa Weis, Leon Gatys and Matthias Bethge.


New paper on the effect of attentional fluctuations on correlated variability in V1

Variability in neuronal responses to identical stimuli is frequently correlated across a population. Attention is thought to reduce these correlations by suppressing noisy inputs shared by the population. However, even with precise control of the visual stimulus, the subject’s attentional state varies across trials. In 2016, we put forward the hypothesis that such fluctuations in attentional state could account for some of the correlated variability observed in cortical areas. To address this question empirically, we designed a novel paradigm that allows us to manipulate the strength of attentional fluctuations.

In the new paper just published in Nature Communications, we recorded from monkeys’ primary visual cortex (V1) while they were performing this task. We found both a pronounced effect of attentional fluctuations on correlated variability at long timescales and attention-dependent reductions in correlations at short timescales. These effects predominate in layers 2/3, as expected from a feedback signal such as attention.


Paper on one-shot segmentation at ICML

Our paper on one-shot segmentation in clutter has been accepted to ICML. In this paper, we tackle a one-shot visual search task: based on a single instruction example (the red Φ in the image below), the goal is to find the same letter in a cluttered image consisting of many letters (left) and segment it. This task is hard for computer vision systems, because the clutter consists of other letters (i.e. it has very similar statistics), and the letters can have arbitrary colors, are drawn by different people, are transformed by affine transformations, and have not been seen during training.



Welcome Mara & Max

Marissa Weis and Max Günthner have started their Master’s thesis projects on March 1st. Mara will be working on image processing using foveated image representations. Max will be investigating nonlinearities in neural responses in primary visual cortex using techniques to visualize convolutional neural networks.

Review on texture and art with deep neural networks

Our review on “Texture and Art with Deep Neural Networks” (free version) has been published online and will appear in the October issue of Current Opinion in Neurobiology.

In the review, written by Leon Gatys, Matthias Bethge and myself, we discuss recent advances in texture synthesis using Convolutional Neural Networks (CNNs) that were motivated by visual neuroscience and have led to substantial progress in image synthesis and manipulation in computer vision. We also discuss how these advances can in turn inspire new research in visual perception and computational neuroscience.
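The core statistic in this line of texture work is the Gram matrix of a CNN layer’s feature maps: channel-by-channel correlations that capture texture while discarding spatial arrangement. A minimal numpy sketch (with a random stand-in feature map rather than real VGG activations):

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) feature maps from one CNN layer.
    # The texture descriptor is the channel-by-channel Gram matrix, which
    # summarizes which features co-occur but not where they occur.
    C = features.reshape(features.shape[0], -1)
    return C @ C.T / C.shape[1]

rng = np.random.default_rng(0)
fmap = rng.normal(size=(16, 8, 8))   # stand-in for VGG-19 activations
G = gram_matrix(fmap)
print(G.shape)  # (16, 16)

# Circularly shifting the "texture" leaves the Gram matrix unchanged,
# which is exactly the spatial invariance a texture summary should have:
G_shift = gram_matrix(np.roll(fmap, 3, axis=2))
print(np.allclose(G, G_shift))  # True
```

Texture synthesis then amounts to optimizing an image until its Gram matrices match those of the target texture across several layers.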

Preprint of paper on human texture perception available

A psychophysical evaluation of our CNN-based texture model is now available on bioRxiv. In the study, led by Tom Wallis, we compared our recent parametric model of texture appearance (the CNN model), which uses the features encoded by a deep convolutional neural network (VGG-19), with two other models: the venerable Portilla and Simoncelli model (PS) and an extension of the CNN model in which the power spectrum is additionally matched.
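The extra constraint in that third model — matching the power spectrum — can be sketched in numpy (an illustrative fragment, not the study’s actual synthesis code): impose the target’s Fourier amplitudes on an image while keeping that image’s phases.

```python
import numpy as np

def match_power_spectrum(img, target):
    # Replace img's Fourier amplitudes with target's while keeping img's
    # phases; for real inputs the result is real up to numerical error.
    F = np.fft.fft2(img)
    amp_t = np.abs(np.fft.fft2(target))
    return np.fft.ifft2(amp_t * np.exp(1j * np.angle(F))).real

rng = np.random.default_rng(1)
img = rng.normal(size=(32, 32))
target = rng.normal(size=(32, 32))
out = match_power_spectrum(img, target)

# out now carries the target's power spectrum:
print(np.allclose(np.abs(np.fft.fft2(out)),
                  np.abs(np.fft.fft2(target)), atol=1e-6))  # True
```

In the full model this constraint is enforced alongside the CNN’s Gram-matrix losses during synthesis, rather than as a one-shot projection.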



More control in style transfer

We just put a preprint on arXiv describing a number of improvements to the style transfer algorithm we developed a while ago. These new features include spatial control, color control and scale control.


Spatial control: applying different styles to different parts of the image (panel b).

Color control: transferring only the style of a painting, but keeping the colors of the original photograph (panel c). You can find additional examples in our blog post on blog.deepart.io.

Scale control: combining small-scale features of one style with large-scale features of another.
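One way to implement color control is to run style transfer as usual and then keep only its luminance, recombining it with the content photograph’s color channels. The numpy sketch below (a simplified illustration using an assumed YIQ-like color transform, with hypothetical `content_rgb`/`stylized_rgb` inputs in [0, 1]) shows that recombination step.

```python
import numpy as np

def luminance_style_recombine(content_rgb, stylized_rgb):
    # Keep the stylized luminance (Y) but the content's chrominance (I, Q),
    # so brush strokes transfer while the photo's colors are preserved.
    to_yiq = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])
    content_yiq = content_rgb @ to_yiq.T
    stylized_yiq = stylized_rgb @ to_yiq.T
    out_yiq = np.concatenate([stylized_yiq[..., :1],   # stylized luminance
                              content_yiq[..., 1:]],   # content colors
                             axis=-1)
    return np.clip(out_yiq @ np.linalg.inv(to_yiq).T, 0.0, 1.0)

rng = np.random.default_rng(2)
content = rng.uniform(size=(4, 4, 3))    # stand-in photograph
stylized = rng.uniform(size=(4, 4, 3))   # stand-in style-transfer output
out = luminance_style_recombine(content, stylized)
print(out.shape)  # (4, 4, 3)
```

Spatial and scale control work analogously by restricting which image regions, or which feature scales, each style’s statistics are matched on.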