Artem Moskalev

Hi, I am Artem 👋. I am a Research Scientist at Johnson & Johnson (Janssen R&D), where I work on reimagining drug discovery with AI. My research interests include geometric deep learning, self-supervised learning and interpretability.

Previously, I did my PhD at AMLab / VIS Lab at the University of Amsterdam, supervised by Prof. Arnold Smeulders. My PhD focused on group-equivariant neural networks. I received my MSc degree from the Skolkovo Institute of Science and Technology, where I worked on inverse problems and computational imaging under the supervision of Prof. Anh-Huy Phan.

Email | CV | Google Scholar | GitHub | Twitter | LinkedIn


Selected publications

On genuine invariance learning without weight-tying

Artem Moskalev, Anna Sepliarskaia, Erik J. Bekkers, Arnold Smeulders

ICML workshop on Topology, Algebra, and Geometry in Machine Learning, 2023

We study the properties and limitations of invariance that neural networks learn from data, compared with the invariance achieved through equivariant weight-tying. We then address the problem of aligning data-driven invariance learning with the genuine invariance of weight-tied models.
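For reference, the two notions compared here can be stated in standard notation (mine, not the paper's): a weight-tied equivariant network satisfies the symmetry condition by construction, while a data-driven model has to learn it from samples.

```latex
% Invariance: the output is unchanged under the action of a group G
f(g \cdot x) = f(x) \qquad \forall g \in G
% Equivariance: the output transforms along with the input,
% via a representation \rho of the group
f(g \cdot x) = \rho(g)\, f(x) \qquad \forall g \in G
```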


LieGG: Studying Learned Lie Group Generators

Artem Moskalev, Anna Sepliarskaia, Ivan Sosnovik, Arnold Smeulders

NeurIPS, 2022

We present LieGG, a method to extract symmetries learned by neural networks and to evaluate the degree to which a network is invariant to them. With LieGG, one can explicitly retrieve learned invariances in the form of the generators of the corresponding Lie groups, without any prior knowledge of the symmetries in the data.
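The core computation behind such symmetry extraction admits a compact sketch. For a network f that is (approximately) invariant to a one-parameter group with generator A, the infinitesimal condition grad f(x) · (A x) = 0 is linear in A, so each data point contributes one linear constraint. Below is a minimal, simplified illustration of this idea for a scalar-output network on flat inputs; the function name and simplifications are mine and not the official LieGG code.

```python
import torch

def extract_lie_generators(model, X, num_generators=1):
    """Sketch: recover approximate Lie group generators a scalar-output
    network is invariant to, from a batch of samples X with shape (N, n).

    Each sample x yields a linear constraint on a generator matrix A:
    grad f(x) . (A x) = <grad f(x) x^T, A>_F = 0. Stacking the flattened
    outer products and taking the smallest right singular vectors of the
    stack gives the approximate null space, i.e. the generators.
    Assumes N >= n * n so the null space can be resolved.
    """
    X = X.clone().requires_grad_(True)
    # One backward pass over the summed per-sample scalar outputs
    (grads,) = torch.autograd.grad(model(X).sum(), X)
    # Rows vec(grad f(x_i) x_i^T), shape (N, n * n)
    E = torch.einsum('bi,bj->bij', grads, X.detach()).flatten(1)
    _, singular_values, Vh = torch.linalg.svd(E, full_matrices=False)
    n = X.shape[1]
    # The smallest singular vectors (last rows of Vh) span the null space
    generators = Vh[-num_generators:].reshape(num_generators, n, n)
    # Small trailing singular values indicate a strongly held invariance
    return generators, singular_values
```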


Learning to Summarize Videos by Contrasting Clips

Ivan Sosnovik, Artem Moskalev, Cees Kaandorp, Arnold Smeulders

Preprint, 2022

In this paper, we formulate video summarization as a contrastive learning problem. We implement the main building blocks that allow one to convert any video analysis model into an effective video summarizer.


Contrasting quadratic assignments for set-based representation learning

Artem Moskalev, Ivan Sosnovik, Volker Fischer, Arnold Smeulders

ECCV, 2022

We go beyond contrasting individual pairs of objects by contrasting objects as sets. Using combinatorial quadratic assignment theory, we derive a set-contrastive objective that serves as a regularizer for contrastive learning methods.
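For context, a pairwise contrastive method scores each anchor against individual positives and negatives, e.g. with the standard InfoNCE loss below (common notation with embeddings z, similarity sim, and temperature tau; the set-contrastive objective in this work regularizes beyond such per-pair terms).

```latex
% Pairwise InfoNCE loss for an anchor z_i with positive z_i'
\mathcal{L}_i = -\log
  \frac{\exp\left(\mathrm{sim}(z_i, z_i') / \tau\right)}
       {\sum_{j} \exp\left(\mathrm{sim}(z_i, z_j') / \tau\right)}
```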


DISCO: accurate Discrete Scale Convolutions (Best Paper Award!)

Ivan Sosnovik, Artem Moskalev, Arnold Smeulders

BMVC, Oral, 2021

We develop a better class of discrete scale-equivariant CNNs, which are more accurate and faster than all previous methods. As a result of accurate scale analysis, they enable scene geometry estimation almost for free.


Relational Prior for Multi-Object Tracking

Artem Moskalev, Ivan Sosnovik, Arnold Smeulders

ICCV VIPriors Workshop, Oral, 2021

Tracking multiple objects individually differs from tracking groups of related objects. We propose a plug-in Relation Encoding Module that encodes relations between tracked objects to improve multi-object tracking.


How to Transform Kernels for Scale-Convolutions

Ivan Sosnovik, Artem Moskalev, Arnold Smeulders

ICCV VIPriors Workshop, 2021

To reach accurate scale equivariance, we derive the general constraints under which scale-convolution remains equivariant to discrete rescaling. We find the exact solution for all cases where it exists and compute an approximation for the rest.
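The property in question can be stated compactly: a mapping Psi is scale-equivariant when it commutes with the rescaling operator L_s, as in the standard condition below; the paper's contribution is characterizing when this holds exactly under discrete rescaling.

```latex
% Scale-equivariance: the mapping commutes with rescaling by any scale s
\Psi(L_s f) = L_s\,\Psi(f) \qquad \forall s
```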


Scale Equivariance Improves Siamese Tracking

Ivan Sosnovik*, Artem Moskalev*, Arnold Smeulders

WACV, 2021

In this paper, we develop the theory for scale-equivariant Siamese trackers. We also provide a simple recipe for making a wide range of existing trackers scale-equivariant, capturing the natural variations of the target a priori.

