Hi, I am Artem 👋. I am a Research Scientist at Johnson & Johnson (Janssen R&D), where I work on reimagining drug discovery with AI. My research interests include geometric deep learning, self-supervised learning and language models.
Previously, I did my PhD at AMLab / VIS Lab at the University of Amsterdam, supervised by Prof. Arnold Smeulders. My PhD focused on group-equivariant neural networks. I received my MSc degree from the Skolkovo Institute of Science and Technology, where I worked on inverse problems and computational imaging under the supervision of Prof. Anh-Huy Phan.
SE(3)-Hyena Operator for Scalable Equivariant Learning (Outstanding Paper Award!)
ICML: Geometry-grounded Representation Learning and Generative Modeling, 2024
We introduce the SE(3)-Hyena operator, a translation- and rotation-equivariant long-convolutional method that processes global geometric context at scale with sub-quadratic complexity. It is significantly more compute- and memory-efficient than transformers.
On genuine invariance learning without weight-tying
ICML: Topology, Algebra, and Geometry in Machine Learning, 2023
We study the properties and limitations of invariance learned by neural networks from data, compared to the invariance achieved through equivariant weight-tying. We then address the problem of aligning data-driven invariance learning with the genuine invariance of weight-tying models.
LieGG: Studying Learned Lie Group Generators
NeurIPS, 2022
We present LieGG, a method to extract symmetries learned by neural networks and to evaluate the degree to which a network is invariant to them. With LieGG, one can explicitly retrieve learned invariances in the form of the generators of the corresponding Lie groups, without any prior knowledge of the symmetries in the data.
Learning to Summarize Videos by Contrasting Clips
Preprint, 2022
In this paper, we formulate video summarization as a contrastive learning problem. We implement the main building blocks that allow one to convert any video analysis model into an effective video summarizer.
Contrasting quadratic assignments for set-based representation learning
ECCV, 2022
We go beyond contrasting individual pairs of objects by contrasting objects as sets. Using combinatorial quadratic assignment theory, we derive a set-contrastive objective that serves as a regularizer for contrastive learning methods.
DISCO: accurate Discrete Scale Convolution (Best Paper Award!)
BMVC, Oral, 2021
We develop a better class of discrete scale-equivariant CNNs, which are more accurate and faster than all previous methods. Thanks to accurate scale analysis, they additionally allow for biased scene-geometry estimation almost for free.
Relational Prior for Multi-Object Tracking
ICCV: VIPriors, Oral, 2021
Tracking multiple objects individually differs from tracking groups of related objects. We propose a plug-in Relation Encoding Module that encodes relations between tracked objects to improve multi-object tracking.
How to Transform Kernels for Scale-Convolutions
ICCV: VIPriors, 2021
To reach accurate scale equivariance, we derive general constraints under which scale-convolution remains equivariant to discrete rescaling. We find the exact solution for all cases where it exists, and compute an approximation for the rest.
Scale Equivariance Improves Siamese Tracking
WACV, 2021
In this paper, we develop the theory of scale-equivariant Siamese trackers. We also provide a simple recipe for making a wide range of existing trackers scale-equivariant, so they capture the natural scale variations of the target a priori.