Hi, I am Artem 👋. I am a Research Scientist at Johnson & Johnson, where I work on reimagining drug discovery with AI. My research focuses on geometric deep learning and language models, with a keen interest in developing geometry-aware methods that efficiently learn from unlabeled data.
Previously, I did my PhD at AMLab / VIS Lab at the University of Amsterdam, supervised by Prof. Arnold Smeulders. My PhD focused on group-equivariant neural networks. I received my MSc degree from the Skolkovo Institute of Science and Technology, where I worked on inverse problems and computational imaging under the supervision of Prof. Anh-Huy Phan.
I love history and learning about cultures. I like folk and metal music. In my free time, I enjoy playing chess and padel.
Email | CV | LinkedIn | X (Twitter) | Bluesky | GitHub | Google Scholar
Geometric Hyena Networks for Large-scale Equivariant Learning
ICML, Spotlight, 2025
Existing equivariant models are either computationally expensive or sacrifice global geometric context. We introduce Geometric Hyena, the first equivariant long-convolutional model, combining sub-quadratic complexity with rotational and translational equivariance. On RNA property prediction and protein dynamics, it outperforms prior models while using far less memory and compute: it processes 30k tokens 20× faster and supports 72× longer context than equivariant transformers.
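For intuition, here is a minimal sketch of the long-convolution trick behind the sub-quadratic cost: a global convolution evaluated via the FFT in O(N log N) instead of O(N²) attention. This is a generic illustration, not the equivariant operator from the paper; the filter and its parametrization are placeholders.

```python
import numpy as np

def fft_long_conv(x, h):
    """Global circular convolution of a token sequence with a long filter.

    x: (N, d) sequence of d-dimensional tokens
    h: (N,) filter spanning the full context
    FFT evaluation costs O(N log N), vs O(N^2) for pairwise attention.
    """
    N = x.shape[0]
    Xf = np.fft.rfft(x, n=N, axis=0)           # spectrum of the sequence
    Hf = np.fft.rfft(h, n=N)[:, None]          # filter spectrum, broadcast over channels
    return np.fft.irfft(Xf * Hf, n=N, axis=0)  # back to the sequence domain

# toy usage: 30k tokens, 8 channels, one smooth decaying filter
x = np.random.randn(30_000, 8)
h = np.exp(-np.arange(30_000) / 1_000.0)
y = fft_long_conv(x, h)
print(y.shape)  # (30000, 8)
```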
InfoSEM: A Deep Generative Model with Informative Priors for Gene Regulatory Network Inference
ICML, 2025
We present InfoSEM, an unsupervised model for GRN inference that uses pretrained gene embeddings and known interactions as priors. Unlike supervised methods, which often exploit dataset-specific biases, InfoSEM captures true biological signals and achieves state-of-the-art performance on biologically grounded benchmarks.
HARMONY: A Multi-Representation Framework for RNA Property Prediction
ICLR: AI for Nucleic Acids, Oral, 2025
We introduce HARMONY, a neural network that dynamically integrates 1D, 2D, and 3D representations and seamlessly adapts to diverse real-world scenarios. Our experiments demonstrate that HARMONY consistently outperforms existing baselines across multiple RNA property prediction tasks on established benchmarks, offering a robust and generalizable approach to RNA modeling.
HELM: Hierarchical Encoding for mRNA Language Modeling
ICLR, 2025
We introduce Hierarchical Encoding for mRNA Language Modeling (HELM), a novel pre-training strategy that incorporates codon-level hierarchical structure into language model training. HELM modulates the loss function based on codon synonymity, aligning the model's learning process with the biological reality of mRNA sequences.
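As a rough illustration of what loss modulation by codon synonymity can look like (a simplified sketch with a hypothetical 8-codon toy table, not HELM's exact formulation): soften the cross-entropy target so some probability mass is shared among codons that encode the same amino acid.

```python
import numpy as np

# Toy codon table: codon index -> amino-acid index (hypothetical 8-codon alphabet;
# the real genetic code maps 61 coding codons onto 20 amino acids).
CODON_TO_AA = np.array([0, 0, 1, 1, 2, 2, 2, 3])

def synonymity_smoothed_targets(labels, alpha=0.8):
    """Soft targets: mass alpha on the true codon, the remainder spread
    uniformly over its synonymous codons (same amino acid)."""
    targets = np.zeros((len(labels), len(CODON_TO_AA)))
    for i, c in enumerate(labels):
        syn = np.flatnonzero(CODON_TO_AA == CODON_TO_AA[c])
        targets[i, syn] = (1.0 - alpha) / len(syn)
        targets[i, c] += alpha
    return targets

def synonymity_aware_xent(logits, labels, alpha=0.8):
    """Cross-entropy against synonymity-smoothed targets."""
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean((synonymity_smoothed_targets(labels, alpha) * logp).sum(axis=1))

logits = np.random.randn(4, 8)
labels = np.array([0, 2, 5, 7])
print(synonymity_aware_xent(logits, labels))
```

The effect is that confusing two synonymous codons costs less than confusing codons that encode different amino acids, which is the biological hierarchy the pre-training aligns with.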
Beyond Sequence: Impact of Geometric Context for RNA Property Prediction
ICLR, 2025
We present the first systematic study of incorporating geometric context—beyond 1D sequences—into RNA property prediction. We reveal that geometry-aware models are more accurate while requiring less training data. At the same time, plain sequence-based models are the most robust to sequencing noise.
SE(3)-Hyena Operator for Scalable Equivariant Learning (Best Paper Award!!!)
ICML: Geometry-grounded Representation Learning and Generative Modeling, 2024
We introduce the SE(3)-Hyena operator, a translation- and rotation-equivariant long-convolutional method for processing global geometric context at scale with sub-quadratic complexity. It is significantly more compute- and memory-efficient than transformers.
On genuine invariance learning without weight-tying
ICML: Topology, Algebra, and Geometry in Machine Learning, 2023
We study the properties and limitations of invariance that neural networks learn from data, compared to the invariance achieved through equivariant weight-tying. We then address the problem of aligning data-driven invariance learning with the genuine invariance of weight-tying models.
LieGG: Studying Learned Lie Group Generators
NeurIPS, Spotlight, 2022
We present LieGG, a method to extract symmetries learned by neural networks and to evaluate the degree to which a network is invariant to them. With LieGG, one can explicitly retrieve learned invariances in the form of the generators of the corresponding Lie groups, without any prior knowledge of the symmetries in the data.
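The underlying recipe can be demonstrated on a toy invariant function (a simplified numpy sketch, not the paper's full pipeline, which works with trained networks): invariance under a one-parameter group exp(tG) means ∇f(x)ᵀGx = 0 for every input, so stacking these linear constraints over data and taking the SVD null space recovers the generator G.

```python
import numpy as np

def f(x):
    """Toy rotation-invariant 'network': f(x) = ||x||^2."""
    return np.sum(x ** 2, axis=-1)

def grad_f(x):
    return 2.0 * x  # analytic gradient of ||x||^2

# Invariance under exp(tG) means d/dt f(exp(tG) x)|_{t=0} = grad_f(x)^T G x = 0.
# Each sample contributes one linear constraint on vec(G): outer(grad, x).ravel().
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
E = np.stack([np.outer(grad_f(x), x).ravel() for x in X])

# Right singular vectors with (near-)zero singular values span the learned
# generators; here the null space is one-dimensional.
_, S, Vt = np.linalg.svd(E, full_matrices=False)
G = Vt[-1].reshape(2, 2)
print(np.round(G / np.abs(G).max(), 3))   # ∝ [[0, 1], [-1, 0]]: the 2D rotation generator
print("smallest singular value:", S[-1])  # ≈ 0: the symmetry holds exactly
```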
Contrasting quadratic assignments for set-based representation learning
ECCV, 2022
We go beyond contrasting individual pairs of objects and focus on contrasting objects as sets. Using combinatorial quadratic assignment theory, we derive a set-contrastive objective that serves as a regularizer for contrastive learning methods.
DISCO: accurate Discrete Scale Convolution (Best Paper Award!!!)
BMVC, Oral, 2021
We develop a better class of discrete scale-equivariant CNNs that are more accurate and faster than all previous methods. As a result of accurate scale analysis, they enable scene geometry estimation almost for free.
Relational Prior for Multi-Object Tracking
ICCV: VIPriors, Oral, 2021
Tracking multiple objects individually differs from tracking groups of related objects. We propose a plug-in Relation Encoding Module that encodes relations between tracked objects to improve multi-object tracking.
Scale Equivariance Improves Siamese Tracking
WACV, 2021
We develop the theory of scale-equivariant Siamese trackers and provide a simple recipe for making a wide range of existing trackers scale-equivariant, capturing the natural variations of the target a priori.
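To illustrate the idea of giving a tracker an explicit scale axis, here is a naive multi-scale correlation sketch (the paper instead builds scale equivariance into the convolutions themselves; this is only the underlying intuition, with toy shapes and scales):

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import correlate2d

def multiscale_response(search, template, scales=(0.5, 0.71, 1.0, 1.41, 2.0)):
    """Correlate the template against rescaled copies of the search image.
    The response gains a scale axis: if the target appears zoomed, the peak
    moves along this axis instead of the match degrading."""
    scores = []
    for s in scales:
        resized = zoom(search, s, order=1)               # rescale the search region
        resp = correlate2d(resized, template, mode="valid")
        scores.append(resp.max())                        # best match at this scale
    return np.array(scores)

rng = np.random.default_rng(0)
template = rng.standard_normal((12, 12))
search = np.zeros((64, 64))
search[20:44, 20:44] = zoom(template, 2.0, order=1)      # target appears 2x larger
scores = multiscale_response(search, template)
print(scores.argmax())  # typically peaks at s = 0.5, which undoes the 2x zoom
```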