Latest news

[Oct 24] Paper accepted for publication at WACV 2025 - Efficient Progressive Image Compression with Variance-aware Masking
[Sep 24] Paper accepted for publication at NeurIPS 2024 - Activation Map Compression through Tensor Decomposition for Deep Learning
[Sep 24] I gave a Nectar track talk at ECML-PKDD 2024 on Mining bias-target Alignment from Voronoi Cells. Slides are available here
[Sep 24] I have been awarded the HDR (Habilitation à Diriger des Recherches)!
[Aug 24] Paper accepted for publication at WACV 2025 - WiGNet: Windowed Vision Graph Neural Network
[Aug 24] Paper accepted for publication at CADL 2024 (ECCV workshop) - Memory-Optimized Once-For-All network
[Aug 24] Paper accepted for publication at ICPR 2024 - WaterMAS: Sharpness-Aware Maximization for Neural Network Watermarking
[Jul 24] Paper accepted for publication at BMVC 2024 - Trimming the Fat: Efficient Compression of 3D Gaussian Splats through Pruning
[Jul 24] Paper accepted for publication at ECCV 2024 - Debiasing surgeon: fantastic weights and how to find them
[Jul 24] Paper accepted for publication at ECCV 2024 - Weighted Ensemble Models Are Strong Continual Learners
[Jul 24] I have been appointed Associate Professor at Institut Polytechnique de Paris!
[Jun 24] The ANR project "BANERA: Bias-Aware NEural aRchitecture seArch" has been accepted for funding.
A PhD student and a PostDoc will be recruited soon!
[Jun 24] Paper accepted for publication at ICIP 2024 - GABIC: Graph-based Attention Block for Image Compression
Paper accepted for publication at DCC 2024 as oral - Domain Adaptation for learned image compression with supervised Adapters
Paper accepted for publication at WACV 2024 - Mini but Mighty: Finetuning ViTs with Mini Adapters

Internship applications for February 2025 are now open!

Mission

In a world where deep learning increasingly defines the state of the art, and where the race for computational capability drives new technologies, it is crucial to open the black box that deep learning is. Many well-intentioned researchers are already taking important steps in this direction, despite a wide multitude and heterogeneity of scientific backgrounds. This is good, this is progress!
We target this in the long term by developing techniques that simplify these models. Some models are easier to prune than others: why? How is information processed inside a deep model, from a macroscopic perspective? These are a few of the questions that must be answered to move in the right direction!

Green AI

Removal of unnecessary neurons and/or synapses to reduce power consumption.

Model debiasing

Understanding biases in the data and curing the trained model.

Privacy in AI

Guaranteeing privacy in AI will be an important theme in the coming years.

Understanding the information flow

Modeling how information is processed in deep models is our ultimate goal.

Currently working with

Ekaterina Iakovleva

PostDoc
Between theory and practice in Deep Learning

Giommaria Pilo

Research Engineer
Deployment of frugal and efficient AI at the edge

Imad Eddine Marouf

PhD student
Efficient transformers for computer vision
Previously: M2 internship, Feb-Sep 2022

Victor Quétu

PhD student
Regularization for deep learning

Rémi Nahon

PhD student
Debiasing in deep neural networks
(Memory- and energy-efficient AI)

Yinghao Wang

PhD student
Foundation models for EEG signal processing

Zhu Liao

PhD student
Hardware accelerator for AIoT

Aël Quélennec

PhD student
On-device learning

Gabriele Spadaro

PhD student
Compression with Graph Neural Networks
Joint PhD (co-tutelle) with the University of Turin, Italy

Dorian Gailhard

PhD student
Generative models and Graph Neural Networks for SoCs

Carl De Sousa Trias

PhD student
Watermarking deep models
(Informal advising, funded within the project NewEMMA)

Lê Trung Nguyen

PhD student
Methods for on-device training

Frédéric Lauron

M2 intern
Low-level folding for neural networks

Ivan Khodakov

Voluntary intern
Methods for debiasing and beyond

Nathan Roos

Research track student
Unsupervised debiasing modeling

Haicheng Wang

PRIM project
Deep neural network pruning

Zhemeng Yu

PRIM project
Deep neural network pruning

Petro Schulzhenko

PRIM project
Distilling depth-compression

Formerly advising