Research
I am interested in foundational representation learning, with a focus on effective training methodologies and pre-training strategies. My work primarily explores contrastive learning frameworks, including supervised variants that leverage label information. A key application of these techniques has been Transformer-based processing of stereo-EEG signals. I also have experience adapting Transformer models to computer vision tasks.
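For concreteness, here is a minimal PyTorch sketch of a supervised contrastive (SupCon) objective of the kind this line of work builds on: embeddings that share a label are pulled together, all others are pushed apart. The function name, temperature default, and batch shapes are illustrative assumptions, not code from any project listed below.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features: torch.Tensor, labels: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive (SupCon) loss: each anchor is attracted to
    all other samples sharing its label and repelled from the rest."""
    z = F.normalize(features, dim=1)                  # (N, D) unit-norm embeddings
    sim = (z @ z.T) / temperature                     # pairwise similarity logits
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))   # an anchor is never its own positive
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                            # skip anchors with no positive in the batch
    return (-pos_log_prob[valid] / pos_counts[valid]).mean()

# Smoke test on random embeddings and labels.
print(supcon_loss(torch.randn(32, 128), torch.randint(0, 4, (32,))))
```

In practice the embeddings would come from a projection head on top of the encoder being pre-trained.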
Previously, I applied related representation learning concepts with Graph Neural Networks, though my current focus lies in the broader paradigms underlying representation learning itself. Looking ahead, I am keen to explore the potential of these techniques in Natural Language Processing and Reinforcement Learning.
Spatial Contrastive Pre-Training of Transformer Encoders for sEEG-based Seizure Onset Zone Detection
Zacharie Rodière,
Pierre Borgnat,
Paulo Gonçalves
Graph Signal Processing (GSP) Workshop, MILA, 2025
workshop page
/
extended abstract
Leveraging clinically informed time-frequency features and spatial contrastive pre-training within a Transformer encoder for improved Seizure Onset Zone (SOZ) localization from stereo-EEG.
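To make the architecture idea above concrete, here is a generic sketch, not the paper's implementation, in which each electrode contact contributes one token of time-frequency features and self-attention mixes spatial context across contacts before a per-channel score is produced. All names, dimensions, and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class ChannelTransformer(nn.Module):
    """Toy spatial encoder: one token per sEEG channel; self-attention
    shares context across channels; a head emits one logit per channel."""
    def __init__(self, feat_dim: int = 64, d_model: int = 128,
                 nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)   # lift per-channel features to model width
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)          # per-channel logit (e.g. SOZ vs. not)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, feat_dim) time-frequency features per contact
        h = self.encoder(self.proj(x))             # (batch, n_channels, d_model)
        return self.head(h).squeeze(-1)            # (batch, n_channels) logits

logits = ChannelTransformer()(torch.randn(2, 90, 64))  # e.g. 90 implanted contacts
```

Under a contrastive pre-training scheme, the encoder would first be trained with an objective like the SupCon sketch above, with the classification head fitted afterwards.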
A deep learning-based pipeline for the conservation assessment of bindings in archives and libraries
Valérie Lee-Gouet,
Yamoun Lahcen,
Zacharie Rodière,
Camille Simon Chane,
Michel Jordan,
Julien Longhi,
David Picard
Multimedia Tools and Applications, 2025 (Published Online)
[DOI]
Developed and evaluated a Vision Transformer-based system for multi-label classification of defects on historical book bindings to aid conservation efforts.
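As a hedged illustration of such a multi-label setup, not the published system, the sketch below swaps the classifier of a pre-trained torchvision Vision Transformer for an n-defect head and trains it with an independent binary cross-entropy per label. The backbone choice and the defect count are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_DEFECTS = 12  # hypothetical number of binding-defect classes

# Pre-trained ViT backbone; replace the single-label head with a multi-label one.
model = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_DEFECTS)

criterion = nn.BCEWithLogitsLoss()  # one independent sigmoid per defect

images = torch.randn(4, 3, 224, 224)                     # dummy batch
targets = torch.randint(0, 2, (4, NUM_DEFECTS)).float()  # multi-hot defect labels
loss = criterion(model(images), targets)
```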