Research
My research focuses on developing more effective and interpretable representation learning methods, particularly for high-dimensional biomedical signals and multimodal data. I work at the intersection of contrastive learning, Transformer architectures, and domain-specific knowledge integration to solve real-world problems.
A key contribution of my work is the development of spatial contrastive pre-training strategies for Transformer encoders applied to stereo-EEG signal analysis. This approach leverages the spatial relationships between electrodes to improve seizure onset zone detection, demonstrating how domain knowledge can enhance representation learning in clinical applications.
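To illustrate the idea of spatially-informed contrastive pre-training, here is a minimal sketch of an InfoNCE-style loss in which electrode pairs within a given spatial radius are treated as positives. This is an illustrative assumption, not the published method: the function name, the distance-threshold criterion, and all parameters are hypothetical.

```python
import torch
import torch.nn.functional as F

def spatial_info_nce(embeddings, positions, temperature=0.1, radius=1.0):
    """InfoNCE-style contrastive loss where electrode pairs closer than
    `radius` are treated as positives (illustrative sketch only).

    embeddings: (N, d) per-electrode representations from the encoder
    positions:  (N, 3) electrode coordinates
    """
    n = embeddings.shape[0]
    eye = torch.eye(n, dtype=torch.bool)
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature                   # (N, N) cosine similarities
    dist = torch.cdist(positions, positions)      # (N, N) pairwise distances
    pos_mask = (dist < radius) & ~eye             # spatial neighbours as positives
    # log-softmax over all non-self pairs; average the positive-pair terms
    logits = sim.masked_fill(eye, float("-inf"))
    log_prob = F.log_softmax(logits, dim=1)
    return -log_prob[pos_mask].mean()
```

The encoder is pre-trained by minimizing this loss, so that representations of spatially adjacent electrodes are pulled together before fine-tuning on the seizure onset zone labels.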
My research has spanned multiple domains: from adapting Transformer architectures for computer vision tasks (as demonstrated in my work on historical document conservation) to applying Graph Neural Networks for representation learning.
Spatial Contrastive Pre-Training of Transformer Encoders for sEEG-based Seizure Onset Zone Detection
Zacharie Rodière,
Pierre Borgnat,
Paulo Gonçalves
Graph Signal Processing (GSP) Workshop, MILA, 2025
Workshop Page / Extended Abstract / Poster / Internship Report
Leveraging clinically-informed time-frequency features and spatial contrastive pre-training within a Transformer encoder for improved Seizure Onset Zone (SOZ) localization from stereo-EEG.
A deep learning-based pipeline for the conservation assessment of bindings in archives and libraries
Valérie Lee-Gouet,
Yamoun Lahcen,
Zacharie Rodière,
Camille Simon Chane,
Michel Jordan,
Julien Longhi,
David Picard
Multimedia Tools and Applications, 2025 (Published Online)
[DOI]
Developed and evaluated a Vision Transformer-based system for multi-label classification of defects on historical book bindings to aid conservation efforts.