Cian Eastwood
I'm a Senior Research Scientist at Valence Labs, building tools that enable scientific discovery from high-dimensional data.
My current research focuses on causal and multimodal generative models, self-supervised representation learning, and out-of-distribution generalization.
Previously, I did my PhD at the University of Edinburgh (with Chris Williams) and the Max Planck Institute for Intelligent Systems (with Bernhard Schölkopf). During my PhD, I spent time at Google DeepMind, Meta AI (FAIR), and Spotify.
GIVT: Generative Infinite-Vocabulary Transformers
M Tschannen, C Eastwood, F Mentzer
ECCV 2024
Code
We introduce generative transformers that operate on sequences of real-valued vectors rather than discrete tokens from a finite vocabulary.
Spuriosity Didn't Kill the Classifier: Using Invariant Predictions to Harness Spurious Features
C Eastwood*, S Singh*, A Nicolicioiu, M Vlastelica, J von Kügelgen, B Schölkopf
NeurIPS 2023
Code
We show how a weak-but-stable training signal can be used to harness complementary spurious features, boosting performance.
(Previously @ ICML 2023 Spurious Correlations Workshop)
Probable Domain Generalization via Quantile Risk Minimization
C Eastwood*, A Robey*, S Singh, J von Kügelgen, H Hassani, G J Pappas, B Schölkopf
NeurIPS 2022
Code / Video
We learn predictors that generalize with a desired probability and argue for better evaluation protocols in domain generalization.
Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration
C Eastwood*, I Mason*, CKI Williams, B Schölkopf
ICLR 2022 (Spotlight)
Code
We address "measurement shift" (e.g., a new hospital scanner) by restoring the same features rather than learning new ones.
A Framework for the Quantitative Evaluation of Disentangled Representations
C Eastwood, CKI Williams
ICLR 2018
Code
We propose the DCI framework for evaluating "disentangled" representations.
(Previously Spotlight @ NeurIPS 2017 Disentanglement Workshop)
Towards scientific discovery with dictionary learning: Extracting biological concepts from microscopy foundation models
K Donhauser, K Ulicna, GE Moran, A Ravuri, K Kenyon-Dean, C Eastwood, J Hartford
Preprint 2024
We extract biological concepts from microscopy foundation models using dictionary learning.
(Previously Oral @ NeurIPS 2024 Workshop on Interpretable AI)
Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations
C Eastwood, J von Kügelgen, L Ericsson, D Bouchacourt, P Vincent, B Schölkopf, M Ibrahim
Preprint 2023
Code
We use the structure of data augmentations to disentangle style information rather than discard it.
(Previously @ NeurIPS 2023 Workshops on Self-Supervised Learning and Causal Representation Learning)
DCI-ES: An Extended Disentanglement Framework with Connections to Identifiability
C Eastwood*, A Nicolicioiu*, J von Kügelgen*, A Kekić, F Träuble, A Dittadi, B Schölkopf
ICLR 2023
Code
We extend the DCI framework by quantifying the ease-of-use or explicitness of a representation.
(Previously @ UAI 2022 Causal Repr. Learning Workshop)
Align-Deform-Subtract: An Interventional Framework for Explaining Object Differences
C Eastwood*, N Li*, CKI Williams
ICLR 2022 Workshop: Objects, Structure and Causality
We propose a framework for explaining differences between object images in terms of the underlying object properties.
Unit-Level Surprise in Neural Networks
C Eastwood*, I Mason*, CKI Williams
NeurIPS 2021 Workshop: I Can't Believe it's Not Better (Spotlight & Didactic Award) and PMLR
Code / Video
We use surprising unit-level activations to determine which parameters to adapt for a given distribution shift.
Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views
N Li, C Eastwood, B Fisher
NeurIPS 2020 (Spotlight)
Code / Video
We learn accurate, object-centric representations of 3D scenes by aggregating information from multiple 2D views/observations.