Cian Eastwood

I'm a 6th-year PhD candidate in Machine Learning, co-advised by Chris Williams at the University of Edinburgh and Bernhard Schölkopf at the Max Planck Institute for Intelligent Systems in Tübingen.

My research seeks to prepare models for (and adapt models to) new contexts. This includes self-supervised learning, representation learning, fine-tuning, in-context learning and out-of-distribution generalization.

During my PhD, I have spent time at Google DeepMind, Meta AI and Spotify Research.

Selected Publications
GIVT: Generative Infinite-Vocabulary Transformers

M Tschannen, C Eastwood, F Mentzer

Preprint 2023

We introduce generative transformers that operate on sequences of real-valued vectors rather than discrete tokens from a finite vocabulary.

Spuriosity Didn't Kill the Classifier: Using Invariant Predictions to Harness Spurious Features

C Eastwood*, S Singh*, A Nicolicioiu, M Vlastelica, J von Kügelgen, B Schölkopf

NeurIPS 2023

Code

We show how a weak-but-stable training signal can be used to harness complementary spurious features, boosting performance.

(Previously @ ICML 2023 Spurious Correlations Workshop)

Probable Domain Generalization via Quantile Risk Minimization

C Eastwood*, A Robey*, S Singh, J von Kügelgen, H Hassani, G J Pappas, B Schölkopf

NeurIPS 2022

Code / Video

We learn predictors that generalize with a desired probability and argue for better evaluation protocols in domain generalization.

Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration

C Eastwood*, I Mason*, CKI Williams, B Schölkopf

ICLR 2022 (Spotlight)

Code

We address "measurement shift" (e.g., a new hospital scanner) by restoring the source model's features rather than learning new ones.

A Framework for the Quantitative Evaluation of Disentangled Representations

C Eastwood, CKI Williams

ICLR 2018

Code

We propose the DCI (Disentanglement, Completeness and Informativeness) framework for evaluating "disentangled" representations.

(Previously Spotlight @ NeurIPS 2017 Disentanglement Workshop)

Other Publications
Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations

C Eastwood, J von Kügelgen, L Ericsson, D Bouchacourt, P Vincent, B Schölkopf, M Ibrahim

Preprint 2023

We use data augmentations to disentangle style features rather than discard them.

(Previously @ NeurIPS 2023 workshops on Self-Supervised Learning and Causal Representation Learning)

DCI-ES: An Extended Disentanglement Framework with Connections to Identifiability

C Eastwood*, A Nicolicioiu*, J von Kügelgen*, A Kekić, F Träuble, A Dittadi, B Schölkopf

ICLR 2023

Code

We extend the DCI framework by quantifying the ease-of-use or explicitness of a representation.

(Previously @ UAI 2022 Causal Repr. Learning Workshop)

Align-Deform-Subtract: An Interventional Framework for Explaining Object Differences

C Eastwood*, N Li*, CKI Williams

ICLR 2022 Workshop: Objects, Structure and Causality

We propose a framework for explaining object-image differences in terms of the underlying object properties.

Unit-Level Surprise in Neural Networks

C Eastwood*, I Mason*, CKI Williams

NeurIPS 2021 Workshop: I Can't Believe It's Not Better (Spotlight & Didactic Award) and PMLR

Code / Video

We use surprising unit-level activations to determine which parameters to adapt for a given distribution shift.

Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views

N Li, C Eastwood, RB Fisher

NeurIPS 2020 (Spotlight)

Code / Video

We learn accurate, object-centric representations of 3D scenes by aggregating information from multiple 2D views/observations.

Website source