Leveraging time irreversibility with order-contrastive pre-training (Conference Paper)


Authors: Agrawal, M.; Lang, H.; Offin, M.; Gazit, L.; Sontag, D.
Title: Leveraging time irreversibility with order-contrastive pre-training
Conference Title: 25th International Conference on Artificial Intelligence and Statistics (AISTATS 2022)
Abstract: Label-scarce, high-dimensional domains such as healthcare present a challenge for modern machine learning techniques. To overcome the difficulties posed by a lack of labeled data, we explore an "order-contrastive" method for self-supervised pre-training on longitudinal data. We sample pairs of time segments, switch the order for half of them, and train a model to predict whether a given pair is in the correct order. Intuitively, the ordering task allows the model to attend to the least time-reversible features (for example, features that indicate progression of a chronic disease). The same features are often useful for downstream tasks of interest. To quantify this, we study a simple theoretical setting where we prove a finite-sample guarantee for the downstream error of a representation learned with order-contrastive pre-training. Empirically, in synthetic and longitudinal healthcare settings, we demonstrate the effectiveness of order-contrastive pre-training in the small-data regime over supervised learning and other self-supervised pre-training baselines. Our results indicate that pre-training methods designed for particular classes of distributions and downstream tasks can improve the performance of self-supervised learning.
Journal Title: Proceedings of Machine Learning Research
Volume: 151
Conference Dates: 2022 Mar 28-30
Conference Location: Virtual
ISSN: 2640-3498
Publisher: Journal of Machine Learning Research
Date Published: 2022-01-01
Start Page: 2330
End Page: 2353
Language: English
ACCESSION: WOS:000828072702017
PROVIDER: wos