Accepted Papers


ID  Paper (title and authors)
5 [Oral] Bridging State and History Representations: Understanding Self-Predictive RL
Tianwei Ni, Benjamin Eysenbach, Erfan SeyedSalehi, Michel Ma, Clement Gehring, Aditya Mahajan, Pierre-Luc Bacon
24 [Oral] Learning Orthonormal Features in Self-Supervised Learning using Functional Maximal Correlation
Bo Hu, Yuheng Bu, Jose Principe
29 [Oral] Learning to Embed Time Series Patches Independently
Seunghan Lee, Taeyoung Park, Kibok Lee
52 [Oral] On the Varied Faces of Overparameterization in Supervised and Self-Supervised Learning
Matteo Gamba, Arna Ghosh, Kumar Krishna Agrawal, Blake Aaron Richards, Hossein Azizpour, Mårten Björkman
1 Benchmarking self-supervised video representation learning
Akash Kumar, Ashlesha Kumar, Vibhav Vineet, Yogesh Rawat
2 Adversarial perturbation based latent reconstruction for domain-agnostic self-supervised learning
Kuilin Chen, Sijie Tian, Chi-Guhn Lee
4 Exploring Target Representations for Masked Autoencoders
Xingbin Liu, Jinghao Zhou, Tao Kong
6 Augmentation-aware Self-Supervised Learning with Conditioned Projector
Marcin Przewięźlikowski, Mateusz Pyla, Bartosz Zieliński, Bartłomiej Twardowski, Jacek Tabor, Marek Śmieja
7 Does Unconstrained Unlabeled Data Help Semi-Supervised Learning?
Shuvendu Roy, Ali Etemad
8 SurgMAE: Masked Autoencoders for Long Surgical Video Analysis
Muhammad Abdullah Jamal, Omid Mohareri
9 The Triad of Failure Modes and a Possible Way Out
Emanuele Sansone
10 An Information-Theoretic Understanding of Maximum Manifold Capacity Representations
Berivan Isik, Rylan Schaeffer, Victor Lecomte, Mikail Khona, Yann LeCun, Sanmi Koyejo, Andrey Gromov, Ravid Shwartz-Ziv
11 Simple Contrastive Representation Learning for Time Series Forecasting
Xiaochen Zheng, Xingyu Chen, Manuel Schürch, Amina Mollaysa, Ahmed Allam, Michael Krauthammer
12 Language Model Training Paradigms for Clinical Feature Embeddings
Yurong Hu, Manuel Burger, Gunnar Ratsch, Rita Kuznetsova
13 Self-Distilled Representation Learning for Time Series
Felix Pieper, Konstantin Ditschuneit, Martin Genzel, Alexandra Lindt, Johannes Otterbach
14 MOFO: MOtion FOcused Self-Supervision for Video Understanding
Mona Ahmadian, Frank Guerin, Andrew Gilbert
15 Scaling may be all you need for achieving human-level object recognition with human-like visual experience
Emin Orhan
16 Self-Supervised Learning Meets Liver Ultrasound Imaging
Abder-Rahman Ali, Anthony Samir
17 No Free Lunch in Self Supervised Representation Learning
Ihab Bendidi, Adrien Bardes, Ethan Cohen, Alexis Lamiable, Guillaume Bollot, Auguste Genovesio
18 Exploring Data Augmentations on Self-/Semi-/Fully- Supervised Pre-trained Models
Shentong Mo, Zhun Sun, Chao Li
19 Augmentation matters: Representation learning for Strong Gravitational lensing
Kuan-Wei Huang, Po-Wen Chang, Joshua Fagin, James Chan, Joshua Yao-Yu Lin
20 Iterated Piecewise Affine (IPA) Approximation for Language Modeling
Davood Shamsi, Wen-yu Hua, Brian Williams
21 DAPO: Self-Supervised Domain Adaptation for 6DoF Pose Estimation
Juseong Jin, Eunju Jeong, Joonmyun Cho, Juni Park, Young-Gon Kim
22 Soft Contrastive Learning for Time Series
Seunghan Lee, Taeyoung Park, Kibok Lee
23 Visualizing the loss landscape of Self-supervised Vision Transformer
Youngwan Lee, Jeffrey Willette, Jonghee Kim, Sung Ju Hwang
26 HyperMAE: Modulating Implicit Neural Representations for MAE Training
Varun Belagali, Lei Zhou, Xiang Li, Dimitris Samaras
27 Making Self-supervised Learning Robust to Spurious Correlation via Learning-speed Aware Sampling
Weicheng Zhu, Sheng Liu, Carlos Fernandez-Granda, Narges Razavian
28 Ring Attention with Blockwise Transformers for Near-Infinite Context
Hao Liu, Matei Zaharia, Pieter Abbeel
30 Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation
Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoi-Rin Kim
31 Neurosymbolic Grounding for Compositional Generalization
Atharva Sehgal, Arya Grayeli, Jennifer Sun, Swarat Chaudhuri
32 Adaptive Resolution Loss: An Efficient and Effective Loss for Time Series Hierarchical Contrastive Self-Supervised Learning Framework
Kevin Garcia, Juan Manuel Perez Jr, Yifeng Gao
33 Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations
Cian Eastwood, Julius von Kügelgen, Linus Ericsson, Diane Bouchacourt, Pascal Vincent, Bernhard Schölkopf, Mark Ibrahim
34 On Improving the Sample Efficiency of Non-Contrastive SSL
Kumar Agrawal, Arna Ghosh, Adam Oberman, Blake Richards
35 Unsupervised Segmentation of Colonoscopy Images
Heming Yao, Jérôme Lüscher, Benjamin Gutierrez Becker, Josep Arús-Pous, Tommaso Biancalani, Amelie Bigorgne, David Richmond
36 WERank: Rank Degradation Prevention for Self-Supervised Learning via Weight Regularization
Ali Pasand, Reza Moravej, Mahdi Biparva, Ali Ghodsi
37 BarcodeBERT: Transformers for Biodiversity Analysis
Pablo Millan Arias, Niousha Sadjadi, Monireh Safari, ZeMing Gong, Austin Wang, Scott Lowe, Joakim Haurum, Iuliia Zarubiieva, Dirk Steinke, Lila Kari, Angel Chang, Graham Taylor
39 Self-supervised Learning for User Sequence Modeling
Yuhan Liu, Lin Ning, Neo Wu, Karan Singhal, Philip Andrew Mansfield, Devora Berlowitz, Bradley Green
40 Non-Vacuous Generalization Bounds for Large Language Models
Sanae Lotfi, Marc Finzi, Yilun Kuang, Tim Rudner, Micah Goldblum, Andrew Wilson
41 Evolving Graph Generalization Estimation via Self-Supervised Learning
Bin Lu, Tingyan Ma, Xiaoying Gan, Luoyi Fu, Xinbing Wang, Chenghu Zhou, Shiyu Liang
42 SAMCLR: Contrastive pre-training on complex scenes using SAM for view sampling
Benjamin Missaoui, Chongbin Yuan
43 Learning Beyond Similarities: Incorporating Dissimilarities between Positive Pairs in Self-Supervised Time Series Learning.
Adrian Atienza, Jakob E. Bardram, Sadasivan Puthusserypady
44 A Simple Framework for Self-Supervised Learning of Sample-Efficient World Models
Jan Robine, Marc Höftmann, Stefan Harmeling
45 Leveraging Uniformity of Normalized Embeddings for Sequential Recommendation
Hyunsoo Chung, Jungtaek Kim
46 MeSa: Masked, Geometric, and Supervised Pre-training for Monocular Depth Estimation
Muhammad Osama Khan, Junbang Liang, Chun-Kai Wang, Shan Yang, Yu Lou
47 Structuring Representation Geometry with Rotationally Equivariant Contrastive Learning
Sharut Gupta, Joshua Robinson, Derek Lim, Soledad Villar, Stefanie Jegelka
48 Improving Domain Generalization in Contrastive Learning Using Adaptive Temperature Control
Katie Matton, Robert A Lewis, Rosalind Picard, John Guttag
49 Uncertainty Quantification using Deep Ensembles for Safety-Critical Predictive Models
Oishi Deb, Emmanouil Benetos, Philip Torr
50 Multimodal Distillation of CLIP Models
Georgios Smyrnis, Sriram Ravula, Sujay Sanghavi, Alex Dimakis
51 LiDAR: Sensing Linear Probing Performance in Joint Embedding SSL Architectures
Vimal Thilak, Chen Huang, Omid Saremi, Laurent Dinh, Hanlin Goh, Preetum Nakkiran, Joshua M. Susskind, Etai Littwin
53 Bootstrap Your Own Variance
Polina Turishcheva, Jason Ramapuram, Sinead Williamson, Dan Busbridge, Eeshan Gunesh Dhekane, Russell Webb
54 Online Feature Updates Improve Online (Generalized) Label Shift Adaptation
Ruihan Wu, Siddhartha Datta, Yi Su, Dheeraj Baby, Yu-Xiang Wang, Kilian Q Weinberger
55 Enhancing CLIP with a Third Modality
Efthymios Tsaprazlis, Georgios Smyrnis, Alex Dimakis, Petros Maragos
56 Self-Supervised Pretraining for Improved Downstream Decoding of Audio-Evoked fMRI Sequences
Sean Paulsen, Michael Casey
58 Spectral Temporal Contrastive Learning
Sacha Morin, Somjit Nath, Samira Ebrahimi Kahou, Guy Wolf
59 Self-supervised Representation Learning from Random Data Projectors
Yi Sui, Tongzi Wu, Jesse C. Cresswell, Ga Wu, George Stein, Xiao Shi Huang, Xiaochen Zhang, Maksims Volkovs
60 Perceptual Group Tokenizer: Building Perception with Iterative Grouping
Zhiwei Deng, Ting Chen, Yang Li
61 Posterior Sampling on SimSiam: Rethinking Optimization in Siamese Self-Supervised Learning
Daniel De Mello, Ruqi Zhang, Bruno Ribeiro
62 MolSiam: Simple Siamese Self-supervised Representation Learning for Small Molecules
Joshua Yao-Yu Lin, Michael Maser, Nathan C. Frey, Gabriele Scalia, Omar Mahmood, Pedro O. Pinheiro, Ji Won Park, Stephen Ra, Andrew Martin Watkins, Kyunghyun Cho
63 Self-Supervised Image Captioning with CLIP
Chuanyang Jin
64 Generalization properties of contrastive world models
Kandan Ramakrishnan, R. James Cotton, Xaq Pitkow, Andreas S. Tolias
65 CCA with Shared Weights for Self-Supervised Learning
James Chapman, Lennie Wells
66 Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery
Sarah Rastegar, Hazel Doughty, Cees G. M. Snoek
68 No Representation Rules Them All in Category Discovery
Sagar Vaze, Andrea Vedaldi, Andrew Zisserman
69 Generalized Category Discovery with Hierarchical Label Smoothing
Sarah Rastegar, Yuki M Asano, Hazel Doughty, Cees G. M. Snoek
70 Zero-shot Clustering of Embeddings with Self-Supervised Learnt Encoders
Scott C Lowe, Joakim Bruslund Haurum, Sageev Oore, Thomas B. Moeslund, Graham W. Taylor
71 FroSSL: Frobenius Norm Minimization for Self-Supervised Learning
Oscar Skean, Aayush Dhakal, Nathan Jacobs, Luis Gonzalo Sanchez Giraldo
72 Multi-Task Learning with Self-Supervised Objectives can Improve Worst-Group Outcomes
Atharva Kulkarni, Lucio M. Dery, Amrith Setlur, Aditi Raghunathan, Ameet Talwalkar, Graham Neubig
74 How does semi-supervised learning with pseudo-labelers work? A case study
Yiwen Kou, Zixiang Chen, Yuan Cao, Quanquan Gu
75 Understanding Self-Supervised Features for Learning Unsupervised Instance Segmentation
Paul Engstler, Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina
76 Application of Self Supervised Vision Transformers for Multiplexed Microscopy Images and Its Challenges
Gantugs Atarsaikhan, Isabel Mogollon, Katja Välimäki, Teijo Pellinen, Tuomas Mirtti, Lassi Paavolainen
77 Learning Useful Representations of Recurrent Neural Network Weight Matrices
Vincent Herrmann, Francesco Faccio, Jürgen Schmidhuber
78 Language-Conditioned Semantic Search-Based Policy for Robotic Manipulation Tasks
Jannik Sheikh, Andrew Melnik, Gora Chand Nandi, Robert Haschke
79 Can semi-supervised learning use all the data effectively? A lower bound perspective
Alexandru Tifrea, Gizem Yüce, Amartya Sanyal, Fanny Yang
80 Identifiable attribution maps using regularized contrastive learning
Steffen Schneider, Rodrigo González Laiz, Markus Frey, Mackenzie W Mathis