You may also find my recent articles on Google Scholar.
*: equal contribution, †: co-corresponding authors
Hybrid generative-contrastive representation learning
Saehoon Kim, Sungwoong Kim, Juho Lee
Learning to pool in graph neural networks for extrapolation
Jihoon Ko, Taehyung Kwon, Kijung Shin, Juho Lee
Deep amortized clustering
Juho Lee, Yoonho Lee, Yee Whye Teh
A preliminary version of this work has been accepted to NeurIPS 2019 Sets & Partitions workshop as an oral presentation.
Adaptive network sparsification with dependent variational beta-Bernoulli dropout
Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang
Over-parameterised shallow neural networks with asymmetrical node scaling: global convergence guarantees and feature learning
François Caron, Fadhel Ayed, Paul Jung, Hoil Lee, Juho Lee, Hongseok Yang
Transactions on Machine Learning Research, 2025
Deep neural networks with dependent weights: Gaussian process mixture limit, heavy tails, sparsity and compressibility
Hoil Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron
Journal of Machine Learning Research, September 2023
A unified construction for series representations and finite approximations of completely random measures
Juho Lee, Xenia Miscouridou, François Caron
Bernoulli, August 2023
The Normal-Generalised Gamma-Pareto process: A novel pure-jump Lévy process with flexible tail and jump-activity properties
Fadhel Ayed, Juho Lee, François Caron
Bayesian Analysis, December 2022
Benefits of stochastic weight averaging in developing neural network radiation scheme for numerical weather prediction
Hwan-Jin Song, Soonyoung Roh, Juho Lee, Giung Nam, Eunggu Yun, Jongmin Yoon, Park Sa Kim
Journal of Advances in Modeling Earth Systems, October 2022
Stochastic optimal control for continuous-time fMRI representation learning
Joonhyeong Park*, Byoungwoo Park*, Chang-Bae Bang, Jungwon Choi, Hyungjin Chung, Byung-Hoon Kim, Juho Lee
To appear in ICLR 2026
Soft equivariance regularization for invariant self-supervised learning
Joohyung Lee, Changhun Kim, Hyunsu Kim, Kwanhyung Lee, Juho Lee
To appear in ICLR 2026
ForestPersons: a large-scale dataset for under-canopy missing person detection
Deokyun Kim, Jeongjun Lee, Jungwon Choi, Jonggeon Park, Giyoung Lee, Yookyung Kim, Myungseok Ki, Juho Lee, Jihun Cha
To appear in ICLR 2026
Cost-sensitive freeze-thaw Bayesian optimization for efficient hyperparameter tuning
Dong Bok Lee, Aoxuan Silvia Zhang, Byungjoo Kim, Junhyeon Park, Steven Adriaensen, Juho Lee, Sung Ju Hwang, Hae Beom Lee
NeurIPS 2025
PANGEA: projection-based augmentation with non-relevant general data for enhanced domain adaptation in LLMs
Seungyoo Lee, Giung Nam, Moonseok Choi, Hyungi Lee†, Juho Lee†
NeurIPS 2025
Axial neural networks for dimension-free foundation models
Hyunsu Kim, Jonggeon Park, Joan Bruna, Hongseok Yang, Juho Lee
NeurIPS 2025 (spotlight)
Test time scaling for neural processes
Hyungi Lee, Moonseok Choi, Hyunsu Kim, Kyunghyun Cho, Rajesh Ranganath, Juho Lee
NeurIPS 2025
FedSVD: adaptive orthogonalization for private federated learning with LoRA
Seanie Lee, Sangwoo Park, Dong Bok Lee, Dominik Wagner, Haebin Seong, Tobias Bocklet, Juho Lee, Sung Ju Hwang
NeurIPS 2025
Reliable decision-making via calibrated retrieval-augmented generation
Chaeyun Jang, Deukhwan Cho, Seanie Lee, Hyungi Lee†, Juho Lee†
NeurIPS 2025
Compact memory for continual learning
Yohan Jung, Hyungi Lee, Wenlong Chen, Thomas Möllenhoff, Yingzhen Li, Juho Lee, Mohammad Emtiyaz Khan
NeurIPS 2025
Bayesian neural scaling laws extrapolation with prior-fitted networks
Dongwoo Lee, Dong Bok Lee, Steven Adriaensen, Juho Lee, Sung Ju Hwang, Frank Hutter, Seon Joo Kim, and Hae Beom Lee
ICML 2025
Ensemble distribution distillation via flow matching
Jonggeon Park, Giung Nam, Hyunsu Kim, Jongmin Yoon, and Juho Lee
ICML 2025
Active learning with selective time-step acquisition for PDEs
Yegon Kim, Hyunsu Kim, Gyeonghoon Ko, and Juho Lee
ICML 2025
StarFT: robust fine-tuning of zero-shot models via spuriosity alignment
Younghyun Kim*, Jongheon Jeong*, Sangkyung Kwak, Kyungmin Lee, Juho Lee and Jinwoo Shin
IJCAI 2025
Parameter expanded stochastic gradient Markov chain Monte Carlo
Hyunsu Kim, Giung Nam, Chulhee Yun, Hongseok Yang, and Juho Lee
ICLR 2025
Dimension agnostic neural processes
Hyungi Lee, Chaeyun Jang, Dong Bok Lee, and Juho Lee
ICLR 2025
Variational Bayesian pseudo-coreset
Hyungi Lee, Seungyoo Lee, and Juho Lee
ICLR 2025
Amortized control of continuous state space Feynman-Kac model for irregular time series
Byoungwoo Park, Hyungi Lee, and Juho Lee
ICLR 2025 (oral presentation)
Learning diverse attacks on large language models for robust red-teaming and safety tuning
Seanie Lee, Minsu Kim, Lynn Cherif, David Dobre, Juho Lee, Sung Ju Hwang, Kenji Kawaguchi, Gauthier Gidel, Yoshua Bengio, Nikolay Malkin, and Moksh Jain
ICLR 2025
HarmAug: effective data augmentation for knowledge distillation of safety guard models
Seanie Lee, Haebin Seong, Dong Bok Lee, Minki Kang, Xiaoyin Chen, Dominik Wagner, Yoshua Bengio, Juho Lee, and Sung Ju Hwang
ICLR 2025
Model fusion through Bayesian optimization in language model fine-tuning
Chaeyun Jang*, Hyungi Lee*, Jungtaek Kim†, and Juho Lee†
NeurIPS 2024 (spotlight)
Ex Uno Pluria: insights on ensembling in low precision number systems
Giung Nam, Juho Lee
NeurIPS 2024
Learning infinitesimal generators of continuous symmetries from data
Gyeonghoon Ko, Hyunsu Kim, Juho Lee
NeurIPS 2024
Stochastic optimal control for diffusion bridges in function spaces
Byoungwoo Park, Jungwon Choi, Sungbin Lim†, Juho Lee†
NeurIPS 2024
Safeguard text-to-image diffusion models with human feedback inversion
Sanghyun Kim, Seohyeon Jung, Balhae Kim, Moonseok Choi, Jinwoo Shin, Juho Lee
ECCV 2024
Variational partial group convolutions for input-aware partial equivariance of rotations and color-shifts
Hyunsu Kim, Yegon Kim, Hongseok Yang, and Juho Lee
ICML 2024
A simple early exiting framework for accelerated sampling in diffusion models
Taehong Moon, Moonseok Choi, EungGu Yun, Jongmin Yoon, Gayoung Lee, Jaewoong Cho, and Juho Lee
ICML 2024
Learning to explore for stochastic gradient MCMC
Seunghyun Kim, Seohyeon Jung, Seonghyeon Kim, and Juho Lee
ICML 2024
Fast ensembling with diffusion Schrödinger bridge
Hyunsu Kim, Jongmin Yoon, and Juho Lee
ICLR 2024
Sparse weight averaging with multiple particles for iterative magnitude pruning
Moonseok Choi, Hyungi Lee, Giung Nam, and Juho Lee
ICLR 2024
Lipsum-FT: robust fine-tuning of zero-shot models using random text guidance
Giung Nam, Byeongho Heo, and Juho Lee
ICLR 2024
Enhancing transfer learning with flexible nonparametric posterior sampling
Hyungi Lee, Giung Nam, Edwin Fong, and Juho Lee
ICLR 2024
Self-supervised dataset distillation for transfer learning
Dong Bok Lee, Seanie Lee, Joonho Ko, Kenji Kawaguchi, Juho Lee, and Sung Ju Hwang
ICLR 2024
Spear and shield: adversarial attacks and defense methods for model-based link prediction on continuous-time dynamic graphs
Dongjin Lee, Juho Lee, and Kijung Shin
AAAI 2024
Function Space Bayesian pseudocoreset for Bayesian neural networks
Balhae Kim, Hyungi Lee, Juho Lee
NeurIPS 2023
Probabilistic imputation for time-series classification with missing data
SeungHyun Kim*, Hyunsu Kim*, Eunggu Yun*, Hwangrae Lee, Jaehun Lee, Juho Lee
ICML 2023
Traversing between modes in function space for fast ensembling
Eunggu Yun*, Hyungi Lee*, Giung Nam*, Juho Lee
ICML 2023
Regularizing towards soft equivariance under mixed symmetries
Hyunsu Kim, Hyungi Lee, Hongseok Yang, Juho Lee
ICML 2023
Scalable set encoding with universal mini-batch consistency and unbiased full set gradient approximation
Jeffrey Ryan Willette*, Seanie Lee*, Bruno Andreis, Kenji Kawaguchi, Juho Lee, Sung Ju Hwang
ICML 2023
Martingale posterior neural processes
Hyungi Lee, Eunggu Yun, Giung Nam, Edwin Fong, Juho Lee
ICLR 2023 (Notable top 25%)
Decoupled training for long-tailed classification with stochastic representations
Giung Nam*, Sunguk Jang*, Juho Lee
ICLR 2023
A simple yet powerful deep active learning with snapshot ensembles
Seohyeon Jung*, Sanghyun Kim*, Juho Lee
ICLR 2023
Self-distillation for further pre-training of transformers
Seanie Lee, Minki Kang, Juho Lee, Sung Ju Hwang, Kenji Kawaguchi
ICLR 2023
Exploring the role of mean teachers in self-supervised masked auto-encoders
Youngwan Lee*, Jeffrey Ryan Willette*, Jonghee Kim, Juho Lee, Sung Ju Hwang
ICLR 2023
On divergence measures for Bayesian pseudocoresets
Balhae Kim, Jungwon Choi, Seanie Lee, Yoonho Lee, Jung-Woo Ha, Juho Lee
NeurIPS 2022
Set-based meta-interpolation for few-task meta-learning
Seanie Lee, Bruno Andreis, Kenji Kawaguchi, Juho Lee, Sung Ju Hwang
NeurIPS 2022
Improving ensemble distillation with weight averaging and diversifying perturbation
Giung Nam, Hyungi Lee, Byeongho Heo, Juho Lee
ICML 2022
Code
Set based stochastic subsampling
Bruno Andreis, Seanie Lee, A. Tuan Nguyen, Juho Lee, Eunho Yang, Sung Ju Hwang
ICML 2022
Scale mixtures of neural network Gaussian processes
Hyungi Lee, Eunggu Yun, Hongseok Yang, Juho Lee
ICLR 2022
Sequential Reptile: inter-task gradient alignment for multilingual learning
Seanie Lee, Hae Beom Lee, Juho Lee, Sung Ju Hwang
ICLR 2022
Meta learning low rank covariance factors for energy-based deterministic uncertainty
Jeffrey Ryan Willette, Hae Beom Lee, Juho Lee, Sung Ju Hwang
ICLR 2022
Diversity matters when learning from ensembles
Giung Nam*, Jongmin Yoon*, Yoonho Lee, Juho Lee
NeurIPS 2021
Code
Mini-batch consistent slot set encoder for scalable set encoding
Bruno Andreis, Jeffrey Ryan Willette, Juho Lee, Sung Ju Hwang
NeurIPS 2021
A multi-mode modulator for multi-domain few-shot classification
Yanbin Liu, Juho Lee, Linchao Zhu, Ling Chen, Humphrey Shi, Yi Yang
ICCV 2021
Adversarial purification with score-based generative models
Jongmin Yoon, Sung Ju Hwang, Juho Lee
ICML 2021
Code
Learning to perturb word embeddings for out-of-distribution QA
Seanie Lee, Minki Kang, Juho Lee, Sung Ju Hwang
ACL 2021
SetVAE: learning hierarchical composition for generative modeling of set-structured data
Jinwoo Kim, Jaehoon Yoo, Juho Lee, Seunghoon Hong
CVPR 2021
Code
Bootstrapping neural processes
Juho Lee*, Yoonho Lee*, Jungtaek Kim, Eunho Yang, Sung Ju Hwang, Yee Whye Teh
Code
NeurIPS 2020
Neural complexity measures
Yoonho Lee, Juho Lee, Sung Ju Hwang, Eunho Yang, Seungjin Choi
Code
NeurIPS 2020
Cost-effective interactive attention learning with neural attention processes
Jay Heo, Junhyeon Park, Hyewon Jeong, Kwang Joon Kim, Juho Lee, Eunho Yang, Sung Ju Hwang
ICML 2020
Deep mixed effect model using Gaussian processes: a personalized and reliable prediction for healthcare
Ingyo Chung, Saehoon Kim, Juho Lee, Sung Ju Hwang, Eunho Yang
AAAI 2020
Beyond the Chinese restaurant and Pitman-Yor processes: statistical models with double power-law behavior
Fadhel Ayed*, Juho Lee*, and François Caron
ICML 2019 (long oral presentation)
Set transformer: a framework for attention-based permutation-invariant neural networks
Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, and Yee Whye Teh
Code
ICML 2019
Learning to propagate labels: transductive propagation network for few-shot learning
Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang and Yi Yang
ICLR 2019
A Bayesian model for sparse graphs with flexible degree distribution and overlapping community structure
Juho Lee, Lancelot F. James, Seungjin Choi, and François Caron
AISTATS 2019 (oral presentation)
Code
Uncertainty-aware attention for reliable interpretation and prediction
Jay Heo*, Hae Beom Lee*, Saehoon Kim, Juho Lee, Kwang Joon Kim, Eunho Yang, and Sung Ju Hwang
NeurIPS 2018
Dropmax: adaptive variational softmax
Hae Beom Lee, Juho Lee, Saehoon Kim, Eunho Yang, and Sung Ju Hwang
NeurIPS 2018
Code
Bayesian inference on random simple graphs with power law degree distributions
Juho Lee, Creighton Heaukulani, Zoubin Ghahramani, Lancelot F. James, and Seungjin Choi
ICML 2017
Code
Finite-dimensional BFRY priors and variational Bayesian inference for power law models
Juho Lee, Lancelot F. James, and Seungjin Choi
NIPS 2016
Tree-guided MCMC inference for normalized random measure mixture models
Juho Lee and Seungjin Choi
NIPS 2015
Code
Bayesian hierarchical clustering with exponential family: small-variance asymptotics and reducibility
Juho Lee and Seungjin Choi
AISTATS 2015
Incremental tree-based inference with dependent normalized random measures
Juho Lee and Seungjin Choi
AISTATS 2014
Online video segmentation by Bayesian split-merge clustering
Juho Lee, Suha Kwak, Bohyung Han, and Seungjin Choi
ECCV 2012
Early exiting for accelerated inference in diffusion models
Taehong Moon, Moonseok Choi, EungGu Yun, Jongmin Yoon, Gayoung Lee, Juho Lee
ICML 2023 workshop on Structured Probabilistic Inference & Generative Modeling, 2023
Function space Bayesian pseudocoreset for Bayesian neural networks
Balhae Kim, Hyungi Lee, Juho Lee
ICML 2023 workshop on Structured Probabilistic Inference & Generative Modeling, 2023
Towards safe self-distillation of internet-scale text-to-image diffusion models
Sanghyun Kim, Seohyeon Jung, Balhae Kim, Moonseok Choi, Jinwoo Shin, Juho Lee
ICML 2023 Workshop on Challenges in Deployable Generative AI, 2023
Modeling uplift from observational time-series in continual scenarios
Sanghyun Kim, Jungwon Choi, NamHee Kim, Jaesung Ryu, Juho Lee
AAAI 2023 Bridge on Continual Causality (oral presentation), 2023
Fine-tuning diffusion models with limited data
Taehong Moon, Moonseok Choi, Gayoung Lee, Jung-Woo Ha, Juho Lee
NeurIPS 2022 Workshop on Score-Based Methods, 2022
Adaptive strategy for resetting a non-stationary Markov chain during learning via joint stochastic optimization
Hyunsu Kim, Juho Lee, Hongseok Yang
Third Symposium on Advances in Approximate Bayesian Inference, 2021
Towards deep amortized clustering
Juho Lee, Yoonho Lee, Yee Whye Teh
NeurIPS 2019 Sets & Partitions workshop (contributed talk), 2019
Graph embedding VAE: a permutation invariant model of graph structure
Tony Duan, Juho Lee
NeurIPS 2019 Graph Representation Learning workshop, 2019