Preprint

Statistical Inference with Stochastic Gradient Algorithms

Stochastic gradient algorithms are widely used for large-scale learning and inference problems. However, their use in practice is typically guided by heuristics and trial and error rather than rigorous, generalizable theory. We take a step toward …
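As a minimal illustration of the setting (not the paper's construction), the following sketch runs SGD with Polyak–Ruppert iterate averaging on a linear regression problem; the averaged iterate is the standard object whose asymptotic normality underlies statistical inference with stochastic gradient methods. The parameter values and problem are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: SGD with Polyak-Ruppert iterate averaging on
# linear regression. The averaged iterate concentrates around the true
# parameter, the basis for SGD-based confidence statements.
rng = np.random.default_rng(0)
n, d = 5000, 2
theta_star = np.array([1.0, -0.5])          # true parameter (illustrative)
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.1 * rng.normal(size=n)

step = 0.05                                 # small constant step size
theta = np.zeros(d)
avg = np.zeros(d)
for t in range(n):
    grad = (X[t] @ theta - y[t]) * X[t]     # stochastic gradient of 0.5*(X[t] @ theta - y[t])**2
    theta -= step * grad
    avg += (theta - avg) / (t + 1)          # running (Polyak-Ruppert) average

# `avg` is close to theta_star; its fluctuations are what inference targets.
```

The running average smooths out the oscillation of the raw iterates around the optimum, which is why inference is built on `avg` rather than on `theta` itself.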

Relaxing the IID Assumption: Adaptively Minimax Optimal Regret via Root-Entropic Regularization

We consider sequential prediction with expert advice when data are generated from distributions varying arbitrarily within an unknown constraint set. We quantify relaxations of the classical i.i.d. assumption in terms of these constraint sets, with …
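For readers unfamiliar with the base setting, here is a minimal sketch of classical prediction with expert advice via exponential weights (the i.i.d.-free baseline the abstract relaxes); the root-entropic regularization itself is not reproduced, and the loss sequence and tuning are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: exponential-weights (Hedge) prediction with
# expert advice. Regret against the best expert is O(sqrt(T log K))
# for any bounded loss sequence, with no i.i.d. assumption.
rng = np.random.default_rng(1)
T, K = 1000, 3
losses = rng.uniform(size=(T, K))          # arbitrary losses in [0, 1]
eta = np.sqrt(2 * np.log(K) / T)           # standard tuned learning rate

w = np.ones(K) / K                         # uniform prior over experts
learner_loss = 0.0
for t in range(T):
    learner_loss += w @ losses[t]          # expected loss of the mixture
    w *= np.exp(-eta * losses[t])          # multiplicative-weights update
    w /= w.sum()

regret = learner_loss - losses.sum(axis=0).min()
# regret stays within the O(sqrt(T log K)) worst-case bound
```

The guarantee holds pointwise over loss sequences, which is the sense in which such bounds sidestep distributional assumptions entirely; the abstract's constraint sets interpolate between this worst case and the i.i.d. case.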

Optimal Scaling and Shaping of Random Walk Metropolis via Diffusion Limits of Block-IID Targets

This work extends Roberts et al. (1997) by considering diffusion limits of Random Walk Metropolis (RWM) applied to block-IID target distributions, with corresponding block-independent proposals. The extension verifies the robustness of the optimal scaling …
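The classical result being extended can be sketched concretely: under the Roberts et al. (1997) scaling, RWM on a product target accepts roughly 23.4% of proposals as the dimension grows. The sketch below assumes a standard-normal IID target, which is the simplest instance of that setting, not the block-IID case treated in the paper.

```python
import numpy as np

# Illustrative sketch: Random Walk Metropolis on an IID standard-normal
# target in dimension d, with proposal scale 2.38 / sqrt(d) (Roberts et
# al., 1997). The acceptance rate approaches ~0.234 as d grows.
rng = np.random.default_rng(2)
d, n_steps = 50, 20000
x = rng.normal(size=d)                     # start at a draw from the target
log_pi = lambda z: -0.5 * z @ z            # log-density of N(0, I_d), up to a constant
sigma = 2.38 / np.sqrt(d)                  # asymptotically optimal scale

accepts = 0
for _ in range(n_steps):
    prop = x + sigma * rng.normal(size=d)
    if np.log(rng.uniform()) < log_pi(prop) - log_pi(x):
        x = prop
        accepts += 1

accept_rate = accepts / n_steps            # near the optimal 0.234 for large d
```

Tuning to an acceptance rate near 0.234 is the practical rule of thumb this diffusion-limit theory justifies; the paper's block-IID extension probes how robust that prescription is beyond fully independent coordinates.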