All sources cited or reviewed
This is a list of all sources we have used in the TransferLab, with links to the content that references them and to associated metadata such as accompanying code, videos, etc. If you think we should look at something, drop us a line.
References
[Kol23S]
Towards a statistical theory of data selection under weak supervision,
[Fis23S]
Statistical and Computational Guarantees for Influence Diagnostics,
[Zha23B]
BelNet: basis enhanced learning, a mesh-free neural operator,
[Ras23W]
WeatherBench 2: A benchmark for the next generation of data-driven global weather models,
[Gro23S]
Studying Large Language Model Generalization with Influence Functions,
[Zha23S]
A Survey of Data Pricing for Data Marketplaces,
[Def23L]
Learning-rate-free learning by D-Adaptation,
[Wil23F]
Flow Matching for Scalable Simulation-Based Inference,
[Ruh23G]
Geometric Clifford Algebra Networks,
[Bol23A]
Advancing Methods and Applicability of Simulation-Based Inference in Neuroscience,
[Ton23I]
Improving and generalizing flow-based generative models with minibatch optimal transport,
[Gef23C]
Compositional Score Modeling for Simulation-Based Inference,
[Kwo23D]
Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value,
[Wei23G]
Graphically Structured Diffusion Models,
[Bi23A]
Accurate medium-range global weather forecasting with 3D neural networks,
[Sch23D]
Data Selection for Fine-tuning Large Language Models Using Transferred Shapley Values,
[Mas23R]
The rise of machine learning in weather forecasting,
[Mis23P]
Prodigy: An expeditiously adaptive parameter-free learner,
[Ji23S]
Seizing Serendipity: Exploiting the Value of Past Success in Off-Policy Actor-Critic,
[Zhu23F]
Fine-Tuning Language Models with Advantage-Induced Policy Alignment,
[Eim23H]
Hyperparameters in Reinforcement Learning and How To Tune Them,
[Raf23D]
Direct Preference Optimization: Your Language Model is Secretly a Reward Model,
[Glo23A]
Adversarial robustness of amortized Bayesian inference,
[Liu23S]
Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training,
[Yao23T]
Tree of Thoughts: Deliberate Problem Solving with Large Language Models,
[Her23I]
Impact of explainable AI on cognitive load: Insights from an empirical study,
[Fis23I]
Influence Diagnostics under Self-concordance,