All sources cited or reviewed

This is a list of all sources we have used in the TransferLab, with links to the referencing content and to metadata such as accompanying code, videos, etc. If you think we should look at something, drop us a line.

### References

[Lam23G]

Learning skillful medium-range global weather forecasting,

[Wat23A]

Accelerated Shapley Value Approximation for Data Evaluation,

[Bar23R]

Representation Equivalent Neural Operators: a Framework for Alias-free Operator Learning,

[Maz23D]

DataPerf: Benchmarks for Data-Centric AI Development,

[Rao23C]

Convolutional Neural Operators for robust and accurate learning of PDEs,

[Wu23V]

Variance reduced Shapley value estimation for trustworthy data valuation,

[Geo23N]

NNGeometry,

[Kwo23D]

DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models,

[Fis23S]

Statistical and Computational Guarantees for Influence Diagnostics,

[Zha23B]

BelNet: basis enhanced learning, a mesh-free neural operator,

[Ras23W]

WeatherBench 2: A benchmark for the next generation of data-driven global weather models,

[Gro23S]

Studying Large Language Model Generalization with Influence Functions,

[Zha23S]

A Survey of Data Pricing for Data Marketplaces,

[Def23L]

Learning-rate-free learning by D-Adaptation,

[Ruh23G]

Geometric Clifford Algebra Networks,

[Bol23A]

Advancing Methods and Applicability of Simulation-Based Inference in Neuroscience,

[Kwo23D]

Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value,

[Bi23A]

Accurate medium-range global weather forecasting with 3D neural networks,

[Sch23D]

Data Selection for Fine-tuning Large Language Models Using Transferred Shapley Values,

[Mas23R]

The rise of machine learning in weather forecasting,

[Mis23P]

Prodigy: An Expeditiously Adaptive Parameter-Free Learner,

[Ji23S]

Seizing Serendipity: Exploiting the Value of Past Success in Off-Policy Actor-Critic,

[Eim23H]

Hyperparameters in Reinforcement Learning and How To Tune Them,

[Wil23F]

Flow Matching for Scalable Simulation-Based Inference,

[Uch23J]

Jump-Start Reinforcement Learning,

[Zha23O]

Optimal Sparse Regression Trees,

[Zhu23F]

Fine-Tuning Language Models with Advantage-Induced Policy Alignment,

[Lig23L]

Let's Verify Step by Step,

[Raf23D]

Direct Preference Optimization: Your Language Model is Secretly a Reward Model,

[Glo23A]

Adversarial robustness of amortized Bayesian inference,