All sources cited or reviewed
This is a list of all sources we have used in the TransferLab, with links to the content that references them and to associated material such as accompanying code and videos. If you think we should look at something, drop us a line.
References
[Igl19G] Generalization in reinforcement learning with selective noise injection and information bottleneck
[Bak19D] DADI: Dynamic Discovery of Fair Information with Adversarial Reinforcement Learning
[Bak19F] On Fairness in Budget-Constrained Decision Making
[And16L] Learning to learn by gradient descent by gradient descent
[Edu18U] Understanding Back-Translation at Scale
[Kar21S] A Style-Based Generator Architecture for Generative Adversarial Networks
[Nag17N] Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning
[Rad17D] Data Distillation: Towards Omni-Supervised Learning
[Rib16W] "Why Should I Trust You?": Explaining the Predictions of Any Classifier
[Kle17F] Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets
[Kra17C] The Case for Learned Index Structures
[Sha17O] Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
[Bis16G] A general framework for updating belief distributions
[Bud21C] On Correctness, Precision, and Performance in Quantitative Verification
[Can22I] Investigating the Impact of Model Misspecification in Neural Simulation-based Inference
[Gao23G] Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation
[Gef23C] Compositional Score Modeling for Simulation-Based Inference
[Gir22C] CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness
[Kat19M] The Marabou Framework for Verification and Analysis of Deep Neural Networks
[Kat17R] Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
[Laf01C] Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data
[Lun17U] A Unified Approach to Interpreting Model Predictions
[Mul22T] The third international verification of neural networks competition (VNN-COMP 2022): Summary and results
[Nau21N] Neural Prototype Trees for Interpretable Fine-Grained Image Recognition