All sources cited or reviewed
This is a list of all sources we have used in the TransferLab, with links to the referencing content and metadata such as accompanying code, videos, etc. If you think we should look at something, drop us a line.
References
[Kar20M]
Model-Agnostic Counterfactual Explanations for Consequential Decisions,
[Pru20E]
Estimating Training Data Influence by Tracing Gradient Descent,
[Won20W]
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations,
[Wan20U]
Understanding and mitigating gradient pathologies in physics-informed neural networks,
[Nac20R]
Reinforcement Learning via Fenchel-Rockafellar Duality,
[Sal20C]
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks,
[Zie20F]
Fine-Tuning Language Models from Human Preferences,
[Che20A]
Adaptive basis construction and improved error estimation for parametric nonlinear dynamical systems,
[Cho20P]
Probabilistic Circuits: A Unifying Framework for Tractable Probabilistic Models,
[Cra20D]
Discovering symbolic models from deep learning with inductive biases,
[Fel20W]
What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation,
[Fuk20L]
Limitations of physics informed machine learning for nonlinear two-phase transport in porous media,
[Gu20H]
HiPPO: Recurrent Memory with Optimal Polynomial Projections,
[Hat20F]
Faster AutoAugment: Learning Augmentation Strategies Using Backpropagation,
[Kir20W]
Why normalizing flows fail to detect out-of-distribution data, NeurIPS 2020,
[Muk20C]
Calibrating deep neural networks using focal loss,
[Tao20M]
Measuring robustness to natural distribution shifts in image classification,
[Vel20G]
Graph Attention Networks,
[Wan20P]
A Principled Approach to Data Valuation for Federated Learning,
[Wil20B]
Bayesian deep learning and a probabilistic perspective of generalization,
[Yoo20D]
Data Valuation using Reinforcement Learning,
[Agr19D]
Differentiable convex optimization layers,
[Pas19P]
PyTorch: An Imperative Style, High-Performance Deep Learning Library,
[Pos19S]
Sampling-free Epistemic Uncertainty Estimation Using Approximated Variance Propagation,