All sources cited or reviewed
This is a list of all sources we have used in the TransferLab, with links to the referencing content and metadata, such as accompanying code, videos, etc. If you think we should look at something, drop us a line.
References
[Gri20B]
Bootstrap your own latent: A new approach to self-supervised Learning,
[Tej20S]
sbi: A toolkit for simulation-based inference,
[Nix20M]
Measuring Calibration in Deep Learning,
[Kol20H]
How to Exploit Structure while Solving Weighted Model Integration Problems,
[Peh20R]
Random Sum-Product Networks: A Simple and Effective Approach to Probabilistic Deep Learning,
[Wu20S]
Stronger and Faster Wasserstein Adversarial Attacks,
[Wan20W]
When and why PINNs fail to train: A neural tangent kernel perspective,
[Din20R]
Revisiting the Evaluation of Uncertainty Estimation and Its Application to Explore Model Complexity-Uncertainty Trade-Off,
[Bra20S]
Single Shot MC Dropout Approximation,
[Kri20B]
Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks,
[Rib20A]
Beyond Accuracy: Behavioral Testing of NLP Models with CheckList,
[And20W]
What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study,
[Shi20C]
On the convergence of physics informed neural networks for linear second-order elliptic and parabolic type PDEs,
[Maz20L]
Leveraging exploration in off-policy algorithms via normalizing flows,
[Kob20N]
Normalizing Flows: An Introduction and Review of Current Methods,
[Hu20I]
Improved Image Wasserstein Attacks and Defenses,
[Wan20L]
Less Is Better: Unweighted Data Subsampling via Influence Function,
[Qiu20Q]
Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel,
[Buc20P]
Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems,
[Mce20S]
Statistical Rethinking: A Bayesian Course with Examples in R and Stan,
[Kar20M]
Model-Agnostic Counterfactual Explanations for Consequential Decisions,
[Pru20E]
Estimating Training Data Influence by Tracing Gradient Descent,
[Won20W]
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations,
[Wan20U]
Understanding and mitigating gradient pathologies in physics-informed neural networks,
[Nac20R]
Reinforcement Learning via Fenchel-Rockafellar Duality,
[Sal20C]
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks,
[Zie20F]
Fine-Tuning Language Models from Human Preferences,
[Che20A]
Adaptive basis construction and improved error estimation for parametric nonlinear dynamical systems,