All sources cited or reviewed
This is a list of all sources we have used in the TransferLab, with links to the content that references them and to metadata such as accompanying code, videos, etc. If you think we should look at something, drop us a line.
References
[Sah18L] Learning Equations for Extrapolation and Control.
[Che18L] Learning to Explain: An Information-Theoretic Perspective on Model Interpretation.
[Kim18I] Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV).
[Sha18S] On the Suitability of Lp-Norms for Creating and Preventing Adversarial Examples.
[Rib18A] Anchors: High-Precision Model-Agnostic Explanations.
[Li18M] Measuring the Intrinsic Dimension of Objective Landscapes.
[Rai18F] Forward-Backward Stochastic Neural Networks: Deep Learning of High-dimensional Partial Differential Equations.
[Zha18U] The Unreasonable Effectiveness of Deep Features as a Perceptual Metric.
[Pap18F] Fast $\epsilon$-free Inference of Simulation Models with Bayesian Conditional Density Estimation.
[E18D] The Deep Ritz method: A deep learning-based numerical algorithm for solving variational problems.
[Mad18D] Towards Deep Learning Models Resistant to Adversarial Attacks.
[Rag18C] Certified Defenses against Adversarial Examples.
[Gua18T] Test Error Estimation after Model Selection Using Validation Error.
[Con18W] Word Translation Without Parallel Data.
[Mcc18R] Robustly representing uncertainty in deep neural networks through sampling.
[Heu18G] GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium.
[Haa18aS] Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor.
[Rag18S] Semidefinite relaxations for certifying robustness to adversarial examples.
[Vol18F] Fast Dynamic Fault Tree Analysis by Model Checking Techniques.
[Adi18T] Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring.
[Ang18L] Learning Certifiably Optimal Rule Lists for Categorical Data.
[Dea18E] End-to-End Differentiable Physics for Learning and Control.
[Dom18I] Importance Weighting and Variational Inference.
[Geo18F] Fast Approximate Natural Gradient Descent in a Kronecker Factored Eigenbasis.
[Nes18L] Lectures on convex optimization.
[Ran18D] Deep State Space Models for Time Series Forecasting.
[Rit18S] A Scalable Laplace Approximation for Neural Networks.
[Sin18S] Stackelberg Security Games: Looking Beyond a Decade of Success.
[Wan18E] Exponentially Weighted Imitation Learning for Batched Historical Data.