All sources cited or reviewed
This is a list of all the sources we have used in the TransferLab, with links to the content that references them and to associated metadata, such as accompanying code and videos. If you think we should look at something, drop us a line.
References
[Pan21D] Deep Learning for Anomaly Detection: A Review
[Mar21P] Parametric Complexity Bounds for Approximating PDEs with Neural Networks
[Hul21A] Aleatoric and Epistemic Uncertainty in Machine Learning: An Introduction to Concepts and Methods
[Lu21L] Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators
[Bod21B] Benchmarking and Survey of Explanation Methods for Black Box Models
[Jac21H] How machine-learning recommendations influence clinician treatment selections: The example of antidepressant selection
[Shi21E] The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI
[Van21E] Evaluating XAI: A comparison of rule-based and example-based explanations
[Bur21S] A Survey on the Explainability of Supervised Machine Learning
[Son21S] Score-Based Generative Modeling through Stochastic Differential Equations
[Cho21U] On the use of simulation in robotics: Opportunities, challenges, and suggestions for moving forward
[Um21S] Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers
[Lu21L] lululxvi/deepxde
[Pap21N] Normalizing flows for probabilistic modeling and inference
[Okh21M] A Multilinear Sampling Algorithm to Estimate Shapley Values
[Aga21D] Deep Reinforcement Learning at the Edge of the Statistical Precipice
[Alu21D] Does explainable artificial intelligence improve human decision-making?
[Bud21C] On Correctness, Precision, and Performance in Quantitative Verification
[Car21E] Emerging Properties in Self-Supervised Vision Transformers
[Che21E] Exploring Simple Siamese Representation Learning
[Dha21D] Diffusion Models Beat GANs on Image Synthesis
[Dwi21G] Graph Neural Networks with Learnable Structural and Positional Representations
[Fry21S] Shapley Values for Feature Selection: The Good, the Bad, and the Axioms
[Gib21A] Adaptive Conformal Inference Under Distribution Shift
[Izm21W] What Are Bayesian Neural Network Posteriors Really Like?
[Jia21S] Scalability vs. Utility: Do We Have To Sacrifice One for the Other in Data Importance Quantification?
[Kam21E] Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning
[Kre21R] Rethinking Graph Transformers with Spectral Attention
[Kum21S] Shapley Residuals: Quantifying the limits of the Shapley value for explanations