We identify, test and disseminate established and emerging techniques in machine learning in order to provide practitioners with the best tools for their applications.
What we do
Software
Open-source paper implementations and code libraries, developed during our research and in support of our work.
Research feed
Read our paper pills: summaries of publications, talks or new software, with context and an explanation of the main ideas and key results.
Trainings
Our hands-on, one to three day workshops cover fundamental and advanced topics in machine learning.
Our blog
We publish introductory posts for beginners and paper digests for experienced practitioners who are short on time.
Our areas of interest
Safety and reliability in ML systems
Efficient ML
Trustworthy and interpretable ML
Advances and fundamentals in ML
Our latest work
Training
Verifying Systems in the Face of Uncertainty
A 1-day workshop introducing probabilistic model checking and its applications, using the Storm library.
Blog
Applications of data valuation in machine learning
At TransferLab we have extensively covered existing and emerging methods for data valuation, the task of attributing value to samples in a …
Blog
FLAME 2023: Diving into the Future of Fluid Dynamics and Machine Learning
At the Stanford FLAME AI Workshop 2023, I immersed myself in the intersection of machine learning and fluid dynamics, benefiting from …
Blog
AutoDev: Exploring Custom LLM-Based Coding Assistance Functions
We explore the potential of custom code assistant functions based on large language models (LLMs). With our open-source software package …
Training
Safe and efficient deep reinforcement learning
This 2-day dive into deep RL for decision-making and control is suitable for engineers who want to solve real-world control problems with efficient …
Research feed: paper pills
DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models
Accurately computing influence functions involves solving inverse Hessian problems, a challenging task as the parameter count increases, …
Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value
The out-of-bag (OOB) error estimate is a scalable approach to data valuation. Unlike marginal contribution methods, Data-OOB can leverage …
Exploiting past success in Off-Policy Actor-Critic
Extracting knowledge from previously gathered data is the very core of exploitation in reinforcement learning (RL). During training, a …
Studying Large Language Model Generalization with Influence Functions
Influence functions are a tool to quantify the impact of each training sample on a model’s predictions, thereby assisting in the …