<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Content feed of the TransferLab — appliedAI Institute</title><link>https://transferlab.ai/</link><description>All updates by the TransferLab team</description><generator>Hugo -- gohugo.io</generator><language>en-gb</language><copyright>appliedAI Institute for Europe gGmbH</copyright><atom:link href="https://transferlab.ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Agentic Engineering</title><link>https://transferlab.ai/trainings/agentic-engineering/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><author>Elena Hernandez Martinez</author><author>Tim Mensinger</author><author>Kristof Schröder</author><guid>https://transferlab.ai/trainings/agentic-engineering/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Learn practical agentic engineering with Claude Code: plan, steer, and verify AI-generated code changes through hands-on exercises covering context management, customization, and reliable coding-agent workflows.</description></item><item><title>Variational Inference-Based Adversarial Domain Adaptation</title><link>https://transferlab.ai/pills/2026/variational-inference-adversarial-domain-adaptation/</link><pubDate>Fri, 16 Jan 2026 00:00:00 +0000</pubDate><author>Elena Hernandez Martinez</author><guid>https://transferlab.ai/pills/2026/variational-inference-adversarial-domain-adaptation/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Deep learning models often fail when trained on one dataset and deployed on another, a problem known as &amp;lt;strong&amp;gt;domain shift&amp;lt;/strong&amp;gt;. 
The paper &amp;lt;em&amp;gt;Variational Inference-Based Adversarial Domain Adaptation (VIADA)&amp;lt;/em&amp;gt; &amp;lt;span class=&amp;#34;citation&amp;#34;&amp;gt;[&amp;lt;a tabindex=&amp;#34;0&amp;#34; class=&amp;#34;cite-label&amp;#34; data-bs-toggle=&amp;#34;popover&amp;#34; data-bs-trigger=&amp;#34;focus&amp;#34; data-bs-html=&amp;#34;true&amp;#34; data-bs-placement=&amp;#34;top&amp;#34; data-bs-content=&amp;#34;&amp;amp;lt;div class=&amp;amp;#34;citation-block&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;a href=&amp;amp;#34;/refs/zonoozi_variational_2024&amp;amp;#34; class=&amp;amp;#34;citation-link&amp;amp;#34;&amp;amp;gt;&amp;amp;lt;span class=&amp;amp;#34;citation-title&amp;amp;#34;&amp;amp;gt;Variational inference based adversarial domain adaptation&amp;amp;lt;/span&amp;amp;gt;,
&amp;amp;lt;span class=&amp;amp;#34;citation-authors&amp;amp;#34;&amp;amp;gt;Mahta Hassan Pour Zonoozi, Vahid Seydi, Mahmood Deypir.
&amp;amp;lt;/span&amp;amp;gt;&amp;amp;lt;span class=&amp;amp;#34;citation-publication&amp;amp;#34;&amp;amp;gt;Pattern Analysis and Applications&amp;amp;lt;/span&amp;amp;gt;&amp;amp;lt;span class=&amp;amp;#34;citation-date&amp;amp;#34;&amp;amp;gt;(2024)&amp;amp;lt;/span&amp;amp;gt;
&amp;amp;lt;/a&amp;amp;gt;&amp;amp;lt;ul class=&amp;amp;#34;citation-links&amp;amp;#34;&amp;amp;gt;&amp;amp;lt;li class=&amp;amp;#34;badge bg-light rounded-pill py-1&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;a href=&amp;amp;#34;https://doi.org/10.1007/s10044-024-01325-5&amp;amp;#34; target=&amp;amp;#34;_blank&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;i class=&amp;amp;#34;icon-file-text&amp;amp;#34; title=&amp;amp;#34;Publication&amp;amp;#34;&amp;amp;gt;&amp;amp;lt;/i&amp;amp;gt;
&amp;amp;lt;span class=&amp;amp;#34;citation-link-text&amp;amp;#34;&amp;amp;gt;Publication&amp;amp;lt;/span&amp;amp;gt;
&amp;amp;lt;/a&amp;amp;gt;
&amp;amp;lt;/li&amp;amp;gt;&amp;amp;lt;/ul&amp;amp;gt;&amp;amp;lt;/div&amp;amp;gt;&amp;#34;&amp;gt;Zon24V&amp;lt;/a&amp;gt;]&amp;lt;/span&amp;gt; proposes a method to address this issue by combining the probabilistic structure of &amp;lt;strong&amp;gt;Variational Autoencoders (VAEs)&amp;lt;/strong&amp;gt; with the discriminative power of &amp;lt;strong&amp;gt;adversarial learning&amp;lt;/strong&amp;gt;, improving on previous &amp;lt;strong&amp;gt;unsupervised domain adaptation (UDA)&amp;lt;/strong&amp;gt; techniques. This pill summarizes the key contributions of VIADA and discusses its relevance for addressing model misspecification in Simulation-Based Inference (SBI).</description></item><item><title>Robust Simulation-Based Inference Under Missing Data via Neural Processes</title><link>https://transferlab.ai/pills/2025/robust-simulation-based-inference-under-missing-data/</link><pubDate>Thu, 20 Nov 2025 00:00:00 +0000</pubDate><author>Jan Teusen</author><guid>https://transferlab.ai/pills/2025/robust-simulation-based-inference-under-missing-data/</guid><image>https://transferlab.ai/pills/2025/robust-simulation-based-inference-under-missing-data/fig1_bias_hua4a6aaaf3f23f91b697430ff1090d440_127262_512x0_resize_box_3.png</image><description>This pill presents a recent paper by Verma et al. &amp;lt;span class=&amp;#34;citation&amp;#34;&amp;gt;[&amp;lt;a tabindex=&amp;#34;0&amp;#34; class=&amp;#34;cite-label&amp;#34; data-bs-toggle=&amp;#34;popover&amp;#34; data-bs-trigger=&amp;#34;focus&amp;#34; data-bs-html=&amp;#34;true&amp;#34; data-bs-placement=&amp;#34;top&amp;#34; data-bs-content=&amp;#34;&amp;amp;lt;div class=&amp;amp;#34;citation-block&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;a href=&amp;amp;#34;/refs/verma_robust_2025&amp;amp;#34; class=&amp;amp;#34;citation-link&amp;amp;#34;&amp;amp;gt;&amp;amp;lt;span class=&amp;amp;#34;citation-title&amp;amp;#34;&amp;amp;gt;Robust Simulation-Based Inference under Missing Data via Neural Processes&amp;amp;lt;/span&amp;amp;gt;,
&amp;amp;lt;span class=&amp;amp;#34;citation-authors&amp;amp;#34;&amp;amp;gt;Yogesh Verma, Ayush Bharti, Vikas Garg.
&amp;amp;lt;/span&amp;amp;gt;&amp;amp;lt;span class=&amp;amp;#34;citation-date&amp;amp;#34;&amp;amp;gt;(2025)&amp;amp;lt;/span&amp;amp;gt;
&amp;amp;lt;/a&amp;amp;gt;&amp;amp;lt;ul class=&amp;amp;#34;citation-links&amp;amp;#34;&amp;amp;gt;&amp;amp;lt;li class=&amp;amp;#34;badge bg-light rounded-pill py-1&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;a href=&amp;amp;#34;http://arxiv.org/abs/2503.01287&amp;amp;#34; target=&amp;amp;#34;_blank&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;i class=&amp;amp;#34;icon-file-text&amp;amp;#34; title=&amp;amp;#34;Publication&amp;amp;#34;&amp;amp;gt;&amp;amp;lt;/i&amp;amp;gt;
&amp;amp;lt;span class=&amp;amp;#34;citation-link-text&amp;amp;#34;&amp;amp;gt;Publication&amp;amp;lt;/span&amp;amp;gt;
&amp;amp;lt;/a&amp;amp;gt;
&amp;amp;lt;/li&amp;amp;gt;&amp;amp;lt;li class=&amp;amp;#34;badge bg-light rounded-pill py-1&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;a href=&amp;amp;#34;https://github.com/Aalto-QuML/RISE&amp;amp;#34; target=&amp;amp;#34;_blank&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;i class=&amp;amp;#34;icon-code-fork&amp;amp;#34; title=&amp;amp;#34;Code&amp;amp;#34;&amp;amp;gt;&amp;amp;lt;/i&amp;amp;gt;
&amp;amp;lt;span class=&amp;amp;#34;citation-link-text&amp;amp;#34;&amp;amp;gt;Code&amp;amp;lt;/span&amp;amp;gt;
&amp;amp;lt;/a&amp;amp;gt;
&amp;amp;lt;/li&amp;amp;gt;&amp;amp;lt;/ul&amp;amp;gt;&amp;amp;lt;/div&amp;amp;gt;&amp;#34;&amp;gt;Ver25R&amp;lt;/a&amp;gt;]&amp;lt;/span&amp;gt;, introducing RISE, a novel method for handling missing data in simulation-based inference (SBI). Missing data is common in real-world applications and can severely bias SBI results. However, it has received relatively little attention in the SBI literature, with only a few papers addressing it directly. RISE uses neural processes to explicitly model common patterns of missingness in the data and combines this with neural posterior estimation (NPE) to provide a robust and efficient (amortized) solution. We highlight the problem, introduce RISE&amp;amp;rsquo;s core concepts, and showcase its advantages, directing readers to the original paper for a closer look.</description></item><item><title>ChatGPT Power User Training</title><link>https://transferlab.ai/trainings/chatgpt/</link><pubDate>Tue, 23 Sep 2025 00:00:00 +0000</pubDate><author>Janoś Gabler</author><author>Tim Mensinger</author><guid>https://transferlab.ai/trainings/chatgpt/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>At first glance, ChatGPT looks deceptively simple: a chat box where you type a question and get an answer. But this simplicity hides a powerful system that can transform how professionals across all roles work. 
Whether you are a data scientist, project manager, or marketing specialist, this training equips you with the skills and understanding you need to become a ChatGPT power user.</description></item><item><title>Practical Introduction to Agentic AI</title><link>https://transferlab.ai/trainings/practical-agentic-ai/</link><pubDate>Wed, 18 Jun 2025 00:00:00 +0000</pubDate><author>Maternus Herold</author><guid>https://transferlab.ai/trainings/practical-agentic-ai/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Master foundational architectural patterns, tool integration, and hands-on implementation with popular libraries and practical considerations for agentic AI development.</description></item><item><title>Towards a statistical theory of data selection under weak supervision</title><link>https://transferlab.ai/seminar/2024/towards-a-statistical-theory-of-data-selection-under-weak-supervision/</link><pubDate>Wed, 30 Oct 2024 19:00:00 +0300</pubDate><author>Pulkit Tandon</author><guid>https://transferlab.ai/seminar/2024/towards-a-statistical-theory-of-data-selection-under-weak-supervision/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Pulkit Tandon, research engineer at Granica, will present his work on data selection, showing how using surrogate models to select subsamples of a data set for labeling can improve training efficiency and performance.</description></item><item><title>Introduction to Reduced Order Modeling</title><link>https://transferlab.ai/seminar/2024/introduction-to-reduced-order-modeling/</link><pubDate>Thu, 17 Oct 2024 16:00:00 +0200</pubDate><author>Sridhar Chellappa</author><guid>https://transferlab.ai/seminar/2024/introduction-to-reduced-order-modeling/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Sridhar Chellappa will introduce the concept of reduced order modeling (ROM), a technique used in the field of simulation and AI to reduce the 
complexity of mathematical models. The seminar will cover the basics of ROM, its applications, and a lead-up to more ML-flavoured approaches.</description></item><item><title>Recent Advancements in Tractable Probabilistic Inference</title><link>https://transferlab.ai/seminar/2024/recent-advancements-in-tractable-probabilistic-inference/</link><pubDate>Thu, 26 Sep 2024 16:00:00 +0200</pubDate><author>Antonio Vergari</author><guid>https://transferlab.ai/seminar/2024/recent-advancements-in-tractable-probabilistic-inference/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Antonio Vergari will give an overview of recent advancements in tractable probabilistic inference.</description></item><item><title>Generalized Stability Guaranteed Quadratic Embeddings for Nonlinear Dynamical Systems</title><link>https://transferlab.ai/seminar/2024/generalized-stability-guaranteed-quadratic-embeddings-for-nonlinear-dynamical-systems/</link><pubDate>Thu, 12 Sep 2024 10:00:00 +0200</pubDate><author>Pawan Goyal</author><guid>https://transferlab.ai/seminar/2024/generalized-stability-guaranteed-quadratic-embeddings-for-nonlinear-dynamical-systems/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Pawan Goyal, Senior AI Engineer at appliedAI, will present recent work in physics-enhanced machine learning on generalized quadratic embeddings for nonlinear dynamics.</description></item><item><title>optimagic: unifying the numerical optimization ecosystem in Python</title><link>https://transferlab.ai/software/optimagic/</link><pubDate>Mon, 09 Sep 2024 00:00:00 +0000</pubDate><author>Janoś Gabler</author><author>Tim Mensinger</author><guid>https://transferlab.ai/software/optimagic/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>&amp;lt;em&amp;gt;optimagic&amp;lt;/em&amp;gt; is a Python package for numerical optimization. 
It provides a unified interface to optimizers from SciPy, NLopt, and other packages.
optimagic&amp;amp;rsquo;s &amp;lt;code&amp;gt;minimize&amp;lt;/code&amp;gt; function works just like SciPy&amp;amp;rsquo;s, so you don&amp;amp;rsquo;t have to adjust your code. You simply get more optimizers for free. On top, you get diagnostic tools, parallel numerical derivatives, and more.</description></item><item><title>Stochastic Optimal Control Matching</title><link>https://transferlab.ai/seminar/2024/stochastic-optimal-control-matching/</link><pubDate>Thu, 05 Sep 2024 16:00:00 +0200</pubDate><author>Carles Domingo-Enrich</author><guid>https://transferlab.ai/seminar/2024/stochastic-optimal-control-matching/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Carles Domingo-Enrich will present his work on Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control that stems from the same philosophy as the conditional score matching loss for diffusion models.</description></item><item><title>From Theory to Practice: Neural Operators Transforming Acoustics</title><link>https://transferlab.ai/seminar/2024/neural-operators-transforming-acoustics/</link><pubDate>Thu, 18 Jul 2024 10:00:00 +0200</pubDate><author>Jakob Wagner</author><guid>https://transferlab.ai/seminar/2024/neural-operators-transforming-acoustics/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Jakob Wagner, Junior AI Researcher at the appliedAI Institute for Europe, will talk about the topic of his Master&amp;amp;rsquo;s thesis, conducted in collaboration with TUM, on applying neural operators to real-world acoustic problems.</description></item><item><title>CLP-Transfer: Cross-Lingual and Progressive Transfer Learning</title><link>https://transferlab.ai/pills/2024/clp-transfer/</link><pubDate>Mon, 15 Jul 2024 00:00:00 +0000</pubDate><author>Anes 
Benmerzoug</author><guid>https://transferlab.ai/pills/2024/clp-transfer/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>The CLP-Transfer method introduces a novel approach for cross-lingual language transfer by leveraging token overlap and a small pre-trained model with the desired tokenizer, simplifying the transfer process without the need for fastText embeddings or bilingual dictionaries. Despite its practical advantages, the method&amp;amp;rsquo;s performance on downstream tasks is limited, highlighting areas for future research and evaluation.</description></item><item><title>Mesh-Independent Operator Learning for Partial Differential Equations</title><link>https://transferlab.ai/pills/2024/mesh-independent-operator-learning/</link><pubDate>Thu, 11 Jul 2024 00:00:00 +0000</pubDate><author>Samuel Burbulla</author><guid>https://transferlab.ai/pills/2024/mesh-independent-operator-learning/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>The mesh-independent neural operator (MINO) is a fully attentional architecture for operator learning that allows representing the discretized system as set-valued data without a prior structure.</description></item><item><title>Symmetry Teleportation for Accelerated Optimization</title><link>https://transferlab.ai/pills/2024/symmetry-teleportation/</link><pubDate>Tue, 09 Jul 2024 00:00:00 +0000</pubDate><author>Faried Abu Zaid</author><author>Kristof Schröder</author><guid>https://transferlab.ai/pills/2024/symmetry-teleportation/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>A novel approach, symmetry teleportation, enhances convergence speed in gradient-based optimization by allowing parameters to traverse large distances on the loss level set, exploiting symmetries in the loss landscape.</description></item><item><title>All-in-One Simulation-Based 
Inference</title><link>https://transferlab.ai/pills/2024/all-in-one-simulation-based-inference/</link><pubDate>Fri, 28 Jun 2024 00:00:00 +0000</pubDate><author>Jan Teusen</author><guid>https://transferlab.ai/pills/2024/all-in-one-simulation-based-inference/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>This paper presents a new SBI algorithm that utilizes transformer architectures and score-based diffusion models. Unlike traditional approaches, it can estimate the posterior, the likelihood, and other arbitrary conditionals once trained. It also handles missing data, leverages known dependencies in the simulator, and performs well on common benchmarks.</description></item><item><title>Unravelling complexity in neuronal time-series data with BunDLe-Net</title><link>https://transferlab.ai/seminar/2024/unravelling-complexity-in-neuronal-time-series-data-with-bunlde-net/</link><pubDate>Thu, 27 Jun 2024 16:00:00 +0100</pubDate><author>Akshey Kumar</author><guid>https://transferlab.ai/seminar/2024/unravelling-complexity-in-neuronal-time-series-data-with-bunlde-net/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Akshey Kumar, postdoctoral member of the Neuroinformatics research group at TU Vienna, will talk about BunDLe-Net, a manifold-learning algorithm that effectively preserves relevant information while abstracting away details that are irrelevant to the dynamics of a specific target variable.</description></item><item><title>Position: Leverage Foundational Models for Black-Box Optimization</title><link>https://transferlab.ai/pills/2024/position-leverage-foundational-models-for-black-box-optimization/</link><pubDate>Wed, 26 Jun 2024 00:00:00 +0000</pubDate><author>Jan Teusen</author><guid>https://transferlab.ai/pills/2024/position-leverage-foundational-models-for-black-box-optimization/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>This paper explores the use 
of Large Language Models (LLMs) to address challenges in Black Box Optimization (BBO), particularly multi-modality and task generalization. The authors propose framing BBO around sequence-based foundation models, leveraging LLMs&amp;amp;rsquo; capabilities to retrieve information from various modalities, resulting in superior optimization strategies.</description></item><item><title>Inducing Point Operator Transformer: A Flexible and Scalable Architecture for Solving PDEs</title><link>https://transferlab.ai/seminar/2024/inducing-point-operator-transformer/</link><pubDate>Thu, 20 Jun 2024 10:00:00 +0200</pubDate><author>Seungjun Lee</author><guid>https://transferlab.ai/seminar/2024/inducing-point-operator-transformer/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Seungjun Lee will talk about an attention-based neural operator architecture called an Inducing Point Operator Transformer (IPOT), which addresses the challenges of flexibility in handling irregular and arbitrary input and output formats and scalability to large discretizations when solving partial differential equations (PDEs).</description></item><item><title>WECHSEL: Cross-Lingual Transfer</title><link>https://transferlab.ai/pills/2024/wechsel-transfer/</link><pubDate>Mon, 17 Jun 2024 00:00:00 +0000</pubDate><author>Anes Benmerzoug</author><guid>https://transferlab.ai/pills/2024/wechsel-transfer/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Language transfer enables the use of language models trained in one or more languages to initialize a new language model in another language. 
WECHSEL is a cross-lingual language transfer method that efficiently initializes the embedding parameters of a language model in a target language using the embedding parameters from an existing model in a source language, facilitating more efficient training in the new language.</description></item><item><title>Deep neural operators as accurate surrogates for shape optimization</title><link>https://transferlab.ai/pills/2024/deep-neural-operators-as-accurate-surrogates-for-shape-optimization/</link><pubDate>Mon, 10 Jun 2024 00:00:00 +0000</pubDate><author>Samuel Burbulla</author><guid>https://transferlab.ai/pills/2024/deep-neural-operators-as-accurate-surrogates-for-shape-optimization/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Deep neural operators, such as DeepONet, have changed the paradigm in high-dimensional nonlinear regression, promising significant generalization and speed-up in computational engineering applications. In a recent paper, the authors investigate the use of DeepONet to infer flow fields around unseen airfoils with the aim of shape constrained optimization, an important design problem in aerodynamics that typically taxes computational resources heavily.</description></item><item><title>Dense Rewards and Continual RL for Task-Oriented Dialogue Policies</title><link>https://transferlab.ai/seminar/2024/dense-rewards-and-continual-rl-for-task-oriented-dialogue-policies/</link><pubDate>Thu, 06 Jun 2024 16:00:00 +0200</pubDate><author>Christian Geishauser</author><guid>https://transferlab.ai/seminar/2024/dense-rewards-and-continual-rl-for-task-oriented-dialogue-policies/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Christian will present a proposal for dense rewards in task-oriented dialogue systems to enhance sample efficiency and discuss continual reinforcement learning of dialogue policies. 
Key topics include an architecture for continual learning, an extended learning environment, lifetime return optimization, and meta-reinforcement learning for hyperparameter adaptation.</description></item><item><title>Interpreting CLIP's Image Representation via Text-based Decomposition</title><link>https://transferlab.ai/pills/2024/clip-representation/</link><pubDate>Thu, 06 Jun 2024 00:00:00 +0000</pubDate><author>Fabio Peruzzo</author><guid>https://transferlab.ai/pills/2024/clip-representation/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Interpreting the output of neural networks is often challenging because it entails putting into words patterns that may not be easily expressible in human language. This often results in forced explanations that do not reflect the true decision-making process of the model. However, for CLIP-ViT models there is a natural way to map image features of each component of the Transformer network to text-based concepts.</description></item><item><title>Scientific Inference With Interpretable Machine Learning</title><link>https://transferlab.ai/seminar/2024/scientific-inference-with-interpretable-machine-learning/</link><pubDate>Thu, 23 May 2024 16:00:00 +0200</pubDate><author>Timo Freiesleben</author><guid>https://transferlab.ai/seminar/2024/scientific-inference-with-interpretable-machine-learning/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Timo will introduce a framework for designing interpretable machine learning methods for science, termed &amp;amp;ldquo;property descriptors&amp;amp;rdquo;.</description></item><item><title>Amortized Bayesian Decision-Making for Simulation-Based Models</title><link>https://transferlab.ai/pills/2024/amortized-bayesian-decision-making-for-sbi/</link><pubDate>Mon, 06 May 2024 00:00:00 +0000</pubDate><author>Maternus 
Herold</author><guid>https://transferlab.ai/pills/2024/amortized-bayesian-decision-making-for-sbi/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Bayesian inference is a popular tool for parameter estimation. However, the posterior distribution might not be sufficient for decision-making. Amortized Bayesian Decision-Making is a method that learns the cost of data and action pairs to make Bayes-optimal decisions.</description></item><item><title>LAVA: Data Valuation Without Pre-Specified Learning Algorithms</title><link>https://transferlab.ai/seminar/2024/lava-data-valuation-without-pre-specified-learning-algorithms/</link><pubDate>Thu, 02 May 2024 16:00:00 +0200</pubDate><author>Feiyang Kang</author><guid>https://transferlab.ai/seminar/2024/lava-data-valuation-without-pre-specified-learning-algorithms/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Today&amp;amp;rsquo;s talk is about LAVA, an Optimal-Transport-based approach to data valuation that dispenses with training a model to compute values.</description></item><item><title>RAGAR, Your Falsehood RADAR: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models</title><link>https://transferlab.ai/seminar/2024/ragar-your-falsehood-radar/</link><pubDate>Thu, 18 Apr 2024 16:00:00 +0100</pubDate><author>Mohammed Abdul Khaliq</author><guid>https://transferlab.ai/seminar/2024/ragar-your-falsehood-radar/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Mohammed Abdul Khaliq, MSc. 
Computational Linguistics Program at the Institute for Natural Language Processing of the University of Stuttgart, will give a talk on the topic of his Master&amp;amp;rsquo;s thesis: &amp;amp;ldquo;RAGAR, Your Falsehood RADAR: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models&amp;amp;rdquo;.</description></item><item><title>sbi: the simulation-based inference toolkit</title><link>https://transferlab.ai/software/sbi/</link><pubDate>Mon, 15 Apr 2024 00:00:00 +0000</pubDate><author>Jan Teusen</author><author>Maternus Herold</author><author>Faried Abu Zaid</author><guid>https://transferlab.ai/software/sbi/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>sbi is a Python package for Bayesian parameter inference on simulators. It implements state-of-the-art algorithms and comes with comprehensive documentation and tutorials, making it suitable for SBI practitioners. Additionally, it offers low-level modularity for researchers who wish to explore more advanced aspects of SBI.</description></item><item><title>Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation</title><link>https://transferlab.ai/pills/2024/sourcerer-maximum-entropy-distribution-estimation/</link><pubDate>Fri, 12 Apr 2024 00:00:00 +0000</pubDate><author>Maternus Herold</author><guid>https://transferlab.ai/pills/2024/sourcerer-maximum-entropy-distribution-estimation/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Identifying the source distribution behind observed data is an ill-posed problem. 
&amp;lt;em&amp;gt;Sourcerer&amp;lt;/em&amp;gt; &amp;lt;span class=&amp;#34;citation&amp;#34;&amp;gt;[&amp;lt;a tabindex=&amp;#34;0&amp;#34; class=&amp;#34;cite-label&amp;#34; data-bs-toggle=&amp;#34;popover&amp;#34; data-bs-trigger=&amp;#34;focus&amp;#34; data-bs-html=&amp;#34;true&amp;#34; data-bs-placement=&amp;#34;top&amp;#34; data-bs-content=&amp;#34;&amp;amp;lt;div class=&amp;amp;#34;citation-block&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;a href=&amp;amp;#34;/refs/vetter_sourcerer_2024&amp;amp;#34; class=&amp;amp;#34;citation-link&amp;amp;#34;&amp;amp;gt;&amp;amp;lt;span class=&amp;amp;#34;citation-title&amp;amp;#34;&amp;amp;gt;Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation&amp;amp;lt;/span&amp;amp;gt;,
&amp;amp;lt;span class=&amp;amp;#34;citation-authors&amp;amp;#34;&amp;amp;gt;Julius Vetter, Guy Moss, Cornelius Schröder, Richard Gao, Jakob H. Macke.
&amp;amp;lt;/span&amp;amp;gt;&amp;amp;lt;span class=&amp;amp;#34;citation-publication&amp;amp;#34;&amp;amp;gt;arXiv.org&amp;amp;lt;/span&amp;amp;gt;&amp;amp;lt;span class=&amp;amp;#34;citation-date&amp;amp;#34;&amp;amp;gt;(2024)&amp;amp;lt;/span&amp;amp;gt;
&amp;amp;lt;/a&amp;amp;gt;&amp;amp;lt;ul class=&amp;amp;#34;citation-links&amp;amp;#34;&amp;amp;gt;&amp;amp;lt;li class=&amp;amp;#34;badge bg-light rounded-pill py-1&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;a href=&amp;amp;#34;https://arxiv.org/abs/2402.07808v1&amp;amp;#34; target=&amp;amp;#34;_blank&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;i class=&amp;amp;#34;icon-file-text&amp;amp;#34; title=&amp;amp;#34;Publication&amp;amp;#34;&amp;amp;gt;&amp;amp;lt;/i&amp;amp;gt;
&amp;amp;lt;span class=&amp;amp;#34;citation-link-text&amp;amp;#34;&amp;amp;gt;Publication&amp;amp;lt;/span&amp;amp;gt;
&amp;amp;lt;/a&amp;amp;gt;
&amp;amp;lt;/li&amp;amp;gt;&amp;amp;lt;li class=&amp;amp;#34;badge bg-light rounded-pill py-1&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;a href=&amp;amp;#34;https://github.com/mackelab/sourcerer&amp;amp;#34; target=&amp;amp;#34;_blank&amp;amp;#34;&amp;amp;gt;
&amp;amp;lt;i class=&amp;amp;#34;icon-code-fork&amp;amp;#34; title=&amp;amp;#34;Code&amp;amp;#34;&amp;amp;gt;&amp;amp;lt;/i&amp;amp;gt;
&amp;amp;lt;span class=&amp;amp;#34;citation-link-text&amp;amp;#34;&amp;amp;gt;Code&amp;amp;lt;/span&amp;amp;gt;
&amp;amp;lt;/a&amp;amp;gt;
&amp;amp;lt;/li&amp;amp;gt;&amp;amp;lt;/ul&amp;amp;gt;&amp;amp;lt;/div&amp;amp;gt;&amp;#34;&amp;gt;Vet24S&amp;lt;/a&amp;gt;]&amp;lt;/span&amp;gt; introduces a novel approach based on maximum entropy to preserve the maximum level of uncertainty in the source distribution, while yielding a unique solution.</description></item><item><title>Second-Order Information and Applications</title><link>https://transferlab.ai/seminar/2024/second-order-information-and-applications/</link><pubDate>Thu, 11 Apr 2024 16:00:00 +0100</pubDate><author>Kristof Schröder</author><guid>https://transferlab.ai/seminar/2024/second-order-information-and-applications/</guid><image>https://transferlab.ai/android-chrome-512x512.png</image><description>Kristof Schröder will talk about second-order information and its applications in machine learning.</description></item></channel></rss>