Curious to learn more about me? Your audience will be too! That is why I have prepared this speaker media kit with all the information you might need to advertise that I am speaking at your event.

Biography

This biography was written such that it can be easily adapted to the length appropriate for your event. Feel free to use only select paragraphs as you see fit.

Lukas Vermeer is an experienced experimentation practitioner. His specialty is designing and building the infrastructure and processes required to start and scale A/B testing to drive business growth.

Lukas combines industry experience in online experimentation and data science with an academic background in computing science and machine learning. For eight years, he was responsible for A/B testing at Booking.com, a widely acknowledged leader in online experimentation. Lukas grew the Booking.com in-house experimentation team from four to thirty people and became the company's first Director of Experimentation.

Lukas plays an essential role in making A/B testing an integrated part of business processes. He is an advocate for the accessibility of experimentation, helping people from any background (product owners, designers, developers, writers) to make the right decisions. He has accepted two Experimentation Culture Awards: once on behalf of Booking.com and once on behalf of the Vista Experimentation Hub. His impact was recognised in the Harvard Business Review story “Building a Culture of Experimentation” (March-April 2020 issue).

Lukas has co-authored multiple influential academic papers on the topic of online experimentation. He has spoken at more than thirty conferences, including Growth Marketing Summit, CXL Live, Google Catalyst Conference, SIGIR, and KDD.

Currently, Lukas is employed as Director of Experimentation at Vistaprint. He is also available as a freelance speaker and consultant to help businesses grow their experimentation culture.

Headshots

A headshot of Lukas Vermeer (2017). A headshot of Lukas Vermeer (2022).
Some rights reserved. Limited use only.

Action shots

Lukas speaking in front of a crowd of thousands of people at Marketing Festival in Ostrava, 2016. Lukas speaking at Digital Growth Unleashed in London, 2017. Lukas speaking while standing in the middle of a seated crowd at a Booking Data Science Meetup in Amsterdam, 2017. Lukas at Predictive Analytics World, Berlin, 2017. Lukas on stage presenting at Growth Marketing Summit in Frankfurt, 2019.
Some rights reserved. Limited use only.

Videos of previous talks

Building A Culture Of Experimentation

Building A Culture Of Experimentation at NIO Summit (2023).

In this talk, you will learn why you should care about business experimentation, and what you can do to foster experimentation inside your organisation. Using practical examples and anecdotes from industry experience, as well as original academic work, we will dive into what makes a culture of experimentation work. We will also highlight some key levers that will help you start fostering your own experimentation culture.

Audience rating 4.82 out of 5.

Moving fast, breaking things, and fixing them as quickly as possible

Moving fast, breaking things, and fixing them as quickly as possible at Conversion Hotel (2021).

Audience rating 4.21 out of 5.

One neat trick to run better experiments

One neat trick to run better experiments at Conversion Hotel (2019).

There are many pitfalls to avoid when running online experiments. In this presentation, Lukas will teach you one neat trick to run better experiments: the sample ratio mismatch check. Using this technique, you will make fewer mistakes and better decisions while running experiments.

Audience rating 4.68 out of 5 (highest rating of all speakers).

Democratising Online Controlled Experiments at Booking.com

Democratising Online Controlled Experiments at Booking.com at Mind the Product (2018).

At Booking.com we have been conducting evidence-based product development using online experiments for more than ten years. Our methods and infrastructure were designed from their inception to reflect Booking.com's culture, that is, with democratization and decentralization of experimentation and decision making in mind.

In this talk, we explain how our approach has allowed an organization as large as Booking.com to truly and successfully democratize experimentation.

No audience ratings available from this conference.

Data Science vs. Data Alchemy

Data Science vs. Data Alchemy at GOTO Conference (2016).

The “Big Data” and “Data Science” rhetoric of recent years seems to focus mostly on collecting, storing and analysing existing data. Data which many seem to think they have “too much of” already. However, the greatest discoveries in both science and business rarely come from analysing things that are already there. True innovation starts with asking Big Questions. Only then does it become apparent which data is needed to find the answers we seek.

No audience ratings available from this conference.

Talk synopses

Building A Culture Of Experimentation.

Most suitable for companies that are just getting started on their experimentation journey, and which want to increase broad awareness and understanding of the power of experimentation.

Key takeaways

  • Why companies should care about experimentation. Illustrated using practical examples from industry experience.
  • What a culture of experimentation is. Supported by engaging anecdotes from my own experience as well as academic work.
  • How companies can start fostering their own experimentation culture. Using a practical flywheel model described in an academic paper published at the international SEAA conference.

One Neat Trick To Run Better Experiments.

Most suitable for companies that have started running experiments in one or two teams, and which are looking to increase the reliability of their results.

Key takeaways

  • Understanding sample ratio mismatch (SRM). Learn what SRM is, why it matters in experimentation, and how it can impact the reliability and validity of A/B testing results.
  • Detecting and addressing SRM. Gain insights into methods for detecting SRM and the tools available to manage and correct for it in the experimental design process.
  • Practical implications of SRM. Explore real-world scenarios where SRM has affected experimental outcomes and discuss strategies to mitigate such issues to improve the accuracy of experimental conclusions.

Summary

There are many pitfalls to avoid when running online experiments. In this presentation, Lukas will teach you one neat trick to run better experiments: the sample ratio mismatch check. Using this technique, you will make fewer mistakes and better decisions while running experiments.
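As a concrete illustration (a minimal sketch of my own, not code from the talk), a sample ratio mismatch check can be implemented as a chi-squared goodness-of-fit test that compares observed assignment counts against the configured traffic split, here assumed to be 50/50:

    # Minimal SRM check sketch; the 50/50 split and thresholds are
    # assumptions for illustration, not details from the talk.
    from scipy.stats import chisquare

    def srm_check(control: int, treatment: int,
                  expected_ratio: float = 0.5, alpha: float = 0.001) -> bool:
        """Return True if a sample ratio mismatch is detected."""
        total = control + treatment
        expected = [total * expected_ratio, total * (1 - expected_ratio)]
        # Goodness-of-fit test: observed counts vs. expected counts.
        _, p_value = chisquare([control, treatment], f_exp=expected)
        # A tiny p-value means the observed split is very unlikely under
        # the configured ratio, so the assignment is probably broken.
        return p_value < alpha

    # Example: 50,000 vs. 51,200 users is suspicious for an intended 50/50 split.
    print(srm_check(50_000, 51_200))  # True: investigate before trusting results

When this check fires, the experiment's results should not be trusted until the cause of the skewed assignment has been found and fixed.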

How To Run Many Tests At Once: Interaction Avoidance & Detection.

Most suitable for companies that have started experimentation in multiple teams already, and which are running into scaling and concurrency problems.

Key takeaways

  • The importance of orthogonality. How to design experiments so they can run simultaneously without affecting each other’s outcomes.
  • Utilising interaction avoidance tools and techniques. How to use advanced tools and simulations to identify and manage interactions between concurrent tests.
  • Applying practical detection models. Using regression analysis to detect and visualise interactions between experiments.

Summary

Experiments allow us to test how changes to our products affect the behavior of our users. We make many such changes in many experiments running at the same time. As we scale up our experimentation volume at Vista, the risk of interactions between experiments increases.

We say an interaction has occurred when the combined effect of two or more experiments on a metric differs from a linear combination of the effects of each experiment in isolation. For example, two individual changes each help customers find what they need more easily, while the two combined have the opposite effect.
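To make this definition concrete, here is a small sketch (my own illustration, with invented numbers, not tooling from the talk) of how the interaction term can be estimated from measured lifts:

    # Interaction = combined effect minus the linear combination of the
    # individual effects; zero means the experiments do not interact.
    def interaction_effect(lift_a: float, lift_b: float, lift_ab: float) -> float:
        return lift_ab - (lift_a + lift_b)

    # Invented example: A alone lifts the metric +2%, B alone +1%,
    # but both together move it -1%, as in the scenario described above.
    print(interaction_effect(0.02, 0.01, -0.01))  # -0.04: strong negative interaction

In practice the individual and combined lifts are themselves noisy estimates, which is why detection relies on statistical models rather than simple arithmetic.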

In some cases, this interaction might result from a functional conflict between two changes; they are functionally incompatible, causing a difference in effect. In other cases, the changes might be functionally compatible but still interact to change our measurement or user behavior more subtly.

Experience from other organizations suggests that these kinds of interactions tend to be rare in practice. However, since the consequences for the user experience and our learning can be dire, we should still consider their possibility and ensure we take precautions to avoid or detect them.

In this talk, we will define two kinds of interaction effects and their potential consequences, discuss possible strategies for avoiding these interactions, and explain how we can detect them. We will also share some of the tools and processes we have built at Vista to address this issue.

A/B Testing 101: Statistical Foundations For Causal Inference.

Most suitable for teams or individuals who have started running experiments, do not have a mathematical background, but are looking for an entertaining introduction to the statistical foundations.

Key takeaways

  • Causal inference foundations. Understand the statistical foundations for determining causal relationships in experiments, emphasizing the distinction between correlation and causation.
  • Importance of randomization. Learn how randomization supports causal inference by minimizing bias and providing a fair comparison between treatment effects.
  • Dealing with experimental errors. Explore how to handle potential errors in experimental results, including Type I and Type II errors, and the concept of statistical significance in hypothesis testing.

Summary

This presentation on A/B testing statistics explains the importance of using proper statistical methods to establish causality, not just correlation, in experimental data. It emphasizes the critical role of randomization in achieving reliable results by minimizing selection bias and ensuring that the treatment and control groups are comparable. Additionally, the talk covers common errors in experimentation, such as Type I (false positive) and Type II (false negative) errors, and discusses the role of statistical power and significance levels in determining the robustness of experimental findings.
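As a small taste of these foundations (a sketch of my own, with invented numbers, not material from the presentation), a two-sided two-proportion z-test is one standard way to decide whether an observed difference in conversion rates is statistically significant:

    # Two-sided two-proportion z-test for an A/B test; all numbers are
    # invented for illustration and not taken from the presentation.
    from math import sqrt
    from statistics import NormalDist

    def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        """Two-sided p-value for the difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # 5.0% vs. 5.7% conversion with 10,000 users per arm.
    p = ab_test_p_value(500, 10_000, 570, 10_000)
    print(p < 0.05)  # True: rejecting at alpha = 0.05 caps the Type I error rate at 5%

Choosing the significance level alpha fixes the Type I error rate, while the Type II error rate is controlled through statistical power, that is, by collecting a large enough sample.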

List of past engagements

This is a selection of past public engagements. Internal company events for clients are not included.

  • NIO Summit, Dallas (September 20th, 2023)
  • Conversion Hotel, Texel (November 20th, 2021)
  • UXRConf Anywhere 2021, Online Event (February 24th, 2021)
  • Nonprofit Innovation & Optimization Summit, Online Event (October 1st, 2020)
  • Experimentation Culture Awards, Online Event (September 24th, 2020)
  • Experimentation Works Panel, Online Event (September 10th, 2020)
  • Conversion Hotel, Texel (November 23rd, 2019)
  • Growth Marketing Summit, Frankfurt (September 3rd, 2019)
  • International Conference on Computational Social Science, Amsterdam (July 19th, 2019)
  • CXL Live, Austin (March 28th, 2019)
  • State of Product Management, Amsterdam (November 16th, 2018)
  • BNAIC/BENELEARN, Den Bosch (November 9th, 2018)
  • Digital Elite Camp, Tallinn (June 15th, 2018)
  • Savage Marketing, Amsterdam (June 13th, 2018)
  • Mind the Product Engage, Hamburg (April 20th, 2018)
  • KDD, Halifax (August 13th, 2017)
  • SIGIR, Tokyo (August 7th, 2017)
  • Marketing Festival, Ostrava (October 21st, 2016)
  • GOTO Conference, Amsterdam (June 14th, 2016)
  • Skyscanner Technical Thought Leader Series, London (May 23rd, 2016)
  • Google Catalyst Conference, Dublin (February 12th, 2016)
  • Leadership Challenges with Big Data Program, Erasmus University (November 24th, 2015)
  • Challenges in Personalised Communication Symposium, Radboud University (October 2nd, 2015)
  • ProductTank AMS Meetup (June 22nd, 2015)
  • Berlin Machine Learning Group Meetup (May 4th, 2015)
  • ConversionXL Live Conference, Austin (March 12th, 2015)
  • Talk@Google: Always Be Testing (November 18th, 2014)
  • Conversion Conference London (October 30th, 2014)
  • Predictive Analytics World London (October 30th, 2014)
  • Project Walhalla Conference, Nijmegen (April 18th, 2014)
  • Emerce Conversion Conference, Amsterdam (April 10th, 2014)
  • Webanalytics Congres, Utrecht (April 3rd, 2014)
  • Predictive Analytics World London (October 23rd, 2013)
  • RecSys:NL Meetup, Amsterdam (August 19th, 2013)
