Evaluating the Robustness of Off-Policy Evaluation

Author(s): SAITO Yuta (Cornell University) / UDAGAWA Takuma (Sony Group Corporation) / KIYOHARA Haruka (Tokyo Institute of Technology) / MOGI Kazuki (Stanford University) / NARITA Yusuke (Visiting Fellow, RIETI) / TATENO Kei (Sony Group Corporation)
Creation Date / No.: June 2023 / 23-E-041

Abstract

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes and expensive, such as precision medicine and recommender systems. Because many OPE estimators have been proposed, and some of them have hyperparameters that must be tuned, practitioners face an emerging challenge in selecting and tuning OPE estimators for their specific applications. Unfortunately, identifying a reliable estimator from the results reported in research papers is often difficult, because the standard experimental procedure evaluates and compares estimators on only a narrow set of hyperparameters and evaluation policies. As a result, it is hard to know which estimator is safe and reliable to use. In this work, we develop Interpretable Evaluation for Offline Evaluation (IEOE), an experimental procedure that evaluates an OPE estimator's robustness to changes in hyperparameters and/or evaluation policies in an interpretable manner. Using the IEOE procedure, we then conduct an extensive evaluation of a wide variety of existing estimators on the Open Bandit Dataset, a large-scale public real-world dataset for OPE. We demonstrate that our procedure can evaluate the estimators' robustness to the choice of hyperparameters, helping us avoid unsafe estimators. Finally, we apply IEOE to real-world e-commerce platform data and demonstrate how to use our protocol in practice.
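To make the evaluation protocol concrete, the following is a minimal, self-contained sketch of the IEOE idea, not the authors' released implementation or the Open Bandit Dataset pipeline. It uses a synthetic context-free bandit log, an illustrative clipped inverse-propensity-weighting (IPW) estimator with a hypothetical `clip` hyperparameter, and randomly drawn evaluation policies; an estimator's robustness is summarized by the distribution of its squared errors across many sampled configurations.

```python
# Illustrative sketch of an IEOE-style robustness evaluation (assumptions:
# synthetic data, clipped IPW as the estimator under study, `clip` as its
# hyperparameter, random evaluation policies drawn from a Dirichlet).
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic logged bandit feedback (context-free for brevity) ---
n_rounds, n_actions = 10_000, 5
behavior_policy = np.full(n_actions, 1.0 / n_actions)       # uniform logging policy
true_reward_mean = rng.uniform(0.1, 0.9, size=n_actions)    # ground-truth mean rewards
actions = rng.choice(n_actions, size=n_rounds, p=behavior_policy)
rewards = rng.binomial(1, true_reward_mean[actions])

def clipped_ipw(actions, rewards, behavior_policy, eval_policy, clip):
    """Clipped IPW estimate of the evaluation policy's value; `clip` is the hyperparameter."""
    w = eval_policy[actions] / behavior_policy[actions]
    return float(np.mean(np.minimum(w, clip) * rewards))

def true_value(eval_policy):
    """Ground-truth policy value, available here because the data are synthetic."""
    return float(eval_policy @ true_reward_mean)

# --- IEOE-style loop: sample configurations and record squared errors ---
squared_errors = []
for _ in range(200):
    eval_policy = rng.dirichlet(np.ones(n_actions))          # random evaluation policy
    clip = rng.choice([1.0, 5.0, 10.0, 100.0, np.inf])       # random hyperparameter draw
    estimate = clipped_ipw(actions, rewards, behavior_policy, eval_policy, clip)
    squared_errors.append((estimate - true_value(eval_policy)) ** 2)

# Summarize robustness via the empirical distribution of squared errors
# (e.g., mean and tail quantiles across configurations).
squared_errors = np.array(squared_errors)
print("mean squared error:", squared_errors.mean())
print("90th percentile   :", np.quantile(squared_errors, 0.9))
```

An estimator whose error distribution stays tight across hyperparameter and policy draws is safer to deploy than one with a low average error but a heavy tail; comparing such distributions across estimators is the interpretable summary that IEOE is built around.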