|Author Name|NARITA Yusuke (Visiting Fellow, RIETI) / OKUMURA Kyohei (Northwestern University) / SHIMIZU Akihiro (Mercari) / YATA Kohei (University of Wisconsin-Madison)|
|Creation Date/No.|October 2022 / 22-E-097|
Off-policy evaluation (OPE) attempts to predict the performance of counterfactual policies using log data collected under a different policy. We extend its applicability by developing an OPE method that covers a class of logging policies with either full or deficient support in contextual-bandit settings. This class includes deterministic bandit algorithms (such as Upper Confidence Bound) as well as deterministic decision-making based on supervised and unsupervised learning. We prove that our method's prediction converges in probability to the true performance of the counterfactual policy as the sample size increases. We validate our method with experiments on partly and entirely deterministic logging policies. Finally, we apply it to evaluate coupon-targeting policies at a major online platform and show how to improve upon the existing policy.
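The paper's estimator is not reproduced here, but the core difficulty it addresses can be seen in the standard inverse propensity weighting (IPW) estimator for OPE. The sketch below (a simplified illustration with a made-up synthetic logging setup, not the authors' method) shows IPW working under a full-support stochastic logging policy, and notes why a deterministic logging policy assigns zero propensity to unplayed actions, leaving the IPW weight undefined (deficient support):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_actions = 10_000, 3

# Synthetic log from a stochastic (full-support) logging policy:
# actions are chosen uniformly at random, so every propensity is 1/3.
contexts = rng.integers(0, 2, size=n)            # binary context
logging_probs = np.full((n, n_actions), 1 / n_actions)
actions = rng.integers(0, n_actions, size=n)     # uniformly logged actions
rewards = (actions == contexts).astype(float)    # reward 1 iff action matches context

# Counterfactual (target) policy: deterministic, always plays action = context,
# so its true value is 1.0 in this toy setup.
target_probs = np.zeros((n, n_actions))
target_probs[np.arange(n), contexts] = 1.0

# Standard IPW estimate: reweight logged rewards by target/logging propensities.
idx = np.arange(n)
weights = target_probs[idx, actions] / logging_probs[idx, actions]
ipw_value = np.mean(weights * rewards)
print(ipw_value)  # close to the true value 1.0

# If the logging policy were deterministic (e.g., UCB's greedy choice),
# unplayed actions would have logging propensity 0, the ratio above would be
# undefined, and plain IPW could not evaluate the counterfactual policy.
```

This is exactly the deficient-support case: the methods the abstract mentions extend OPE beyond settings where every action has positive logging probability.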