An Overview of Contextual Bandits (towardsdatascience.com, February 2, 2024). Tags: a-b-testing, contextual-bandit, editors-pick, experimentation, multi-armed-bandit
Dynamic Pricing with Multi-Armed Bandit: Learning by Doing! (towardsdatascience.com, August 16, 2023). Tags: artificial-intelligence, dynamic-pricing, editors-pick, multi-armed-bandit, reinforcement-learning
Beyond the Basics: Reinforcement Learning with Jax — Part II: Developing an exploitative… (towardsdatascience.com, June 2, 2023). Tags: deep-dives, editors-pick, jax, multi-armed-bandit, reinforcement-learning