Bayesian Fictitious Play in Oligopoly: The Case of Risk-Averse Agents

A number of learning models have been suggested to analyze the repeated interaction of boundedly rational agents competing in oligopolistic markets. The agents form a model of the environment that they are competing in, which includes the market demand and price formation process, as well as their expectations of their rivals’ actions. The agents update their model based on the observed output and price realizations and then choose their next period output levels according to an optimization criterion. In previous works, the global dynamics of price movement have been analyzed when risk-neutral agents maximize their expected rewards at each round. However, in many practical settings, agents may be concerned with the risk or uncertainty in their reward stream, in addition to the expected value of the future rewards. Learning in oligopoly models for the case of risk-averse agents has received much less attention. In this paper, we present a novel learning model that extends fictitious play learning to continuous strategy spaces where agents combine their prior beliefs with market price realizations in previous periods to learn the mean and the variance of the aggregate supply function of the rival firms in a Bayesian framework. Next, each firm maximizes a linear combination of the expected value of the profit and a penalty term for the variance of the returns. Specifically, each agent assumes that the aggregate supply of the remaining agents is sampled from a parametric distribution employing a normal-inverse gamma prior. We prove the convergence of the proposed dynamics and present simulation results to compare the proposed learning rule to the traditional best response dynamics.
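The learning rule summarized in the abstract can be illustrated with a short sketch: each firm maintains a conjugate normal-inverse gamma (NIG) posterior over the mean and variance of its rivals' aggregate supply, then chooses the output that maximizes expected profit minus a variance penalty. All parameter values below (demand intercept `a`, slope `b`, unit cost `c`, risk weight `lam`, and the prior hyperparameters) are hypothetical placeholders, not values from the paper; the sketch assumes linear inverse demand P = a − b·(q + Q_rivals) and a plug-in posterior-mean variance estimate, rather than the paper's exact formulation.

```python
import numpy as np

def nig_update(mu0, kappa0, alpha0, beta0, obs):
    """Conjugate normal-inverse gamma update given observed rival
    aggregate supplies `obs`; returns (mu_n, kappa_n, alpha_n, beta_n)."""
    obs = np.asarray(obs, dtype=float)
    n = obs.size
    xbar = obs.mean()
    ss = ((obs - xbar) ** 2).sum()  # sum of squared deviations
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2.0
    beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n)
    return mu_n, kappa_n, alpha_n, beta_n

def risk_averse_best_response(a, b, c, lam, mu, var):
    """Maximize E[profit] - lam * Var[profit] under inverse demand
    P = a - b*(q + Q_rivals), with Q_rivals ~ N(mu, var) and unit cost c.

    E[profit]   = (a - c - b*mu - b*q) * q
    Var[profit] = b**2 * var * q**2
    The first-order condition gives
    q* = (a - c - b*mu) / (2*b*(1 + lam*b*var)).
    """
    return max(0.0, (a - c - b * mu) / (2.0 * b * (1.0 + lam * b * var)))

# One learning round: update beliefs from observed rival supplies,
# estimate the variance by its posterior mean, then choose output.
mu_n, kappa_n, alpha_n, beta_n = nig_update(4.0, 1.0, 2.0, 1.0, [3.0, 5.0])
var_hat = beta_n / (alpha_n - 1.0)  # posterior mean of the variance
q_star = risk_averse_best_response(10.0, 1.0, 2.0, 0.5, mu_n, var_hat)
```

With `lam = 0` the rule collapses to the standard Cournot best response to the estimated mean supply; a larger risk weight or a noisier posterior shrinks the chosen output toward zero, which is the qualitative effect of risk aversion the abstract describes.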

Bibliographic Details
Main Author: Julide Yazar
Format: Article
Language: English
Published: MDPI AG, 2024-11-01
Series: Games
Subjects: Bayesian learning; Cournot model; risk-averse learning; fictitious play; normal-inverse gamma prior
Online Access: https://www.mdpi.com/2073-4336/15/6/40
ISSN: 2073-4336
DOI: 10.3390/g15060040
Author Affiliation: Department of Economics, Ohio Wesleyan University, Delaware, OH 43015, USA