Connection between prediction markets and stochastic mirror descent

My Ph.D. research topic is machine learning markets: applying market mechanisms to machine learning in order to drive efficient, large-scale learning tasks. For example, we have used a prediction market mechanism for belief aggregation. Each agent in the prediction market is treated as a machine learning model (a probabilistic classifier). Each agent, endowed with its own wealth, maximizes its expected utility to decide how much of each good in the market to purchase (the goods correspond to the classes a data point may belong to). After the betting, an agent's purchases pay out if the corresponding outcome turns out to be correct. The final market equilibrium price captures the consensus belief over the outcome space across all participating agents.
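To make the betting step concrete, here is a minimal sketch of a single agent's decision (my toy illustration, not our actual system; it assumes log/Kelly utility and a complete market whose prices sum to one). For a Kelly bettor the optimal purchase has a closed form: spend a fraction p_j of the wealth on outcome j, i.e. buy s_j = W * p_j / x_j shares. The numerical optimizer below recovers this.

    import numpy as np
    from scipy.optimize import minimize

    # Toy sketch: one Kelly (log-utility) agent with wealth W and belief p
    # chooses how many shares s_j of each outcome-good to buy at price x_j.
    # A share of outcome j pays 1 if outcome j occurs, so final wealth is s_j.
    W = 10.0
    p = np.array([0.7, 0.2, 0.1])   # agent's belief over three outcomes
    x = np.array([0.5, 0.3, 0.2])   # current market prices (sum to 1)

    def neg_expected_log_wealth(s):
        # Expected log wealth under belief p when holding s_j shares of outcome j.
        return -np.dot(p, np.log(s))

    # Budget constraint: the agent spends its whole wealth, sum_j x_j * s_j = W.
    budget = {"type": "eq", "fun": lambda s: np.dot(x, s) - W}
    res = minimize(neg_expected_log_wealth, x0=np.full(3, W),
                   bounds=[(1e-9, None)] * 3, constraints=[budget])

    print("numerical optimum :", res.x)
    print("closed form W*p/x :", W * p / x)   # classic Kelly allocation

At this optimum the expected log wealth equals log W + KL(p || x), which already hints at the KL divergence appearing in the discussion below.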

An interesting NIPS 2012 paper by Rafael Frongillo et al. details the connection between prediction markets and stochastic mirror descent (SMD): the market price update in a prediction market is in fact a stochastic mirror descent step. The gradient of the objective function F(x; d) with respect to x is -d(C, x), where d is the demand function and C is the cost function, and the Bregman divergence term uses the conjugate dual of the cost function C as the regularization function.
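Schematically (my paraphrase of the generic SMD step rather than the paper's exact notation; η is a step size, d_t the demand of the trader arriving at round t, and D_{C*} the Bregman divergence induced by the conjugate cost C*), the price update reads

    x_{t+1} = argmin_x { η <∇_x F(x_t; d_t), x> + D_{C*}(x, x_t) }
            = argmin_x { -η <d_t(C, x_t), x> + D_{C*}(x, x_t) },

so each trade moves the price in the direction of the demand, with the conjugate cost acting as the regularizer that keeps the step bounded.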

  1. From the stochastic online optimisation perspective, the market price x is updated by minimizing a potential objective function F(x; d). For example, if the agent is a Kelly bettor, F(x; d) = W * KL(p || x), where W is the wealth of the agent and p is its belief distribution over the outcome space. In each update, the market tries to match the price x to the agent's belief distribution subject to a specific regularization term (the Bregman divergence term). This regularization prevents the market price from jumping all the way to the agent's belief.
  2. The part I find most interesting is the relationship -d(C, x) = ∇_x F(x; d).
    The form of the demand function determines the form of the potential objective F: Kelly betting leads to the KL divergence, and isoelastic utility leads to the Rényi divergence. A toy simulation of the resulting price dynamics is sketched after this list.
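
As a sanity check on this picture, the sketch below (my toy code, not taken from the paper) runs the KL-regularized case: an LMSR-style market maker whose conjugate cost C* is a scaled negative entropy, so the SMD step on F(x; d) = W * KL(p || x) becomes an exponentiated-gradient update on the probability simplex. Letting randomly chosen Kelly bettors trade repeatedly drives the price toward a wealth-weighted consensus of their beliefs.

    import numpy as np

    def smd_price_update(x, p, W, eta=0.05, b=1.0):
        # One SMD step on F(x; d) = W * KL(p || x).  The regularizer is the
        # (scaled) negative entropy b * sum_j x_j log x_j, i.e. the conjugate
        # dual of an LMSR-style cost function, so the Bregman divergence term
        # is a KL divergence and the step is an exponentiated-gradient update.
        grad = -W * p / x                      # gradient of W * KL(p || x) w.r.t. x
        x_new = x * np.exp(-eta * grad / b)    # mirror step in the dual (log) space
        return x_new / x_new.sum()             # normalize back onto the simplex

    # Toy market: three Kelly bettors with different wealth and beliefs trade
    # repeatedly; the price drifts toward the wealth-weighted average belief.
    rng = np.random.default_rng(0)
    beliefs = np.array([[0.9, 0.1], [0.6, 0.4], [0.3, 0.7]])
    wealth = np.array([1.0, 2.0, 1.0])

    x = np.array([0.5, 0.5])                   # initial market price
    for t in range(5000):
        i = rng.integers(len(wealth))          # a random agent arrives and trades
        x = smd_price_update(x, beliefs[i], wealth[i])

    print("final market price       :", x)
    print("wealth-weighted consensus:", wealth @ beliefs / wealth.sum())

The fixed point here is the minimizer of the wealth-weighted sum of KL divergences, which for KL(p || x) is exactly the wealth-weighted average of the agents' beliefs, so the two printed vectors should roughly agree.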


