Graduate Thesis Or Dissertation
 

Reinforcement learning-based off-equilibrium incentives to approximate the VCG mechanism

Publicly Deposited

https://ir.library.oregonstate.edu/concern/graduate_thesis_or_dissertations/q237hw19t

Descriptions

Abstract
  • Auctions are used to solve resource allocation problems among many agents and many items in real-world settings. Unfortunately, in most cases selfish agents can manipulate the system in their own interest at the expense of social welfare. Such manipulation can be prevented with the Vickrey-Clarke-Groves (VCG) mechanism, which guarantees complete truthfulness from the agents and therefore preserves optimal social welfare. However, the VCG mechanism is computationally expensive, mainly due to the search for the optimal allocation of items (the "Winner Determination Problem"). In this work, we propose the use of off-equilibrium incentives to approximate the VCG mechanism, where the agents compute those incentives through reinforcement learning with "difference rewards". In each round of reinforcement learning the agents: (i) declare their preferences in terms of allocation; (ii) compute their reward using the difference reward; and (iii) update their Q-tables, moving toward system efficiency. We demonstrate theoretically the equivalence of the off-equilibrium incentives and the VCG mechanism, and show empirically that this approximation of the VCG mechanism leads to desirable outcomes in a congestion game.
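The three-step learning round described in the abstract can be sketched in a toy congestion game. This is a hedged illustration, not the thesis's actual experimental setup: the number of agents, the capacity-based welfare function, and all learning parameters are assumptions made here for demonstration. The key idea shown is the difference reward D_i = G(z) - G(z_{-i}), i.e. the global welfare with agent i's action versus without it.

```python
import random
from collections import Counter

# Illustrative sketch (parameters and welfare function are assumptions):
# stateless agents repeatedly pick one of K congestible resources and
# learn from the difference reward D_i = G(z) - G(z_{-i}).
N_AGENTS, K = 20, 3
EPS, ALPHA = 0.1, 0.2          # exploration rate, learning rate
CAPACITY = 6                   # assumed per-resource capacity

def welfare(counts):
    # Global welfare G: each resource yields value for at most
    # CAPACITY users; extra users cause congestion and add nothing.
    return sum(min(c, CAPACITY) for c in counts.values())

# One Q-row per agent (stateless game, so a single "state").
q = [[0.0] * K for _ in range(N_AGENTS)]

random.seed(0)
for episode in range(2000):
    # (i) each agent declares its preferred resource (epsilon-greedy)
    acts = [random.randrange(K) if random.random() < EPS
            else max(range(K), key=lambda a: q[i][a])
            for i in range(N_AGENTS)]
    counts = Counter(acts)
    g = welfare(counts)
    for i, a in enumerate(acts):
        # (ii) difference reward: global welfare minus the
        # counterfactual welfare with agent i removed
        counts[a] -= 1
        d = g - welfare(counts)
        counts[a] += 1
        # (iii) Q-update moves the agent toward system efficiency
        q[i][a] += ALPHA * (d - q[i][a])

print("final welfare:", welfare(Counter(acts)), "of max", K * CAPACITY)
```

Because the difference reward credits an agent only for welfare it actually adds, crowding a saturated resource earns nothing, and the agents spread out toward a socially efficient allocation without any centralized winner-determination search.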

Relationships

Parents:

This work has no parents.

In Collection:

Articles