This project will develop techniques for iteratively designing mechanisms that allocate scarce resources in a repeated setting. The focus will be on ensuring multi-round incentive compatibility.
- machine learning
- data science
- mechanism design
Increasingly, meaningful economic interactions happen through electronic channels where the interaction is recorded, and that data is either sold to a third party or later used to tailor the system to the platform owner's objective. However, it has become increasingly apparent that users of these systems are not fully informed about how the data they generate will be used, nor are they happy with the tradeoff they have made. This has led both to platform disengagement and to software tools that obfuscate user data (such as browser plugins that visit web pages in the background in order to introduce noise).
In this project, we are interested in developing techniques in the field of mechanism design (the study of designing systems for allocating scarce resources to rational economic agents) that provide strong algorithmic guarantees: users should find it optimal to behave truthfully (i.e., not obfuscate the data they generate) and should believe that the use of the data is fair (i.e., that the mechanism designer uses it ethically).
This will involve combining techniques and ideas from algorithmic mechanism design, machine learning, and economics to produce algorithms with three properties. First, the participant in the mechanism, who generates the data, must find it in their best interest to report truthful data. Second, the mechanism must learn and adapt from the generated data in order to improve its outcomes. Third, the use of the data must satisfy algorithmic notions of fairness. All three properties are necessary for a fully functioning system: if the first or third fails, users may generate bad data that degrades the mechanism's long-term performance; if the second fails, the quality of the data is irrelevant.
We expect these techniques to be particularly applicable and powerful in settings with rapidly repeated interactions, such as online auctions for cloud resources or ad space. In many of these repeated settings, a mechanism that adapts to its environment can achieve much better outcomes (in the sense of either revenue maximization or social welfare maximization). With a mechanism design strategy that guarantees the user finds it optimal to generate truthful information, we can use the earlier rounds to optimize for later rounds.
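To make the tension concrete, the following minimal Python sketch pairs a standard sealed-bid second-price auction (where, with a fixed reserve, truthful bidding is a dominant strategy in any single round) with a naive rule that sets later-round reserves from earlier-round bids. The function names and the median-based reserve rule are illustrative assumptions, not part of this proposal's method; the point is that the naive adaptive rule is exactly the kind of learning step that can break multi-round incentive compatibility, since a bidder's report today influences the price she faces tomorrow.

```python
def second_price_auction(bids, reserve=0.0):
    """Run one round of a sealed-bid second-price auction.

    Returns (winner_index, price), or (None, None) if no bid meets the
    reserve. With a fixed reserve, truthful bidding is a dominant strategy
    in a single round: the winner's price never depends on her own bid.
    """
    eligible = [(b, i) for i, b in enumerate(bids) if b >= reserve]
    if not eligible:
        return None, None
    eligible.sort(reverse=True)
    _, winner = eligible[0]
    # Winner pays the larger of the reserve and the second-highest eligible bid.
    price = eligible[1][0] if len(eligible) > 1 else reserve
    return winner, max(price, reserve)


def run_repeated_auction(valuations_per_round, learn_after=5):
    """Naive adaptive mechanism (illustrative): after `learn_after` rounds,
    set the reserve to the median of all bids observed so far.

    This is the tension the project studies: per-round truthfulness no
    longer implies multi-round truthfulness, because a bid submitted now
    feeds into the reserve price charged in later rounds.
    """
    history, outcomes, reserve = [], [], 0.0
    for t, vals in enumerate(valuations_per_round):
        bids = list(vals)  # assume, naively, that agents bid truthfully
        outcomes.append(second_price_auction(bids, reserve))
        history.extend(bids)
        if t + 1 >= learn_after:
            reserve = sorted(history)[len(history) // 2]
    return outcomes
```

A goal of the proposed techniques would be mechanisms where, unlike in this sketch, bidding truthfully remains optimal even though the mechanism learns from the bids.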
In terms of scholarly output, we intend to publish papers in top conference proceedings and to submit at least one article to a top journal of interest to researchers in business schools. Additionally, we intend to submit a grant proposal to the NSF Robust Intelligence program.
The funds will support PhD students in both computer science and systems engineering (in SEAS) and in economics. The students will work to develop and test algorithms with provable guarantees that the multi-round mechanism elicits truthful data while learning from that data.