Summer School on Game Theory and Social Choice
June 21 - June 24 and July 9 - July 13, 2021

Keynote Information


Fair Allocation of Indivisible Goods


Kurt Mehlhorn

Max-Planck-Institut für Informatik

Talk details: Slides Recording

Abstract:

We want to allocate a set of goods to a set of agents; think of splitting an inheritance. Each agent has a valuation function over sets of goods. In the simplest case, the agents assign values to the goods and the value of a set of goods is the sum of the values of the goods in the set. We want the allocation to be fair. We introduce important notions of fairness, in particular envy-freeness (up to any good) and Nash social welfare, and discuss the existence of fair allocations and the complexity of finding or approximating one.
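To make the simplest setting concrete, here is a minimal Python sketch (illustrative only, not from the talk) that checks envy-freeness up to any good (EFX) and computes the Nash social welfare of an allocation under additive valuations; the numbers are made up.

```python
from math import prod

# Additive valuations: value[i][g] is agent i's value for good g.
value = [[8, 4, 2, 1],
         [3, 6, 5, 4]]
# A candidate allocation: bundle[i] is the set of goods given to agent i.
bundle = [{0, 3}, {1, 2}]

def v(i, goods):
    """Agent i's additive value for a set of goods."""
    return sum(value[i][g] for g in goods)

def is_efx(bundle):
    """EFX: no agent envies another agent's bundle after removing
    any single good from that bundle."""
    n = len(bundle)
    return all(v(i, bundle[i]) >= v(i, bundle[j] - {g})
               for i in range(n) for j in range(n) if i != j
               for g in bundle[j])

def nash_welfare(bundle):
    """Nash social welfare: the product of the agents' utilities."""
    return prod(v(i, bundle[i]) for i in range(len(bundle)))

print(is_efx(bundle), nash_welfare(bundle))   # True 99
```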

The talk is based on joint work (AAAI 22, EC 21, EC 20, JACM 22, Sicomp 21) with Hannaneh Akrami, Bhaskar Ray Chaudhury, Jugal Garg, Martin Hoefer, Telikepalli Kavitha, Ruta Mehta, Pranabendu Misra, Marco Schmalhofer, Alkmini Sgouritsa, Golnoosh Shahkarami, Giovanna Varricchio, Quentin Vermande and Ernest van Wijland.

Bio:

Kurt Mehlhorn is Director Emeritus of the MPI for Informatics and Senior Professor of Computer Science at Saarland University. He headed the algorithms and complexity group at the MPI for Informatics. He co-authored some 300 publications in the field, published six books, and is one of the people behind the LEDA software library. Recently, he produced a series of video lectures "Ideas and Concepts of Computer Science" (in German) for a general audience. He supervised more than 80 students, many of whom now hold faculty positions. He has received several prizes (Leibniz Award, EATCS Award, Zuse Medal, ACM Paris Kanellakis Theory and Practice Award, Erasmus Medal of the Academia Europaea) for his work. He holds Honorary Doctorate Degrees from Magdeburg, Waterloo, Aarhus, Gothenburg and Patras universities and is an ACM Fellow. He is a member of the German Academy of Sciences Leopoldina, Academia Europaea, the German Academy of Science and Engineering acatech, the US Academy of Engineering, the Indian Academy of Engineering, and the US Academy of Science. From 2002 to 2008, he was vice president of the Max Planck Society. He is a co-founder of Algorithmic Solutions Software GmbH. He currently serves on the ERC Scientific Council, the Research Advisory Board of Tata Consultancy Services, and the advisory board of Carnegie Mellon University in Qatar.


Fair Division with Money and Prices


Hervé Moulin

University of Glasgow

Talk details: Slides Recording

Abstract:

The common practice of bypassing the indivisibility of objects with cash compensations has received little attention in the fair division literature, with the single exception of the assignment problem. Under general combinatorial utilities we propose a new division rule, dubbed Sell & Buy, where individual messages are cognitively feasible and privacy preserving. It collects a large share of the efficient surplus between completely uncoordinated agents playing safely.

Sell & Buy with two agents starts with the agents bidding for the role of Seller or Buyer: the lowest bid defines the Seller, who then posts a price vector such that the total value of all the objects equals the winning bid; finally, the Buyer chooses what to buy from the Seller at those prices, and the remaining objects, if any, go to the Seller.
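A minimal simulation of this two-agent protocol, assuming an additive Buyer and fixed, exogenous bids and prices (the actual rule covers general combinatorial utilities and strategic play); all names and numbers are illustrative.

```python
def sell_and_buy(bids, prices, buyer_values):
    """bids: {agent: bid}. The lower bidder becomes the Seller; `prices`
    is the Seller's posted price vector, which must sum to the winning
    (higher) bid. The Buyer, assumed additive here, buys every object
    priced below her value; unsold objects stay with the Seller."""
    seller, buyer = sorted(bids, key=bids.get)      # ascending by bid
    assert abs(sum(prices.values()) - bids[buyer]) < 1e-9, \
        "prices must sum to the winning bid"
    bought = {g for g, p in prices.items() if buyer_values[g] > p}
    payment = sum(prices[g] for g in bought)        # paid to the Seller
    kept = set(prices) - bought
    return {"seller": seller, "buyer": buyer,
            "bought": bought, "kept": kept, "payment": payment}

bids = {"Alice": 10.0, "Bob": 12.0}                 # Alice sells, Bob buys
prices = {"car": 8.0, "piano": 4.0}                 # sums to Bob's bid
bob_values = {"car": 9.0, "piano": 3.0}
print(sell_and_buy(bids, prices, bob_values))       # Bob buys only the car
```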

The rule offers strong guarantees (worst-case utilities) that reward (resp. penalise) subadditive (resp. superadditive) utilities relative to the "fair share" benchmark, the value of the goods bundled together.

Decentralised agents playing the S&B rule safely secure a large share of the efficient surplus and capture most of it in several simple cases: when objects are identical, when utilities are additive, and when they are subadditive.

Under fully general utilities, exact efficiency is achieved when one agent's willingness to pay is dominant for some of the goods and the reverse holds for the remaining goods. The rule's performance in numerical experiments confirms the theoretical results.

Bio:

Hervé Moulin graduated in 1971 from the Ecole Normale Superieure and received his Ph.D. in mathematics in 1975 at the University of Paris. After teaching at the University Paris-Dauphine and at the Virginia Polytechnic Institute, Duke, and Rice universities in the US, he is currently the D.J. Robertson Chair in Economics at the University of Glasgow. A Fellow of the Econometric Society since 1983, of the Royal Society of Edinburgh since 2015, and of the British Academy since 2018, he is a past President of the Game Theory Society and of the Social Choice and Welfare Society. He has written five books and over 100 peer-reviewed articles. He is currently the Editor-in-Chief of Games and Economic Behavior.


Axiomatization of Meaningful Solutions: Connecting Cooperative Game Theory to Social Network Analysis


Shang-Hua Teng

University of Southern California

Talk details: Slides Recording

Abstract:

In data and network analysis, it is desirable that effective solution concepts are both mathematically meaningful and algorithmically scalable. While algorithm analysis and complexity theory have been greatly advanced over the past half-century, characterizing meaningful solutions has been a challenging task in the age of Big Data.

In this talk, I will discuss some recent studies connecting cooperative game theory to social network analysis. I will use social influence as our illustrative example. Mathematically, social influence is governed by dynamic influence processes over static network structures. Thus, most traditional graph-theoretical concepts, such as PageRank, betweenness, and clusterability, based solely on the static structures, may not sufficiently capture social-influence dynamics. We will examine an axiomatic characterization which captures the essence of using the Shapley value as a measure of social-influence centrality. The Bayesian property of Shapley values also led to the following fundamental characterization: the space of complex stochastic network-influence processes has a simple graph-theoretical basis, in that every stochastic-cascading influence profile can be written as a linear combination of breadth-first-search-based broadcast propagations over layered graphs. More broadly, axiomatic characterization helps to provide a comprehensive and comparative framework for evaluating different solution concepts for understanding data.
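For readers new to the Shapley value: it can be estimated by averaging marginal contributions over random orderings of the players. The sketch below uses a toy coverage-style value function rather than the talk's influence model; all names are illustrative.

```python
import random

def shapley_monte_carlo(players, value, samples=10_000, seed=0):
    """Estimate Shapley values: average each player's marginal
    contribution over uniformly random orderings of the players."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    order = list(players)
    for _ in range(samples):
        rng.shuffle(order)
        coalition, prev = set(), value(set())
        for p in order:
            coalition.add(p)
            cur = value(coalition)
            phi[p] += cur - prev
            prev = cur
    return {p: total / samples for p, total in phi.items()}

# Toy value function: how many nodes a coalition reaches in a tiny graph.
reach = {"a": {"a", "b"}, "b": {"b"}, "c": {"b", "c", "d"}}
value = lambda S: len(set().union(*(reach[p] for p in S))) if S else 0
print(shapley_monte_carlo(["a", "b", "c"], value))
```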

Based on joint work with Wei Chen of Microsoft Research Asia Lab and Hanrui Zhang of Duke/CMU.

Bio:

Shang-Hua Teng is a University Professor and Seely G. Mudd Professor of Computer Science and Mathematics at USC. He is a fellow of SIAM, ACM, and the Alfred P. Sloan Foundation, and has twice won the Gödel Prize, first in 2008 for developing smoothed analysis, and then in 2015 for designing the breakthrough scalable Laplacian solver. Citing him as "one of the most original theoretical computer scientists in the world", the Simons Foundation named him a 2014 Simons Investigator to pursue long-term curiosity-driven fundamental research. He also received the 2009 Fulkerson Prize, the 2021 ACM STOC Test of Time Award (for smoothed analysis), the 2022 ACM SIGecom Test of Time Award (for settling the complexity of computing a Nash equilibrium), and the 2011 ACM STOC Best Paper Award (for improving maximum-flow minimum-cut algorithms). In addition, he and collaborators developed the first optimal well-shaped Delaunay mesh generation algorithms for arbitrary three-dimensional domains, settled the Rousseeuw-Hubert regression-depth conjecture in robust statistics, and resolved two long-standing complexity-theoretical questions regarding the Sprague-Grundy theorem in combinatorial game theory. For his industry work with Xerox, NASA, Intel, IBM, Akamai, and Microsoft, he received fifteen patents in areas including compiler optimization, Internet technology, and social networks. Dedicated to teaching his daughter to speak Chinese as the sole Chinese-speaking parent in an otherwise English-speaking family and environment, he has also become fascinated with children's bilingual learning.


Invited Speaker Information


Deep Learning for Solving Large Scale Complex Games


Bo An

Nanyang Technological University

Talk details: Slides Recording

Abstract:

Some important breakthroughs in artificial intelligence in recent years (such as Libratus and security games) can be attributed to the development of large-scale game-solving technology over the past decade. However, this technology cannot handle truly large-scale complex games, so the academic community has begun to use deep learning techniques to solve them. The talk will discuss important progress in this direction in recent years and future directions.

Bio:

Bo An is a President's Council Chair Associate Professor at Nanyang Technological University. He received his Ph.D. degree from UMass Amherst. His current research interests include artificial intelligence, multiagent systems, computational game theory, reinforcement learning, and optimization. He has published over 100 refereed papers at AAMAS, IJCAI, AAAI, ICAPS, KDD, UAI, EC, WWW, ICLR, NeurIPS, and ICML. Dr. An was the recipient of the 2010 IFAAMAS Victor Lesser Distinguished Dissertation Award, an Operational Excellence Award from the Commander, First Coast Guard District of the United States, the 2012 INFORMS Daniel H. Wagner Prize for Excellence in Operations Research Practice, and the 2018 Nanyang Research Award (Young Investigator). His publications won the Best Innovative Application Paper Award at AAMAS'12, the Innovative Application Award at IAAI'16, and the Best Paper Award at DAI'20. He was invited to give an Early Career Spotlight talk at IJCAI'17. He led the team HogRider, which won the 2017 Microsoft Collaborative AI Challenge. He was named to IEEE Intelligent Systems' "AI's 10 to Watch" list for 2018. He is PC Co-Chair of AAMAS'20 and will be General Co-Chair of AAMAS'23. He is a member of the editorial board of JAIR and an Associate Editor of AIJ, JAAMAS, IEEE Intelligent Systems, ACM TAAS, and ACM TIST. He was elected to the board of directors of IFAAMAS, and is a senior member of AAAI and a Distinguished Member of ACM.


Robust (MD + ML) = Learned Mechanisms


Yang Cai

Yale University

Talk details: Slides Recording

Abstract:

Machine learning has developed a variety of tools for learning and representing high-dimensional distributions with structure. Recent years have also seen big advances in designing multi-item mechanisms. Akin to overfitting, however, these mechanisms can be extremely sensitive to the Bayesian prior that they are designed for, which becomes problematic when that prior is only approximately known. We present a modular robustification framework for designing multi-dimensional mechanisms using learned or approximate priors. Our framework disentangles the statistical challenge of estimating a multi-dimensional prior from the task of designing a good mechanism for it, robustifying the performance of the latter against the estimation error of the former. As applications of our framework, we show how to combine structured models such as Markov random fields, Bayesian networks, and topic models with mechanism design, exploiting our framework and the expressive power of these models to reduce the effective dimensionality of the mechanism design problem at hand.

(The talk will be based on works with Johannes Brustle and Costis Daskalakis.)

Bio:

Yang Cai is an Associate Professor of Computer Science and Economics (secondary appointment) at Yale University. He finished his PhD at MIT in Computer Science and received his BSc in EECS at Peking University. His research interests lie in theoretical computer science and its interface with economics, probability, learning and statistics. He has been honored with the Sloan Research Fellowship, the NSF CAREER Award, and the William Dawson Scholarship.


Of the People: Voting with Representative Candidates


Yu Cheng

University of Illinois at Chicago

Talk details: Slides Recording

Abstract:

We study voting rules when candidates and voters are embedded in a common metric space. We consider the case when candidates are representative of the population, in the sense that they are drawn i.i.d. from the population of the voters, and analyze the expected distortion of different voting rules.

Our first result is that representative candidates improve the social welfare of a population. We restrict attention to the most fundamental social choice setting: two candidates and a majority-rule election. It is known that the candidate selected by the majority rule can be thrice as far from the population as the socially optimal one. We study how much this ratio improves when the candidates are representative, and how the answer depends on the complexity of the underlying metric space.

Our second result is on voting with multiple candidates: we give a clean and tight characterization of positional voting rules that have constant expected distortion. Our characterization result immediately implies constant expected distortion for Borda Count. On the other hand, we obtain super-constant expected distortion for Plurality and Veto. These results contrast with previous results on voting with metric preferences: When the candidates are chosen adversarially, all of the preceding voting rules have distortion linear in the number of candidates or voters.
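Distortion is easy to compute on small instances. The sketch below (illustrative, with adversarially placed rather than representative candidates) measures the social cost of the majority/plurality winner against the optimal candidate on a line metric.

```python
# Distortion = social cost of the rule's winner divided by the minimum
# social cost achievable by any candidate.
voters = [0.0, 0.0, 0.51, 0.51, 0.51]        # positions on a line
candidates = {"A": 0.0, "B": 1.0}

def social_cost(c):
    return sum(abs(v - candidates[c]) for v in voters)

# Majority / plurality: each voter supports the nearest candidate.
votes = {c: 0 for c in candidates}
for v in voters:
    votes[min(candidates, key=lambda c: abs(v - candidates[c]))] += 1

winner = max(votes, key=votes.get)
optimal = min(candidates, key=social_cost)
print(winner, optimal, social_cost(winner) / social_cost(optimal))
# B wins with 3 votes, but A is optimal: distortion ~2.27, approaching
# 3 as the majority moves toward the midpoint between the candidates.
```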

The talk is based on joint works with Shaddin Dughmi and David Kempe.

Bio:

Yu Cheng is currently an assistant professor in the Department of Mathematics at the University of Illinois at Chicago. He will join the Department of Computer Science at Brown University in Fall 2022. He obtained his Ph.D. in Computer Science from the University of Southern California. He was a postdoc at Duke University and a visiting member at the Institute for Advanced Study. His main research interests include machine learning, optimization, and game theory.


Stability of Decentralized Queueing Networks: Beyond Bipartite Cases


Hu Fu

Shanghai University of Finance and Economics

Talk details: Slides Recording

Abstract:

An important topic in algorithmic game theory is the study of the performance of systems operated by decentralized, self-interested agents, in comparison with systems under centralized control. One such model, of queueing systems, was proposed by Gaitonde and Tardos (EC 20, 21), who studied the stability of the system when there is a centralized authority, when queues use no-regret strategies, and when queues strategize "patiently". Unlike games studied in the classical Price of Anarchy literature, in these queueing systems the strategy and outcome of one round impact the utility of future rounds.

In this talk, we generalize Gaitonde and Tardos's model from complete bipartite graphs to general bipartite graphs and DAGs. In general bipartite graphs, we obtain performance analysis comparable to that of Gaitonde and Tardos, although ours reveals a connection to the dual constraints in the centralized stability conditions and suggests that stability conditions in incomplete bipartite graphs are generally easier to satisfy. For DAGs, we show that a straightforward generalization of the utility function considered by Gaitonde and Tardos may lead to infinite gaps between centralized and decentralized systems when the queues use no-regret strategies. We provide both a novel utility for queues and a service priority rule, which we show restore the stability conditions when queues use no-regret strategies. Lastly, for incomplete bipartite graphs, we obtain stability conditions when queues strategize patiently, giving a bound slightly worse than that given by Gaitonde and Tardos (EC 21) for complete bipartite graphs.

Bio:

Hu Fu is an associate professor at the Institute for Theoretical Computer Science, Shanghai University of Finance and Economics. He obtained his PhD in Computer Science from Cornell University, before working as a postdoc at Microsoft Research New England and Caltech, and as an assistant professor at UBC. His research interests are in algorithmic game theory, mechanism design, and other broadly related algorithmic problems.


Strategyproof Mechanisms for Multiple Facility Location Problems


Dimitris Fotakis

National Technical University of Athens

Talk details: Slides Recording

Abstract:

We consider k-Facility Location games, where n strategic agents report their locations on the real line, and a mechanism maps them to k ≥ 2 facilities. Each agent seeks to minimize her distance to the nearest facility. We are interested in (deterministic or randomized) strategyproof mechanisms without payments that achieve a reasonable approximation ratio to the optimal social cost of the agents. We start with the main idea behind inapproximability of k-Facility Location by deterministic mechanisms and a natural randomized mechanism with an approximation ratio of n. To circumvent inapproximability by deterministic mechanisms, we consider perturbation stable instances. An instance of k-Facility Location on the line is γ-(perturbation) stable, if the optimal agent clustering is not affected by moving any subset of consecutive agent locations closer to each other by a factor at most γ. We present deterministic and randomized strategyproof mechanisms for stable instances of k-Facility Location that achieve near optimal approximation guarantees. On the negative side, we extend inapproximability of k-Facility Location by deterministic mechanisms to (√2−δ)-stable instances, for any k ≥ 3 and any δ > 0.
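As background for the k ≥ 2 results above, the single-facility special case already illustrates strategyproofness without money: placing the facility at a median report is strategyproof and minimizes the social cost. A minimal sketch (standard textbook material, not the talk's mechanisms):

```python
def median_mechanism(reports):
    """Single facility on a line: place it at a median report.
    Strategyproof: an agent can only move the median away from
    herself by misreporting past it, which cannot help her."""
    xs = sorted(reports)
    return xs[(len(xs) - 1) // 2]            # a leftmost median

def social_cost(facility, locations):
    return sum(abs(x - facility) for x in locations)

locations = [0.1, 0.4, 0.5, 0.9, 2.0]
f = median_mechanism(locations)
print(f, social_cost(f, locations))          # 0.5 minimizes total distance
```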

Bio:

Dimitris Fotakis (http://www.softlab.ntua.gr/~fotakis/) is an Associate Professor at the School of Electrical and Computer Engineering, National Technical University of Athens. He has been with NTU Athens since Feb. 2009. He graduated from the University of Patras (BEng 1994, PhD 1999) and previously held a postdoc position with the Max-Planck-Institut für Informatik (Sept. 2001 – Sept. 2003) and tenured or tenure-track faculty positions with the Aristotle University of Thessaloniki (Dec. 2003 – Oct. 2004) and the University of the Aegean (Oct. 2004 – Jan. 2009). He works on algorithmic game theory, with emphasis on algorithmic aspects of congestion games and approximate mechanism design without money, and on the design and analysis of approximation and online algorithms, with emphasis on facility location problems. His research results include asymptotically optimal algorithms for online and incremental facility location, a potential function for generalizations of congestion games with linear delays, and best possible approximate strategyproof mechanisms for multiple facility location games.


Wagering Mechanisms: From Fair Division to Forecaster Selection


Rupert Freeman

University of Virginia

Talk details: Slides Recording

Abstract:

Predicting the outcome of future events is a fundamental problem in fields as varied as computer science, finance, political science and others. To this end, the Wisdom of Crowds principle says that an aggregate of many crowdsourced predictions can significantly outperform even expert individuals or models. In this talk, I will focus on the problem of accurately eliciting these predictions using wagering mechanisms, where participants provide both a probabilistic prediction of the event in question, and a monetary wager that they are prepared to stake.

I will discuss recent progress on the design and analysis of wagering mechanisms. In particular, I will focus on some surprising applications of wagering mechanisms. First, they can be used to select a winner in a winner-takes-all forecasting competition, which turns out to be quite different to the standard setting where we can reward everyone. Similar techniques additionally allow us to design incentive compatible, or truthful, algorithms for the fundamental machine learning problem of prediction from expert advice. Existing algorithms for this problem may not induce truthful reports when the experts value the influence they have on the algorithm’s prediction. Finally, I'll show that any wagering mechanism defines a corresponding allocation mechanism for dividing scarce resources among competing agents, a seemingly unrelated problem. This correspondence immediately leads to advances in both areas.
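As a taste of the design space, here is a sketch of one simple budget-balanced wagering scheme, in which each agent's stake is adjusted by how her proper score compares with the stake-weighted average score. This is a standard construction stated here for illustration, not necessarily one of the mechanisms in the talk; names and numbers are made up.

```python
def brier(p, outcome):
    """Quadratic (Brier) proper score, normalized to [0, 1]."""
    return 1 - (p - outcome) ** 2

def weighted_score_wagering(preds, wagers, outcome):
    """Each agent reports a probability and stakes a wager. Payouts shift
    stakes from low scorers to high scorers and always sum to the total
    amount wagered (budget balance)."""
    total = sum(wagers.values())
    avg = sum(wagers[i] * brier(preds[i], outcome) for i in preds) / total
    return {i: wagers[i] * (1 + brier(preds[i], outcome) - avg)
            for i in preds}

preds = {"ann": 0.9, "ben": 0.4}             # probability the event occurs
wagers = {"ann": 10.0, "ben": 30.0}
print(weighted_score_wagering(preds, wagers, outcome=1))
# {'ann': 12.625, 'ben': 27.375} -- payouts sum to the 40.0 staked
```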

Bio:

Rupert Freeman is an Assistant Professor of Business Administration in the Quantitative Analysis area at the University of Virginia’s Darden School of Business. He received his Ph.D. from Duke University in 2018 and spent two years as a postdoctoral researcher at Microsoft Research New York City. His research focuses on the intersection of artificial intelligence, operations research, and economics, particularly in topics such as resource allocation, voting, and information elicitation. He is the recipient of a Facebook Ph.D. Fellowship and a Duke Computer Science outstanding dissertation award.


Finding fair and efficient allocations through competitive equilibrium


Jugal Garg

University of Illinois at Urbana-Champaign

Talk details: Slides Recording

Abstract:

Fair division is an age-old problem of allocating a set of items among several agents in a fair and efficient manner. It arises naturally in a wide range of real-life settings, from interpersonal to international conflicts. Competitive equilibrium is a central concept in economics to study markets, and due to its remarkable fairness and efficiency properties, it is also one of the most preferred mechanisms for fair division problems even though there may be no money involved.

My talk will cover some recent exciting computational developments in this area, primarily focusing on the discrete setting where each item can be assigned to exactly one agent.

Bio:

Jugal Garg is an Assistant Professor in Industrial and Enterprise Systems Engineering and an Affiliate Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Prior to that, he was a postdoctoral fellow at the Max Planck Institute for Informatics, Germany, and Georgia Tech, USA. He received his Ph.D. from IIT-Bombay in 2012. Jugal is broadly interested in computational aspects of economics and game theory, design and analysis of algorithms, and mathematical programming. Currently, he is working on designing fast algorithms for computing competitive equilibria and their applications to fair division problems. For his research, he has received several awards, including the NSF CRII Award, the NSF CAREER Award, the Exemplary Theory Paper Award at ACM EC 2020, the 2021 INFORMS Koopman Prize, and the 2022 Dean's Award for Excellence in Research.


Sample Complexity of Auction Design and Optimal Stopping


Zhiyi Huang

University of Hong Kong

Talk details: Slides Recording

Abstract:

This talk surveys the research on learning optimal auctions from data, and more recently learning optimal algorithms for problems such as the prophet inequality and Pandora's problem. In particular, we will introduce a theory of generalization for learning problems on product distributions, complementing the existing learning theories in the sense that it does not rely on any complexity measures of the hypothesis classes. The theory provides two general sample complexity bounds: (1) Õ(nk/ε²) samples are sufficient and necessary for learning an optimal hypothesis in any problem on an n-dimensional product distribution whose marginals have finite supports of sizes at most k; (2) Õ(n/ε²) samples are sufficient and necessary for any problem on n-dimensional product distributions if it satisfies a notion of strong monotonicity from the algorithmic game theory literature. As applications, we obtain near-optimal sample complexity for optimal auctions, the prophet inequality, and Pandora's problem.
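A toy instance of learning an auction from data: empirical revenue maximization over samples to set a single-bidder reserve price. This is far simpler than the product-distribution settings the bounds above cover, but it shows the sample-based flavour; the numbers are made up.

```python
def empirical_reserve(samples):
    """Empirical revenue maximization: pick the reserve r maximizing
    r * (#samples with value >= r). It suffices to try the sampled
    values themselves as candidate reserves."""
    best_r, best_rev = 0.0, 0.0
    for r in sorted(set(samples)):
        rev = r * sum(1 for v in samples if v >= r)
        if rev > best_rev:
            best_r, best_rev = r, rev
    return best_r, best_rev

values = [0.2, 0.5, 0.5, 0.8, 1.0, 3.0]      # observed bidder values
print(empirical_reserve(values))             # (0.5, 2.5): price vs. sale trade-off
```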

Bio:

Zhiyi is an associate professor of Computer Science at the University of Hong Kong. He works broadly on theoretical computer science. Before joining HKU, Zhiyi was a postdoc at Stanford University from 2013 to 2014, working with Tim Roughgarden. He obtained his Ph.D. from the University of Pennsylvania under Sampath Kannan and Aaron Roth in 2013. During grad school, Zhiyi interned at Microsoft Research Redmond under Nikhil R. Devanur in the summers of 2011 and 2012. Before that, he got a bachelor's degree from the first "Yao Class" under Andrew Yao at Tsinghua University in 2008. Zhiyi was the recipient of the Best Paper Awards of FOCS 2020 and SPAA 2015, an Excellent Young Scientists Fund (HK & Macau) by NSFC, an Early Career Award by RGC Hong Kong, a Morris and Dorothy Rubinoff Dissertation Award, and a Simons Graduate Fellowship in Theoretical Computer Science.


Cake cutting under conflicting constraints


Ayumi Igarashi

National Institute of Informatics (NII), Japan

Talk details: Slides Recording

Abstract:

The cake-cutting problem is a fundamental problem of fairly distributing a divisible resource among interested agents. In this talk, I will introduce the problem of dividing a multi-layered cake under non-overlapping constraints. This problem, recently proposed by Hosseini et al. (2020), captures several natural scenarios such as the allocation of multiple facilities over time where each agent can utilize at most one facility simultaneously, and the allocation of tasks over time where each agent can perform at most one task simultaneously. I will discuss some challenges that arise when imposing non-overlapping constraints on the desirable outcomes, highlighting that the standard existence proof for envy-freeness breaks down in the constrained setting. I will then proceed to discuss how we have resolved this using a general fixed point theorem.

The talk will be based on IJCAI-2020 and WINE-2021 papers.

Bio:

Ayumi Igarashi is an assistant professor in the Principles of Informatics Research Division at the National Institute of Informatics (NII), Japan. Previously, she was a JSPS postdoctoral fellow at the University of Tokyo, Japan, and completed her PhD in the Department of Computer Science at the University of Oxford, UK. Her research focuses on algorithmic game theory and computational social choice. She has co-presented a tutorial on fair division at IJCAI 2019, and was selected for the Innovators Under 35 Japan 2021 list by MIT Technology Review.


Optimal DSIC Auctions for Correlated Private Values: Ex-Post vs. Ex-Interim IR


Ron Lavi

Technion-Israel Institute of Technology, University of Bath

Talk details: Slides

Abstract:

We study Dominant-Strategy Incentive-Compatible (DSIC) revenue-maximizing auctions (“optimal auctions”) for a single-item and correlated private values. We give tight bounds on the ratio of the revenue of the optimal Ex-Post Individually Rational (EPIR) auction and the revenue of the optimal Ex-Interim Individually Rational (EIIR) auction. This bound is expressed as a non-decreasing function of the expected social welfare of the underlying distribution. In particular, we show a class of distributions on which this ratio cannot be lower bounded by any positive number. Thus, the restriction to EPIR auctions, which has been the de-facto standard in the computer science literature on auctions with correlated values, may significantly reduce the revenue that can be possibly extracted, as the revenue extracted by an EPIR auction might be an arbitrarily small fraction of the revenue extracted by an EIIR auction.

This is joint work with Ido Feldman.

Bio:

Ron Lavi is an Associate Professor at the University of Bath, UK, on leave from the Technion – Israel Institute of Technology. He has a PhD in computer science from the Hebrew University of Jerusalem, Israel. He previously held postdoc and visiting positions at Caltech, UC Berkeley, Yahoo! Labs, and Microsoft Research. His research is on various aspects of algorithmic game theory, e.g., auction theory, mechanism design, game-theoretic aspects of social networks and cryptocurrencies, algorithmic contract theory, and dynamic pricing.


Incentive Mechanisms for Data: The Peer Prediction Approach

Yang Liu

Yang Liu

University of California, Santa Cruz

Talk details: Slides Recording

Abstract:

Eliciting private information from selfish and strategic agents is challenging when the mechanism designer has no verification power. Without a proper incentive, agents can choose to either misreport (e.g., due to privacy concerns) or to report low-quality data. Peer prediction mechanisms are a family of incentive mechanisms that elicit truthful contributions from agents when (1) no ground-truth verification is available and (2) the agents can be financially incentivized. This talk introduces peer prediction mechanisms. I will start by introducing the setup and background of the problem. Then I will proceed to discuss some existing peer prediction mechanisms. I will also highlight the challenges towards making peer prediction practical.
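The simplest member of this family, output agreement, conveys the flavour: pay an agent when her report matches that of a randomly chosen peer. (Illustrative only; the mechanisms discussed in the talk use more careful scoring to make truthful reporting a strict equilibrium.)

```python
import random

def output_agreement(reports, reward=1.0, seed=0):
    """reports: {agent: signal}. Pay each agent `reward` if and only if
    her report matches that of a uniformly random peer."""
    rng = random.Random(seed)
    agents = list(reports)
    payments = {}
    for a in agents:
        peer = rng.choice([b for b in agents if b != a])
        payments[a] = reward if reports[a] == reports[peer] else 0.0
    return payments

print(output_agreement({"u1": "good", "u2": "good", "u3": "bad"}))
```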

Bio:

Yang Liu is currently an Assistant Professor of Computer Science and Engineering at UC Santa Cruz (2018 - present). He is also an Amazon Visiting Academic. He was previously a postdoctoral fellow at Harvard University (2016 - 2018). He obtained his PhD degree from the Department of EECS, University of Michigan, Ann Arbor in 2015. He is interested in crowdsourcing and algorithmic fairness, both in the context of machine learning. He is a recipient of the NSF CAREER Award. He has been selected to participate in several high-profile projects, including DARPA SCORE, IARPA HFC, and NSF FAI. His work on using machine learning to forecast future security incidents has been successfully commercialized and acquired by FICO. He has won three best paper awards at workshops and conferences.


An Introduction to Cooperative Games


Reshef Meir

Technion-Israel Institute of Technology

Talk details: Slides Recording

Abstract:

Cooperative game theory considers the outcome of various cooperation structures among coalitions of agents, under the existence of enforceable agreements. The main questions deal with how profits of cooperation should be allocated in a fair and/or stable way among participants.

In the tutorial we will cover the basic definitions and theoretical results from the field, with a focus on computational questions, and mention some of the ongoing research directions.

Bio:

Reshef Meir is an Associate Professor in the Department of Industrial Engineering at Technion–Israel Institute of Technology in Haifa, Israel. His main research areas are computational game theory, mechanism design, social choice, and bounded rationality. Meir was named one of AI's Ten to Watch by IEEE Intelligent Systems (2015). In addition, he received the Michael B. Maschler Prize (Game Theory Society), a Rothschild postdoctoral fellowship, the Alon fellowship for young faculty, and an honorable mention for the Victor Lesser Distinguished Dissertation Award by IFAAMAS.


Voting Rules for Participatory Budgeting


Dominik Peters

Université Paris-Dauphine

Talk details: Slides Recording

Abstract:

Many cities now let their residents vote over how a part of the city budget is spent. In a common setup, a long list of indivisible projects is put up to a vote. An overall budget is decided, and each project comes with a fixed cost. Voters can then vote for a number of projects (usually using variants of approval voting). The social choice question is how to decide the set of winning projects, given the votes. In this tutorial, I will describe different voting rules and their axiomatic properties, as well as the problem of computing optimal outcomes. A particular focus will be on rules that provide proportional representation.

A second setup takes a more continuous (divisible) approach. A budget of 100% needs to be divided between different categories of spending; each category can receive any fraction of the budget. Voters indicate what distribution they would prefer (for example by approval voting, ranking the categories, submitting a utility function, or reporting their ideal budget distribution). Based on this information, we need to decide on an aggregate distribution. For the various input formats, I will describe voting rules and ways to analyze them.
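For the discrete setup, the most basic rule is greedy: fund the most-approved projects that still fit the budget. A minimal sketch with made-up projects (the proportional rules in the tutorial are more refined):

```python
def greedy_pb(projects, budget):
    """projects: {name: (cost, approval_count)}. Fund projects in
    decreasing order of approvals while the remaining budget allows."""
    funded, remaining = [], budget
    for name, (cost, approvals) in sorted(
            projects.items(), key=lambda kv: -kv[1][1]):
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded, remaining

projects = {
    "playground": (30_000, 120),
    "bike lanes": (50_000, 95),
    "library":    (40_000, 80),
    "fountain":   (10_000, 60),
}
print(greedy_pb(projects, budget=80_000))
# (['playground', 'bike lanes'], 0): cheaper, less-approved projects lose out
```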

Bio:

Dominik Peters is a CNRS researcher at the lab LAMSADE at Université Paris Dauphine-PSL. He works on computational social choice, and in particular on voting rules, participatory budgeting, impossibility theorems, and fair allocation. He was previously a postdoc with Nisarg Shah at the University of Toronto, and with Ariel D. Procaccia at Harvard University and Carnegie Mellon University, and received his PhD from the University of Oxford with Edith Elkind.


Fair and Efficient Online Allocations


Alexandros Psomas

Purdue University

Talk details: Slides1 Slides2 Recording1 Recording2

Abstract:

A set of resources becomes available over a sequence of rounds and needs to be allocated immediately and irrevocably, among a set of agents with additive preferences. Our goal is to distribute these resources to maximize fairness and efficiency. We will study this problem under a number of different models that specify how the agents' values for the resources are generated (e.g. stochastic, or adversarial), as well as impose limits on the agents' expressiveness (e.g. the algorithm can only elicit ordinal, not cardinal information).

Results covered in this talk can be found in papers that appeared in EC 2018, EC 2020, AAAI 2021, as well as unpublished recent work.

Bio:

Alexandros Psomas is an Assistant Professor of Computer Science at Purdue University. He is interested in theoretical computer science, working on a broad set of problems at the intersection of Computer Science and Economics. Prior to joining Purdue, he was a visiting researcher at Google Research, a Research Fellow at the Simons Institute for the Theory of Computing, and a postdoctoral fellow at Carnegie Mellon University, hosted by Ariel Procaccia. He received his Ph.D. from UC Berkeley in 2017, under the supervision of Christos Papadimitriou.


Simple and Optimal Mechanism in Efficiency and Equality Trade-off


Qi Qi

Renmin University of China

Talk details: Slides

Abstract:

Many large cities have adopted the policy of restricting the number of newly added vehicle licenses to deal with the challenges of traffic and air pollution. An important question is then how to allocate these limited new license quotas among the many demanders while taking both social efficiency and equality into consideration. Inspired by this practical problem, we study the design of simple and optimal mechanisms to trade off efficiency and equality in similar public-goods allocation problems. We first propose a truthful two-group mechanism framework that is general yet simple, and then compute the optimal mechanism within it, maximizing social efficiency while guaranteeing certain levels of equality. Interestingly and surprisingly, under some natural or mild conditions on the players' private values, we show that the Pareto optimal mechanisms within the proposed framework are always the mechanism that first runs an auction and then a lottery (ATL for short). Furthermore, beyond the framework and those conditions, we prove that the ATL mechanism always guarantees at least 3/4 of the problem's optimal objective in every possible case. Moreover, this 3/4 lower bound can be further improved to near-optimal for scarce resources in practice. Extensive experiments verify the ATL mechanism's optimality and robustness.
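A toy sketch of the auction-then-lottery shape is below; the split between the two stages and the uniform-price payment rule are stand-in assumptions for illustration, not the paper's calibrated design.

```python
import random

def auction_then_lottery(bids, quota, auction_share=0.5, seed=0):
    """Allocate `quota` licenses: a fraction by a uniform-price auction
    (winners pay the highest losing bid), the rest by uniform lottery
    among the remaining demanders. auction_share is a stand-in knob."""
    rng = random.Random(seed)
    k_auction = int(quota * auction_share)
    ranked = sorted(bids, key=bids.get, reverse=True)
    auction_winners = ranked[:k_auction]
    price = bids[ranked[k_auction]] if k_auction < len(ranked) else 0.0
    losers = ranked[k_auction:]
    lottery_winners = rng.sample(losers, min(quota - k_auction, len(losers)))
    return auction_winners, price, lottery_winners

bids = {"p1": 90, "p2": 75, "p3": 60, "p4": 40, "p5": 10}
print(auction_then_lottery(bids, quota=2))
# one license auctioned to p1 at price 75, one raffled among the rest
```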

Bio:

Dr. Qi is now a tenured Associate Professor of Gaoling School of Artificial Intelligence, Renmin University of China. She obtained her Ph.D. degree from the Department of Management Science and Engineering, Stanford University under guidance of Professor Yinyu Ye in 2012. Dr. Qi worked at Hong Kong University of Science and Technology before she moved to Renmin University of China. Her publications have appeared in Operations Research, Mathematics of Operations Research, Algorithmica and conferences including STOC, CCC, IJCAI, NeurIPS, WINE, etc. Her research interests lie generally in the areas of game theory and operations research, specifically mechanism and auction design, revenue and social welfare optimization, equilibrium computation and their applications in e-commerce, internet advertising, resource allocation and sharing economy.


Designing Optimal Voting Rules


Nisarg Shah

University of Toronto

Talk details: Slides Recording

Abstract:

A central task in voting is to aggregate the ranked preferences of voters over a set of alternatives (candidates) to select a winning alternative. However, despite centuries of research, the natural question of which voting rule is “best” has remained elusive. A recent approach from computer science offers hope. By proposing a natural quantitative measure of the “efficiency” of a voting rule, called distortion, it allows us to define and seek the most efficient voting rule.

In a series of joint works, we resolve several open questions on identifying the most efficient voting rules. Our results naturally extend to taking partial preference rankings as input and/or selecting multiple winning alternatives as output. Stretching this one step further, we also use this framework to design optimal ballot formats, which elicit minimal preference information from voters to make efficient decisions. Finally, we also initiate the study of a quantitative measure of fairness of a voting rule, called proportional fairness, and identify the fairest voting rule for aggregating ranked preferences.

No prior knowledge of voting theory is required and I will point out many important open questions throughout the talk.

Results covered in the talk:

Utilitarian distortion: Preprint, Preprint
(Briefly) Optimal ballot design via utilitarian distortion: NeurIPS'19, EC'20
Metric distortion: FOCS'20, AAAI'22

Bio:

Nisarg Shah is an assistant professor of computer science at the University of Toronto. He has been recognized as AI's 10 to Watch by IEEE Intelligent Systems in 2020. He is also the winner of the 2016 IFAAMAS Victor Lesser Distinguished Dissertation Award and the 2014-2015 Facebook PhD Fellowship. Shah conducts research at the intersection of computer science and economics, addressing issues of fairness, efficiency, elicitation, and incentives that arise when humans are affected by algorithmic decision-making. His recent work develops theoretical foundations for fairness in fields such as voting, resource allocation, and machine learning. He has co-developed two not-for-profit websites, Spliddit.org and RoboVote.org (temporarily unavailable), which have helped more than 200,000 users make provably fair and optimal decisions in their everyday lives. He earned his PhD in computer science at Carnegie Mellon University and was a postdoctoral fellow at Harvard University.

Email: nisarg@cs.toronto.edu
Twitter: @nsrg_shah


On Existence of Truthful Fair Cake Cutting Mechanisms

Biaoshuai Tao

Biaoshuai Tao

Shanghai Jiao Tong University

Talk details: Slides Recording

Abstract:

We study the fair division problem on divisible heterogeneous resources (the cake cutting problem) with strategic agents, where each agent can manipulate his/her private valuation in order to receive a better allocation. A (direct-revelation) mechanism takes agents' reported valuations as input, and outputs an allocation that satisfies a given fairness requirement. A natural and fundamental open problem, first raised by Chen et al. and subsequently raised by Procaccia, Aziz and Ye, Brânzei and Miltersen, Menon and Larson, Bei et al., etc., is whether there exists a deterministic, truthful and envy-free (or even proportional) cake cutting mechanism. In this paper, we resolve this open problem by proving that there does not exist a deterministic, truthful and proportional cake cutting mechanism, even in the special case where all of the following hold:

1) there are only two agents;

2) each agent's valuation is a piecewise-constant function;

3) each agent is hungry: each agent has a strictly positive value on any part of the cake.

The impossibility result extends to the case where the mechanism is allowed to leave some part of the cake unallocated.

To circumvent this impossibility result, we aim to design mechanisms that possess a certain degree of truthfulness. Motivated by the kind of truthfulness possessed by the classical I-cut-you-choose protocol, we define a weaker notion of truthfulness: risk-averse truthfulness. We show that the well-known moving-knife procedure and the Even-Paz algorithm do not have this property. We propose a mechanism that is risk-averse truthful and envy-free, and a mechanism that is risk-averse truthful and proportional and always outputs allocations with connected pieces.

This paper has been accepted by EC’22.
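For reference, the I-cut-you-choose protocol that motivates risk-averse truthfulness is easy to state in code. A sketch for piecewise-constant valuations on [0, 1], with made-up value densities (illustrative only; the paper's mechanisms are more involved):

```python
def value(segments, a, b):
    """Value of interval [a, b] under a piecewise-constant density,
    segments = [(start, end, height), ...] covering [0, 1]."""
    return sum(h * (min(e, b) - max(s, a))
               for s, e, h in segments if min(e, b) > max(s, a))

def halving_cut(segments):
    """Find x with value([0, x]) equal to half the total (binary search)."""
    target = value(segments, 0.0, 1.0) / 2
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if value(segments, 0.0, mid) < target else (lo, mid)
    return (lo + hi) / 2

def cut_and_choose(cutter, chooser):
    x = halving_cut(cutter)                  # cutter equalizes her two halves
    left = value(chooser, 0.0, x)
    right = value(chooser, x, 1.0)
    # chooser takes the piece she values more; cutter gets the other
    return ((0.0, x), (x, 1.0)) if left < right else ((x, 1.0), (0.0, x))

alice = [(0.0, 0.5, 2.0), (0.5, 1.0, 0.0)]   # Alice only values [0, 0.5]
bob = [(0.0, 1.0, 1.0)]                      # Bob values the cake uniformly
cutter_piece, chooser_piece = cut_and_choose(alice, bob)
print(cutter_piece, chooser_piece)           # (0.0, 0.25) (0.25, 1.0)
```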

Bio:

Biaoshuai Tao has been an assistant professor at the John Hopcroft Center for Computer Science at Shanghai Jiao Tong University since 2020. In 2020, he received his Ph.D. degree in computer science from the University of Michigan, Ann Arbor. His research interests mainly lie in the interdisciplinary area between theoretical computer science and economics, including social network analysis, resource allocation problems, and algorithmic game theory. Before joining the University of Michigan, Biaoshuai was employed as a project officer at Nanyang Technological University in Singapore from 2012 to 2015, and he received his B.S. degree in mathematical science with a minor in computing from Nanyang Technological University in 2012.


Algorithms for Fair Allocation of Indivisible Items


Xiaowei Wu

University of Macau

Talk details: Slides Recording

Abstract:

In this talk I will introduce the basic concepts and classic algorithms for the fair allocation problem on indivisible items, which has been at the center of computational social choice and algorithmic game theory in recent years. I will review the popular fairness notions for this problem and the recent developments regarding the existence and computation of fair allocations subject to (approximations of) these fairness notions, and introduce some open problems.
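One classic algorithm from this line of work is round-robin: agents take turns picking their favourite remaining item, and for additive valuations the resulting allocation is envy-free up to one good (EF1). A minimal sketch with made-up values:

```python
def round_robin(values):
    """values[i][g]: agent i's value for item g (additive valuations).
    Agents pick their favourite remaining item in turns; the result
    is envy-free up to one good (EF1)."""
    n, m = len(values), len(values[0])
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    turn = 0
    while remaining:
        i = turn % n
        best = max(remaining, key=lambda g: values[i][g])
        bundles[i].append(best)
        remaining.remove(best)
        turn += 1
    return bundles

values = [[9, 8, 3, 1],
          [6, 7, 5, 2]]
print(round_robin(values))   # [[0, 2], [1, 3]]
```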

Bio:

Dr. Xiaowei Wu is an Assistant Professor in the Department of Computer and Information Science with the State Key Laboratory of Internet of Things for Smart City (IOTSC) at University of Macau (UM). He received his Ph.D. degree from the University of Hong Kong (HKU) and his B.Eng. degree from University of Science and Technology of China (USTC). His research interests span various topics in online approximation algorithms, algorithmic game theory and data mining. He has published more than 30 papers in top theory, artificial intelligence and data mining conferences and journals including JACM, SICOMP, TALG, STOC, FOCS, SODA, AAAI, IJCAI, WWW and ICDE.


Simple Algorithms for Strong Fairness/Efficiency Guarantees


Yair Zick

University of Massachusetts, Amherst

Talk details: Slides Recording

Abstract:

We present recent results on simple algorithms for the fair allocation of indivisible resources. We advocate for the use of simple algorithmic techniques - i.e. ones that are easy to implement and understand by non-expert stakeholders - in real-world applications.

We focus on two high-impact application domains: assigning course seats to university students, and assigning academic reviewers to papers in large CS conferences.

The mechanisms we propose are rather intuitive, but through either combinatorial preprocessing, or careful analysis, we show that they are able to provide strong fairness guarantees, as well as high social welfare.

Based on joint works with Nawal Benabbou, Mithun Chakraborty, Ayumi Igarashi, Justin Payan and Vignesh Viswanathan.

Bio:

Yair Zick is an assistant professor at the College of Information and Computer Sciences, UMass Amherst. Prior to that, he was an assistant professor at the NUS School of Computing. He obtained his PhD (mathematics) from Nanyang Technological University, Singapore in 2014, and a B.Sc. (mathematics, "Amirim" honors program) from the Hebrew University of Jerusalem. His research interests include computational fair division, computational social choice, algorithmic game theory and algorithmic transparency. He is the recipient of the 2011 AAMAS Best Student Paper award, the 2014 Victor Lesser IFAAMAS Distinguished Dissertation award, the 2016 ACM EC Best Paper award, the 2017 Singapore NRF Fellowship, and the 2021 IJCAI Early Career Spotlight Award.


Myerson's Lemma and the VCG Mechanism in Mechanism Design


Jialin Zhang

Chinese Academy of Sciences

Talk details: Slides Note Recording

Abstract:

In this tutorial, we will first introduce the basic idea of mechanism design, and then focus on how to design a truthful mechanism. We introduce two powerful tools. The first is Myerson's lemma, which applies in single-parameter environments. The second is the VCG mechanism, which applies when our goal is to maximize social welfare.
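A concrete single-item instance of both tools: the second-price (Vickrey) auction is the VCG mechanism for one item, charging the winner the externality she imposes on the others, and Myerson's lemma prescribes the same payment for this monotone allocation rule. A minimal sketch:

```python
def second_price_auction(bids):
    """Single-item VCG: the highest bidder wins and pays the
    second-highest bid, i.e. the welfare the others lose because she
    wins. Myerson's lemma yields the same payment for this monotone
    allocation rule, making truthful bidding a dominant strategy."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

bids = {"Alice": 10, "Bob": 7, "Carol": 4}
print(second_price_auction(bids))   # ('Alice', 7)
```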

Bio:

Jialin Zhang is currently a professor at the Institute of Computing Technology, Chinese Academy of Sciences. Prior to ICT, she was a postdoctoral researcher at the University of Southern California. She received her PhD in applied mathematics from Tsinghua University. Her research interests include quantum computing, combinatorial optimization, approximation algorithms and algorithmic game theory.