Paul Friedrich, Ermis Soumalias, and Behnoosh Zamanlooy will each give a 20-30 min talk on the topic of their Master's theses.
Talk #1: Paul Friedrich: Deep Learning the Kyle model
Abstract: In this talk, I will discuss a new market design research project on adoption matching, where the goal is to match children in need of adoptive homes to families. We study a data-driven matching platform that helps adoption agencies find such matches, and our research project aims to analyze how this platform can be further improved. I will first give background information on adoption processes in the US and describe how adoption agencies currently use the matching platform. I will then present some general insights obtained through interviews with employees of an adoption agency that uses the matching platform intensively. Finally, I will sketch possible modeling approaches for adoption matching and present research questions that we plan to work on in the future.
Talk #2: Ermis Soumalias: Revenue Maximization in Generalized Deferred-Acceptance Auctions
Abstract: In this thesis, we study the problem of revenue maximization for multi-unit deferred-acceptance auctions. Deferred-acceptance auctions have been studied extensively, but mostly for the objective of social welfare, and most studies have followed a worst-case analysis approach. In this thesis, our aim is to design deferred-acceptance auctions that, given some samples of the players' valuation distributions, achieve expected revenue close to optimal. We focus on two distinct environments: a single-parameter one (multi-unit auctions with bidders with additive valuation functions) and a multi-parameter one (multi-unit auctions with bidders with submodular valuation functions).
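To give a feel for the mechanism family the abstract refers to, here is a minimal sketch of a deferred-acceptance (clock) auction for a simplified setting: k identical units and unit-demand bidders. The auction iteratively rejects the active bidder with the lowest score (here, simply the bid) until the remaining set is feasible; winners pay a threshold price. This toy scoring rule targets welfare, not revenue, and the unit-demand assumption is a simplification of the additive/submodular settings the thesis actually studies.

```python
def deferred_acceptance_auction(bids, k):
    """Toy k-unit deferred-acceptance auction with unit-demand bidders:
    iteratively reject the active bidder with the lowest bid until only
    k bidders remain. Survivors each win one unit and pay the highest
    rejected bid as a threshold price (which makes the auction truthful)."""
    active = list(range(len(bids)))
    last_rejected = 0.0
    while len(active) > k:
        # The "scoring rule": here, just the bid itself. Revenue-oriented
        # designs would instead score bids using the sampled distributions.
        loser = min(active, key=lambda i: bids[i])
        last_rejected = bids[loser]
        active.remove(loser)
    return active, last_rejected

winners, price = deferred_acceptance_auction([5.0, 2.0, 9.0, 4.0], k=2)
# winners == [0, 2] (the two highest bids), price == 4.0
```

The key structural property is that the allocation is determined by a sequence of irrevocable rejections based only on the remaining bids, which is what gives deferred-acceptance auctions their obvious strategy-proofness.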
Talk #3: Behnoosh Zamanlooy: Architopes: An Architecture Modification for Composite Pattern Learning, Increased Expressiveness, and Reduced Training Time
Abstract: We introduce a simple neural network architecture modification that enables composite pattern learning, increases expressiveness, and reduces training time. In particular, we show that most feed-forward neural network architectures are not capable of approximating composite patterns. Approximation bounds are obtained in terms of the number of trainable parameters. Likewise, convergence guarantees are obtained as the imposed restrictions are asymptotically removed. By exploiting the new architecture's structure, a parallelizable training meta-algorithm is then provided, and numerical evaluations are made using the California housing dataset.