Q&A with Christian Kroer, Columbia professor and Facebook Fellowship alum

In this monthly interview series, we turn the spotlight on members of the academic community and the important research they do — as thought partners, collaborators, and independent contributors.

For July, we nominated Christian Kroer, a professor at Columbia University. Kroer’s relationship with Facebook started in 2016, when he won the Facebook Fellowship in economics and computation. That same year, Kroer interned on the Facebook Core Data Science (CDS) team, which eventually led to a position as a postdoctoral researcher. During his time with CDS, Kroer coauthored a number of publications at top research conferences and journals.

In this Q&A, Kroer shares more about his background, his current research, and his experience working with Facebook.

Q: Tell us about your role at Columbia and the type of research you specialize in.

Christian Kroer: I’m an assistant professor in the Department of Industrial Engineering and Operations Research at Columbia. There, I do research that generally draws on ideas from optimization, AI, game theory, and market design. Two of my most active research areas are 1) how to compute game-theoretic equilibria at scale, typically using a mixture of optimization and AI techniques, and 2) how to use optimization and AI to understand and enable large-scale market design. Here, market should be understood in a broad sense and can mean, for example, Facebook’s advertising auctions market, but also things that would typically not be thought of as markets, such as fair large-scale allocation of resources to users or content producers.

Generally, I am drawn to research that allows me to do a mixture of theoretical and applied work. Most of my papers have a fair amount of math — I especially like convex optimization and online learning — but I also typically want to implement the algorithms that I develop with coauthors, in order to see how they perform on realistic data. Sometimes the discrepancy between the performance prediction offered by the theory versus the practical performance can be very interesting! Most of the research that I do at Columbia is in collaboration with the awesome PhD students that we have in our department. I’ve really enjoyed being a professor so far, and that’s in large part due to them.

Beyond research, I also teach courses. My teaching is mostly focused on AI and machine learning, with an eye toward how those topics combine with more classical operations research topics.

Q: What have you been working on lately?

CK: I have a number of active research areas. I’ll try to give a quick summary of the major ones; more context can be found on my website.

With my student Yuan Gao, I have been working on the problem of computing what’s called a market equilibrium. The idea in a market equilibrium is that you have a bunch of goods, as well as a number of buyers who are interested in those goods, and each buyer is endowed with some budget of currency. The goal is to find a set of prices and a corresponding allocation of the goods to the buyers such that every buyer purchases an optimal bundle of goods given the prices. An interesting note is that the budgets and prices need not be in terms of real money; they can also be fake money that we give to each buyer. For example, this idea has been used to fairly allocate course seats to students: Give each student a budget of fake money for buying course seats, ask them how much they would value various proposed schedules, and find a market equilibrium under this fake economy that you created. Our work in this space has been especially motivated by internet-scale applications, such as ad auctions and fair content recommendation, where scalability of the algorithms is of paramount importance.
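To make the idea concrete, here is a minimal sketch of computing a market equilibrium in a toy linear Fisher market via proportional response dynamics, one standard scalable method for this setting. The valuations, budgets, and variable names below are illustrative assumptions, not taken from the research described: each buyer repeatedly re-splits their budget across goods in proportion to the utility each good currently yields, and prices emerge as the total spending on each good.

```python
import numpy as np

# Toy linear Fisher market (illustrative numbers, not from any paper):
# v[i, j] = buyer i's value for good j; B[i] = buyer i's budget.
v = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([1.0, 1.0])

# Proportional response dynamics: each buyer splits spending across
# goods in proportion to the utility each good currently yields.
spend = np.full_like(v, 0.5)  # start by splitting each budget uniformly
for _ in range(1000):
    prices = spend.sum(axis=0)          # price = total spending on each good
    alloc = spend / prices              # buyer i gets their spending share of good j
    util = v * alloc                    # utility buyer i derives from each good
    spend = B[:, None] * util / util.sum(axis=1, keepdims=True)

prices = spend.sum(axis=0)
# For this instance the dynamics settle at prices (1, 1): buyer 1 buys
# good 1, buyer 2 buys good 2, and each spends exactly their budget.
```

At equilibrium, no buyer wants to reallocate spending given the prices, and the prices exactly exhaust all budgets, which is what makes the fake-money version useful for fair allocation.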

With my student Rachitesh Kumar, I have been looking at using tools from theoretical computer science and economics to analyze the behavior of bidders in internet advertising auctions. Together with his coadviser Santiago Balseiro, we have been studying large auction systems in the context of first-price auctions, where we study the existence of game-theoretic equilibria and the structure of optimal buyer behavior. This is a very pertinent topic because several large ad exchanges have recently moved to this auction format. Along with Xi Chen from Columbia University, we have been using tools from computational complexity theory to show that it is most likely hard (in a strong formal sense) to compute equilibrium behavior for buyers in a system where second-price auctions are used. This is interesting in light of the switch from second-price to first-price auctions mentioned above because our work shows that there is a strong theoretical sense in which first-price auctions are easier to deal with than second-price auctions.

I also continue to work on the topic that I spent most of my PhD working on: computing Nash equilibrium and other solution concepts in large-scale sequential games with imperfect information. To give an idea of what I mean here, the canonical example of such a game is poker. Informally speaking, one can show that a Nash equilibrium for a two-player poker game is the optimal way to play, given no assumptions about the opposing player. The catch is that this game is extremely large, and so one needs to accept some approximation error when attempting to compute these equilibria. Together with Gabriele Farina and Tuomas Sandholm, I have been working on some new ideas on how to achieve fast theoretical and practical rates of convergence when using gradient-descent-like algorithms for solving these games.
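As a small illustration of the kind of iterative, self-play dynamics used to approximate Nash equilibria in zero-sum games, here is a sketch of regret matching on a toy 2x2 matrix game. This is a generic textbook building block (it underlies counterfactual-regret-style poker solvers), not the specific algorithms from the work described, and the payoff matrix is an assumption chosen for illustration. The average of the strategies played over time converges to an approximate Nash equilibrium.

```python
import numpy as np

# Toy zero-sum game (illustrative payoffs): A[a, b] is the row player's
# payoff when row plays action a and column plays action b.
A = np.array([[3.0, -1.0],
              [-2.0, 2.0]])

def regret_matching(regrets):
    """Play each action in proportion to its positive cumulative regret."""
    pos = np.maximum(regrets, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(len(regrets), 1.0 / len(regrets))

r_row, r_col = np.zeros(2), np.zeros(2)
sum_row, sum_col = np.zeros(2), np.zeros(2)
T = 50000
for _ in range(T):
    x, y = regret_matching(r_row), regret_matching(r_col)
    sum_row += x
    sum_col += y
    u_row = A @ y                 # row player's payoff for each pure action
    u_col = -(A.T @ x)            # column player's payoff (zero-sum)
    r_row += u_row - x @ u_row    # regret of each action vs. the current mix
    r_col += u_col - y @ u_col

avg_row, avg_col = sum_row / T, sum_col / T
# The *average* strategies approach the Nash equilibrium of this game:
# row = (1/2, 1/2), col = (3/8, 5/8), game value 1/2.
```

The approximation error shrinks at a rate of roughly one over the square root of the number of iterations, which is why, for games as large as poker, the research focus is on methods with faster theoretical and practical convergence.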

Q: When and how did you first become involved with Facebook?

CK: My relationship with Facebook goes back a long time. I first got involved with Facebook back in my PhD days, when I received the Facebook Fellowship in Economics and Computation in early 2016 (or maybe late 2015). That same year, I also spent the summer as an intern on the CDS team, working with Eric Sodomka and Nico Stier. That internship actually had a huge impact on my long-term research interests. A lot of the market-oriented work that I described earlier is inspired by the time that I spent working with them.

Q: How did your relationship with Facebook evolve from there?

CK: After my internship with CDS, I kept working with the group on and off throughout the remainder of my PhD. Then, after finishing my PhD, I spent a year as a postdoctoral fellow in the group as well. Since then, I have continued to collaborate with various Facebook researchers on projects related to large-scale markets. For example, we have been working on how optimization and AI methods can be used to improve recommendations for people interested in blood donation opportunities.

Q: Where can people learn more about your research?

CK: On my research website, I post links to all my papers. I also wrote a set of lecture notes for a PhD course that I taught in my department; those cover several of the general areas that I work in. I also occasionally tweet about research topics @chrkroer.

Source: Facebook AI Research