© 2019 by Andy Brownback.

I'm an assistant professor of economics at the University of Arkansas specializing in behavioral and experimental economics. I use laboratory, field, and online experiments to study questions about education, beliefs, and identity.
I'm originally from Kansas and received my bachelor's degree from Kansas State University. Before joining the faculty at the University of Arkansas, I studied economics at UC San Diego.
 

( 01 )

Grants, Awards & Fellowships

2018-2021

PEDL Major Research Grant
(Co-PI with Sarojini Hirshleifer and Arman Rezaee)

Employer beliefs, employee training, and labor market outcomes: A field experiment in Uganda ($452,000)

2018-2019

J-PAL Post-Primary Education Initiative
(Co-PI with Sarojini Hirshleifer and Arman Rezaee)

Employer beliefs, employee training, and labor market outcomes: A field experiment in Uganda ($16,660)

2017-2018

Robert Wood Johnson Foundation
(Co-PI with Alex Imas and Michael Kuhn)

Examining the impact of waiting periods on improving the use of food subsidies for healthier consumption while maintaining choice ($198,940)

2016-2018

Laura and John Arnold Foundation
(Co-PI with Sally Sadoff)

Improving Community College Outcomes through Performance Incentives ($312,000)

2016

Yankelovich Foundation
(Co-PI with Sally Sadoff)

Improving Community College Outcomes through Performance Incentives ($25,000)

2015-2016

Russell Sage Foundation Small Grant in Behavioral Economics
(Co-PI with Tristan Gagnon-Bartsch and Shengwu Li)

On the Elicitation of Willingness to Pay for Stigmatized Goods ($4,700)

 

( 02 )

Education

2010-2015

University of California, San Diego

PhD in Economics (2015)

Advisor: James Andreoni

2006-2010

Kansas State University

BA in Economics and Mathematics (2010)

Minors in Spanish and Statistics

 

( 03 )

Teaching

2015-Present

University of Arkansas

Industrial Organization I: Graduate course in applied microeconomic theory and industrial organization.

Economics of Organizations: Undergraduate course in managerial economics, game theory, and industrial organization.

2010-2015

University of California, San Diego

Teaching assistant for Game Theory, MBA Strategy, and Intermediate Microeconomics.

 

( 04 )

Research
Improving College Instruction through Incentives (with Sally Sadoff)

Journal of Political Economy (Forthcoming) [Link]

Prior work demonstrates the importance of college instructor quality, but little is known about whether college instruction can be improved. In a field experiment, we examine the impact of performance-based incentives for community college instructors. We estimate that instructor incentives improve student exam scores by 0.16-0.2 standard deviations (SD), increase course grades by 0.1 SD, reduce course dropout by 17 percent, and increase credit accumulation by 18 percent. The effects are largest among part-time adjunct instructors. During the program, instructor incentives have large positive spillovers to students' unincentivized courses, significantly increasing completion rates and grades in courses outside our study. One year after the program ends, instructor incentives increase transfer rates to four-year colleges by an estimated 22-28 percent, with no impact on two-year college degrees. To test for potential complementarities, we examine the impact of instructor incentives in conjunction with student incentives and find no evidence that the incentives are more effective in combination. Finally, we elicit contract preferences for the loss-framed incentives we offer. At baseline, instructors prefer gain-framed incentives. However, after experiencing loss-framed incentives, instructors significantly increase their preferences for them.

[PDF]

Understanding Outcome Bias (with Michael Kuhn)

Games and Economic Behavior (2019) [Link]

Disentangling effort and luck is critical when judging performance. In a principal-agent experiment, we demonstrate that principals' judgments of agent effort are biased by luck, even though they observe the agent's effort perfectly. We find that two potential solutions to this "outcome bias"—the opportunity to avoid irrelevant information about luck, and outsourcing judgment to independent third parties—are ineffective. When we give control over information about luck to principals and agents in separate treatments, we find asymmetric sophistication: agents strategically manipulate principals' outcome bias, but principals fail to recognize their own bias. Independent third parties are just as biased as principals. These findings indicate that the scope of outcome bias may be larger than previously understood and that outcome bias cannot be driven solely by emotional responses or distributional preferences. Instead, we hypothesize that luck directly affects beliefs, and we test this hypothesis by eliciting the beliefs of third parties and principals. Lucky agents are believed to exert more effort than identical, unlucky agents. We propose a model of biased belief updating that explains these results.

Social Desirability Bias and Polling Errors in the 2016 Presidential Election (with Aaron Novotny)

Journal of Behavioral and Experimental Economics (2018) [Link]

Social scientists have observed that socially desirable responding (SDR) often biases unincentivized surveys. Nonetheless, media, campaigns, and markets all employ unincentivized polls to make predictions about electoral outcomes. During the 2016 presidential campaign, we conducted three list experiments to test the effect SDR has on polls of agreement with presidential candidates. We elicit a subject's agreement with either Hillary Clinton or Donald Trump using explicit questioning or an implicit elicitation that allows subjects to conceal their individual responses. We find evidence that explicit polling overstates agreement with Clinton relative to Trump. Subgroup analysis by party identification shows that SDR significantly diminishes explicit statements of agreement with the opposing party's candidate, driven largely by Democrats who are significantly less likely to explicitly state agreement with Trump. We measure economic policy preferences and find no evidence that ideological agreement drives SDR. We find suggestive evidence that local voting patterns predict SDR.

[PDF]

 

Press: [Press Release] [KUAF (NPR)] [KNWA 24] [KARK 4] [The British Psychological Society]
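For readers unfamiliar with the list-experiment design mentioned above: the prevalence of a sensitive response is typically recovered by comparing mean item counts across randomly assigned lists that differ only by the sensitive item, so no individual respondent ever reveals their answer. The sketch below is a minimal, hypothetical illustration of that standard difference-in-means estimator; the data and parameters are made up for illustration and are not from the paper.

```python
import numpy as np

def list_experiment_estimate(control_counts, treatment_counts):
    """Difference-in-means estimator for a list experiment.

    control_counts:   item counts from subjects shown J baseline items only.
    treatment_counts: item counts from subjects shown the same J items plus
                      the sensitive item (e.g., agreement with a candidate).
    The difference in mean counts estimates the share of subjects endorsing
    the sensitive item, without revealing any individual response.
    """
    control_counts = np.asarray(control_counts, dtype=float)
    treatment_counts = np.asarray(treatment_counts, dtype=float)
    estimate = treatment_counts.mean() - control_counts.mean()
    # Standard error of a difference in independent sample means.
    se = np.sqrt(control_counts.var(ddof=1) / len(control_counts)
                 + treatment_counts.var(ddof=1) / len(treatment_counts))
    return estimate, se

# Illustrative data only: 3 baseline items; treatment group also sees the sensitive item.
rng = np.random.default_rng(1)
control = rng.binomial(3, 0.5, size=500)
treatment = rng.binomial(3, 0.5, size=500) + rng.binomial(1, 0.4, size=500)
est, se = list_experiment_estimate(control, treatment)
print(f"estimated prevalence of sensitive response: {est:.2f} (SE {se:.2f})")
```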

A Classroom Experiment on Effort Allocation under Relative Grading

Economics of Education Review (2018) [Link]

Grading on the curve is a form of relative evaluation similar to an all-pay auction or rank-order tournament. The distribution of students drawn into the class from the population is predictably linked to the size of the class. Increasing the class size draws students' percentile ranks closer to their population percentiles. Since grades are awarded based on percentile ranks in the class, this reallocates incentives for effort between students of different abilities. Predicted aggregate effort and predicted effort from high-ability students increase, while predicted effort from low-ability students decreases. Andreoni and Brownback (2017) find that the size of a contest has a causal impact on the aggregate effort from participants and the distribution of effort among heterogeneous agents. In this paper, I randomly assign "class sizes" to quizzes in an economics course to test these predictions in a real-stakes environment. My within-subjects design controls for student, classroom, and time confounds and finds that the lower variance of larger classes elicits greater effort from all but the lowest-ability students, significantly increasing aggregate effort.

[PDF]
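The class-size mechanism in the abstract above is, at bottom, a sampling result: with more classmates drawn from the same population, a student's within-class percentile rank concentrates around her population percentile. Below is a minimal simulation sketch of that point only; the class sizes, percentile, and uniform-draw assumption are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_class_rank(class_size, population_percentile=0.9, draws=10_000):
    """Simulate the within-class percentile rank of a student whose ability
    sits at a fixed population percentile, for classes of a given size.

    Classmates' abilities are drawn independently from the population, so each
    classmate falls below the student with probability population_percentile.
    """
    # Number of classmates who fall below the student's ability level.
    below = rng.binomial(class_size - 1, population_percentile, size=draws)
    # The student's percentile rank within the class.
    class_rank = below / (class_size - 1)
    return class_rank.mean(), class_rank.std()

# Larger classes: same mean rank, but the rank varies less around the
# population percentile (here 0.9).
for n in (10, 40, 160):
    mean, sd = simulate_class_rank(n)
    print(f"class size {n:>3}: mean class rank {mean:.3f}, sd {sd:.3f}")
```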

All-Pay Auctions and Group Size: Grading on the Curve and Other Applications (with James Andreoni)

Journal of Economic Behavior and Organization (2017) [Link]

We model contests with a fixed proportion of prizes, such as a grading curve, as all-pay auctions where higher effort weakly increases the likelihood of a prize. We derive theoretical predictions for the heterogeneous effect of auction size on effort from high- and low-types. We test our predictions in a laboratory experiment that compares behavior in two-bidder, one-prize auctions with behavior in 20-bidder, 10-prize auctions. We find a statistically significant 11.8% increase in aggregate bidding when moving from the small to the large auction. The impact is heterogeneous: as the auction size increases, low-types decrease effort but high-types increase effort. Additionally, the larger auction provides a stronger rank correlation between effort and ability, awarding more prizes to the higher-skilled and improving the efficiency of prize allocation.

[NBER link]

Works In Progress

Behavioral Food Subsidies (with Alex Imas & Michael Kuhn)

Under review

We examine the potential of healthy food subsidies for reducing nutritional inequality through demand-side interventions. Using a pre-registered field experiment with low-income grocery shoppers, we show that low-cost, scalable behavioral interventions make subsidies substantially more effective. Our unique design allows us to elicit choices and deliver subsidies both before and during a shopping trip. We examine two novel interventions: giving shoppers greater agency through a choice between subsidies and introducing waiting periods designed to prompt deliberation about food purchases. The interventions increase healthy purchases by 61% relative to choiceless healthy subsidies, and 199% relative to a control group.

[SSRN Link]

The Impact of Summer School on Community College Student Success (with Sally Sadoff)

In Progress

We conduct a randomized field experiment to uncover the causal effect of summer enrollment on credit accumulation, student retention, transfer rates, and graduation among community college students. We randomly assign summer enrollment vouchers to students during the spring semester. These vouchers increase summer credit enrollment by 0.59 credits and increase enrollment rates by 20 percentage points. One year after the program, summer enrollment significantly increases rates of associate degree attainment and rates of transfer to four-year colleges. These effects diminish and become statistically insignificant two years after the program. For every additional credit attempted, students accumulate 0.82 completed credits. Summer enrollment has a positive but not significant impact on credits accumulated after the summer semester. Incentivized measures of preferences for summer enrollment are strong predictors of increased summer enrollment and of rates of graduation or transfer. However, we find that summer school has no larger effect on students with strong preferences for summer enrollment. If anything, we find that preferences cause students to negatively select into summer school based on its predicted impact.

Increasing Access to Training, Capital, and Networks: Two Planned Field Experiments with Small Firms in Uganda (with Sarojini Hirshleifer, Arman Rezaee, & Benjamin Kachero)

In Progress

On the Elicitation of Willingness to Pay for Stigmatized Goods
(with Tristan Gagnon-Bartsch & Shengwu Li)

In Progress
