\input{../Papers/header.tex}
\begin{document}
Consider a sample of $N$ individuals, with $J$ judges. Each judge is assigned $n_{j}$ individuals, with $\sum_{j}n_{j} = N$.
Define the first-stage outcome (dismissal) as $X_{i}$ for person
$i$. Let $\text{Judge}_{i}$ denote the judge assigned to person $i$.
Then, let $1(\text{Judge}_{i} = j)$ be an indicator for person $i$'s
judge being judge $j$. Hence, mechanically,
$\sum_{j=1}^{J}1(\text{Judge}_{i} = j) = 1$.
Suppose that the judges are randomly assigned. We want to use this in a first-stage regression of the following form:
\begin{equation}
X_{i} = \pi_{0} + \sum_{j=1}^{J-1}\pi_{j}1(\text{Judge}_{i} = j)
\end{equation}
Note that mechanically, $\pi_{0}$ will be exactly equal to
$n_{J}^{-1}\sum_{i}X_{i}1(\text{Judge}_{i} = J)$, the
(non-leave-one-out) mean leniency of the omitted judge $J$. Similarly, $\pi_{j}$ is just
$n_{j}^{-1}\sum_{i}X_{i}1(\text{Judge}_{i} = j) -
n_{J}^{-1}\sum_{i}X_{i}1(\text{Judge}_{i} = J)$, the leniency of judge $j$ relative to judge $J$.
This gives us $J-1$ instruments!
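To see why (a quick sketch using only the definitions above): because the indicators are mutually exclusive, the least-squares problem
\[
\min_{\pi_{0},\pi_{1},\dots,\pi_{J-1}} \sum_{i}\Big(X_{i} - \pi_{0} - \sum_{j=1}^{J-1}\pi_{j}1(\text{Judge}_{i} = j)\Big)^{2}
\]
has first-order conditions that separate by judge: for each $j < J$, summing over judge $j$'s cases gives $\pi_{0} + \pi_{j} = n_{j}^{-1}\sum_{i}X_{i}1(\text{Judge}_{i} = j)$, and the intercept condition then leaves $\pi_{0} = n_{J}^{-1}\sum_{i}X_{i}1(\text{Judge}_{i} = J)$ for the omitted judge $J$.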
Our predicted value for $X_{i}$ is just $\bar{X}(j)$, the simple average of $X$ among the individuals assigned to person $i$'s judge $j$.
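Written out with the fitted coefficients from above, for a person $i$ assigned to judge $j$ (writing $\hat{X}_{i}$ for the predicted value and $k$ for the summation index to avoid reusing $j$):
\[
\hat{X}_{i} = \pi_{0} + \sum_{k=1}^{J-1}\pi_{k}1(\text{Judge}_{i} = k) = \bar{X}(j).
\]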
Now consider that we are overidentified. If we implemented jackknife IV (Imbens, Angrist, and Krueger), our predicted value for $X_{i}$ would still use the same approach, but would now ``leave out'' our own observation.
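For example (a sketch of the leave-one-out fitted value, with $\hat{X}_{-i}$ as notation introduced here, for person $i$ assigned to judge $j$ with $n_{j} \geq 2$ cases):
\[
\hat{X}_{-i} = \frac{1}{n_{j} - 1}\sum_{i' \neq i} X_{i'}1(\text{Judge}_{i'} = j),
\]
i.e., judge $j$'s average dismissal rate across all of their other cases.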
Finally, in Dobbie, Goldsmith-Pinkham, and Yang and in other leniency approaches, they difference out their ``own location,'' because the first stage controls for fixed effects. E.g., consider $\alpha_{l}$, a location fixed effect that is necessary for identification. Consider the following first stage:
\begin{equation}
X_{i} = \pi_{0} + \sum_{j=1}^{J-1}\pi_{j}1(\text{Judge}_{i} = j) + \alpha_{l}
\end{equation}
Now, consider the residual regression: projecting out $\alpha_{l}$
will subtract the own-location averages (across all judges within that location).
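Concretely (a sketch via Frisch-Waugh-Lovell; the notation $l(i)$, $n_{l}$, and $n_{jl}$ for person $i$'s location, the number of cases in location $l$, and judge $j$'s caseload in location $l$ is introduced here), partialling out the location fixed effects replaces each variable with its deviation from its location mean:
\[
X_{i} - \frac{1}{n_{l(i)}}\sum_{i'} X_{i'}1(l(i') = l(i)), \qquad 1(\text{Judge}_{i} = j) - \frac{n_{j\,l(i)}}{n_{l(i)}},
\]
so the identifying variation is each judge's leniency relative to the average leniency in their own location.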
\end{document}