Introduction

For this report, we looked at the Greene and Shaffer data on Canadian refugee appeals from the textbook website. This project served as an introduction to the R programming language for our team. The purpose of the activity was to carry out an elementary analysis of the data and answer a couple of interesting questions, possibly revealing trends that warrant further analysis.

We found that the data suggest possible bias in the judges’ decisions, compared with the rater’s decisions, based on the nationality of the applicant.

Data Biography

The data record the outcomes of Canadian refugee appeals decided by the Canadian Federal Court of Appeal, along with the decision of an independent rater on each of the same cases. The cases were selected in 1991 by Ian Greene, Associate Professor of Political Science at York University, and Paul Shaffer, doctoral candidate in Political Science at the University of Toronto, by systematic random sampling from approximately 2,000 applications filed in 1990.

Some potentially interesting information we could not locate includes the ancestral nationality of the judges, which could be linked to possible bias, and other background information on the judges.

Data Directory

The dataset used in this report contains 384 observations of 7 variables.

Variables:

judge: Name of judge hearing case. Gives values: Desjardins, Heald, Hugessen, Iacobucci, MacGuigan, Mahoney, Marceau, Pratte, Stone, Urie.

nation: Nation of origin of claimant. Gives values: Argentina, Bulgaria, China, Czechoslovakia, El.Salvador, Fiji, Ghana, Guatemala, India, Iran, Lebanon, Nicaragua, Nigeria, Pakistan, Poland, Somalia, Sri.Lanka.

rater: Judgment of independent rater. Gives ‘no’ when case has no merit. Gives ‘yes’ when case has some merit (leave to appeal should be granted).

decision: Judge’s decision. Gives ‘no’ when leave to appeal not granted. Gives ‘yes’ when leave to appeal granted.

language: Language of case. Gives values: English, French.

location: Location of original refugee claim. Gives values: Montreal, other, Toronto.

success: Logit of success rate, for all cases from the applicant’s nation. Gives a numerical value on (-inf, inf).
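
For reference, the logit used for the success variable is the log-odds of the nation-level approval proportion. A minimal illustration in R (the proportion p below is an arbitrary example value, not taken from the data):

```r
# Logit (log-odds) of a proportion p, as used for the `success` variable.
p <- 0.25               # example proportion
log(p / (1 - p))        # equivalently: qlogis(p)
```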

Interesting Questions

Analyze the difference between the rate at which judges grant leave to appeal and the rate at which the rater makes the same decision, with respect to nation. Do the judges appear to be biased based on nation?

Look at the counts of judge and rater approvals for leave to appeal a refugee claim denial. How do the differences by nation in the previous question compare with the differences by judge?

Data Analysis

We created a percent_bias variable giving the percentage increase (or decrease, if negative) in the judges’ rate of approving leave to appeal compared with the rater’s rate of making the same decision, organized by nation of applicant.
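
A minimal sketch of how percent_bias could be computed, assuming the data are loaded in a data frame named refugees with the variables listed in the Data Directory (dplyr is used here for convenience; the original computation may differ):

```r
library(dplyr)

# Approval rates (in percent) for the rater and the judges, by nation,
# and the relative difference between them.
bias_by_nation <- refugees %>%
  group_by(nation) %>%
  summarise(
    rater_success = 100 * mean(rater == "yes"),
    judge_success = 100 * mean(decision == "yes")
  ) %>%
  mutate(percent_bias = 100 * (judge_success - rater_success) / rater_success)
```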

nation rater_success (%) judge_success (%) percent_bias (%)
Argentina 60.0 31 -48.3
Bulgaria 13.9 11 -20.9
China 22.1 27 22.2
Czechoslovakia 41.7 60 43.9
El.Salvador 53.8 26 -51.7
Fiji 0.0 34 Inf
Ghana 33.3 23 -30.9
Guatemala 20.0 13 -35.0
India 66.7 37 -44.5
Iran 31.2 34 9.0
Lebanon 28.2 25 -11.3
Nicaragua 66.7 17 -74.5
Nigeria 42.9 23 -46.4
Pakistan 75.0 38 -49.3
Poland 36.4 14 -61.5
Somalia 37.9 27 -28.8
Sri.Lanka 42.9 32 -25.4
Table 1: Approval rates (in percent) for the rater and the judges, and the percentage increase (or decrease, if negative) in the judges’ rate of approval for leave to appeal compared with the rater’s, by nation.

The numerical comparison of judge and rater approval percentages in Table 1 shows a much lower rate of approval by the judges: for only 3 of the 17 nations do the judges show possible positive bias in the approval rate for leave to appeal. Most of the apparent judge bias is negative, with percentage decreases in approval rate reaching over 74%.

A visual representation of the judges’ approval rates compared with those of the rater follows, allowing the reader to better visualize the differences.

Figure 2: This barplot compares judge approval percentage rates with those of the rater, organized by nation.
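
One way such a comparison barplot could be drawn, reusing bias_by_nation from the sketch above (ggplot2 and tidyr assumed; the original figure may have been produced differently):

```r
library(ggplot2)
library(tidyr)

# Side-by-side bars of rater vs. judge approval rates for each nation.
bias_by_nation %>%
  pivot_longer(c(rater_success, judge_success),
               names_to = "who", values_to = "approval_pct") %>%
  ggplot(aes(x = nation, y = approval_pct, fill = who)) +
  geom_col(position = "dodge") +
  labs(x = "Nation", y = "Approval rate (%)", fill = "") +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))
```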

The barplot clearly shows that for most countries, the rater approves leave to appeal much more frequently than the judges do, suggesting that many applicants who are denied may have significant merit in their cases and may deserve an approval.

We now compare approval counts with denial counts for each judge.

decision Desjardins Heald Hugessen Iacobucci MacGuigan Mahoney Marceau Pratte Stone Urie
no 22 25 50 26 53 17 10 36 25 6
yes 24 11 12 3 17 13 15 6 8 5
Table 3: Counts of approvals and denials of leave to appeal, by judge.
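
The counts in Table 3 correspond to a simple cross-tabulation of decision by judge (same assumed data frame refugees as above):

```r
# Approvals ("yes") and denials ("no") of leave to appeal, by judge.
table(refugees$decision, refugees$judge)
```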

This table shows that the judges lean towards denial most of the time: only 2 of the 10 judges approved leave to appeal more often than they denied it, which is in line with the earlier analysis of the approval-rate data. There appears to be a possible negative bias towards approval relative to the rater’s decisions, meaning the judges may deny leave to appeal even when the case has merit. A graphical representation of this table follows for easy visualization.

Figure 4: This barplot compares judge approval counts with judge denials for leaves to appeal, organized by judge.
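
A base-R sketch of how such a barplot could be produced from the cross-tabulation above (our reconstruction, not necessarily the code used for the original figure):

```r
counts <- table(refugees$decision, refugees$judge)
barplot(counts, beside = TRUE,
        legend.text = rownames(counts),
        xlab = "Judge", ylab = "Number of cases",
        las = 2)   # rotate judge names for readability
```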

This barplot is a good representation of the judges’ apparent tilt towards denial when hearing cases, signalling a possible bias or other issues with the hearings.

Conclusions

In conclusion, the analysis suggests a possible strong bias in the judges’ decisions compared with those of the impartial rater. The rater’s decisions indicate that a large proportion of the cases denied permission to appeal have merit.

Some comparisons of the data were not ideal. A better comparison would have been between the success rate for all cases by nation and the rater outcomes for all cases by nation, rather than what we computed with the data given to us. Regardless, the rates sometimes differ greatly, with the percentage decrease in the judges’ approval rate relative to the rater’s climbing as high as 74.5%, pointing to possible bias by the judges.

Some further questions to study include an investigation of the reasons for the judges’ bias, if further analysis determines it to exist. We are unable to determine whether any bias is social or due to incompetence in the role.