Friday, December 10, 2010

On homelessness and the ethics of randomized controlled trials

The NYT has a very interesting article up on a randomized controlled trial of a homelessness-prevention program in New York.

A social agency in New York runs a program that helps poor people with housing. The objective of the program is to prevent people from becoming homeless, and it competes and works with several other programs toward that end. However, the agency faces a problem: it has no idea whether the program works. So it has decided to randomly assign some applicants to the program while denying entry to others.

You read that correctly: some people are chosen by lot to receive assistance, while others are denied it by the same draw. Dear reader, your initial reaction is probably that this is terribly cruel. That is certainly the reaction of many in NYC who now oppose the trial. I'd like to argue that it is in fact the most ethical course of action, in terms of both procedure (means) and outcomes (ends).

Two arguments recommend my position. First, while we should be concerned that social programs are just in their procedure -- think about the difference between feeding the poor respectfully and throwing them food from the back of a truck -- we should be most concerned that they are effective in their outcomes. Randomized controlled trials tell us, better than any other method can, whether a program has a positive or negative causal effect. This is no small matter. We waste billions of dollars every year on social programs that are meant to make people's lives better, with virtually no systematic evidence that these programs work. If you believe that policy makers have an obligation to help the citizens who rely on them, then good intentions are not enough. We need to do our best to figure out which programs actually work and which do not. RCTs are the best way to do this.

Second, this program suffers from scarcity: not everyone who applies can be accepted, despite their suitability for the program. So we need some mechanism to decide, in a fair manner, who is admitted and who is not. In the absence of randomization -- in which everyone has an equal chance -- some other criterion must be used. A policy maker could select on need, suitability, or some other metric, but selecting on these eliminates the ability to figure out whether the program is effective: you cannot be sure whether any differences in outcomes between those in the program and those outside it are a function of the program or of the characteristics on which entrance was granted. Alternately, a policy maker could grant access to someone they simply like more than others. This assuredly happens more often than we'd like to admit. There are surely other criteria, but the point remains that these other methods either limit our ability to measure outcomes, or they are arbitrary. In the worst case, they are both. Contrast this with random assignment, in which everyone has an equal chance of service. That is far more ethically defensible than some arbitrary selection mechanism.
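To make the mechanism concrete, here is a minimal sketch of a lottery under scarcity. The function name, the number of slots, and the `household_*` labels are all hypothetical; the point is only that every applicant faces the same odds, so later differences between the two groups can be credited to the program itself.

```python
import random

def lottery_assignment(applicants, n_slots, seed=0):
    """Randomly select which applicants receive the scarce program slots.

    Every applicant has an equal chance of a slot, so any later
    difference in outcomes between the treated and control groups
    can be attributed to the program rather than to how entrants
    were chosen.
    """
    rng = random.Random(seed)  # fixed seed only so the draw is reproducible
    winners = set(rng.sample(applicants, n_slots))
    treated = [a for a in applicants if a in winners]
    control = [a for a in applicants if a not in winners]
    return treated, control

# Hypothetical example: 100 applicant households, 40 available slots.
applicants = [f"household_{i}" for i in range(100)]
treated, control = lottery_assignment(applicants, n_slots=40)
print(len(treated), len(control))  # 40 60
```

Note that selecting on need or a caseworker's judgment, instead of `rng.sample`, is exactly what destroys the comparison: the two groups would then differ in ways other than program receipt.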

Experiments may seem unfair at first glance. After all, we don't like to think of humans as subjects in an experiment. But better to be subjects in a trial than participants in a series of arbitrarily administered and unmonitored programs bound for continued failure. That is crueler than a coin flip.


Chris said...

Best normative defence of randomness that I've seen in the last little while is Andrew Rehfeld's book, The Concept of Constituency: Political Representation, Democratic Legitimacy and Institutional Design (Cambridge: Cambridge University Press, 2005).

Anonymous said...

You might want to look at the Canadian homelessness trial (At Home/Chez Soi). The central question in that study (unlike the New York study) is whether people who are housed through the study-implemented system have better mental health and quality of life than those who find their own way as usual. There are intra-group comparisons between different provision models, but these duplicate services in some cities and vary between study sites. Leaving aside the normative justifications, a question on which we disagree, there are serious questions about the study design, as there are in the New York study. The sample sizes are small relative to the analysis metrics being used; i.e., the studies appear to be underpowered. Underpowered studies are notorious for producing paradoxical or uninterpretable results. You mention that the New York study is conducted in a state of scarcity. The vast majority of homelessness researchers argue that it is precisely housing availability that predicts homelessness, and yet it does not appear that the RCTs consider structural factors at all. (For example, in Vancouver, according to a report a few days ago, new housing created by the City and its partners has reduced homelessness from about 1,400 to under 200.)
So for testing drugs, RCTs are important (though even here, after the Vioxx and Crestor scandals, the trend is toward alternative designs like cross-over trials). But when we are studying social phenomena? An RCT risks committing a Type III error: giving a precise answer to the wrong question.
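The underpowering worry above can be made concrete with a back-of-the-envelope power calculation for comparing two proportions (say, the share of each arm that ends up homeless). The effect sizes and sample size below are invented for illustration, and the formula is the standard normal approximation, not anything from either study's protocol.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n_per_arm, z_alpha=1.959964):
    """Approximate power of a two-sided z-test (alpha = 0.05 by default)
    comparing two independent proportions, using the normal approximation."""
    se = math.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z = abs(p1 - p2) / se
    # Probability the observed difference clears the significance threshold.
    return normal_cdf(z - z_alpha) + normal_cdf(-z - z_alpha)

# Hypothetical: 10% vs 5% homelessness rates with 200 households per arm.
print(round(power_two_proportions(0.10, 0.05, 200), 2))  # → 0.48
```

With only about a 48% chance of detecting even a halving of the homelessness rate, such a trial would indeed be underpowered by the conventional 80% standard; roughly quadrupling the arms fixes this for that effect size.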
