Wednesday, December 26, 2007

What is experimental philosophy?

Experimental philosophy:

Suppose the chairman of a company has to decide whether to adopt a new program. It would increase profits and help the environment too. “I don’t care at all about helping the environment,” the chairman says. “I just want to make as much profit as I can. Let’s start the new program.” Would you say that the chairman intended to help the environment?

O.K., same circumstance. Except this time the program would harm the environment. The chairman, who still couldn’t care less about the environment, authorizes the program in order to get those profits. As expected, the bottom line goes up, the environment goes down. Would you say the chairman harmed the environment intentionally?

In one survey, only 23 percent of people said that the chairman in the first situation had intentionally helped the environment. When they had to think about the second situation, though, fully 82 percent thought that the chairman had intentionally harmed the environment. There's plenty to be said about these interestingly asymmetrical results.
It’s part of a recent movement known as “experimental philosophy.”

This is pretty interesting. I'm wondering why it's "philosophy" though. Isn't this just experimental psychology, applied to topics of intention and theory of mind? And if you want to do it, wouldn't a psych program be better training for learning how to read fMRI papers and experimental design? But maybe a philosophy degree makes you smarter. (That's how I understood Richard Rorty's great review of Marc Hauser's book on moral psychology.)

Here's a good overview of a variety of work in the field; here are some more thoughts on what "x-phi" is. I suspect it's a thing special to analytic philosophy, which embroiled itself in all sorts of topics that rely heavily on appeals to intuition, but where empiricism might work a bit better. (E.g. any actual improvements in cognitive science should make philosophy of mind less important.)

Wednesday, December 19, 2007

Data-driven charity

Some ex-hedge fund analysts recently started a non-profit devoted to evaluating the effectiveness of hundreds of charities, and apparently have been making waves (NYT). A few interesting reports have been posted on their website -- they recommend charities where donors' money is used most efficiently for saving lives or helping the disadvantaged.

(Does anyone else have interesting data on charity effectiveness? I've heard that evaluations are the big thing in philanthropy world now, and certainly the Gates Foundation talks a lot about it.)

Obviously this sort of evaluation is tricky, but it has to be the right approach. The NYT article makes them sound a bit arrogant, which is too bad; on the other hand, anyone who claims to have better empirical information than the established wisdom will always end up in that dynamic. (OK, so I love young smart people who come up with better results than a conservative, close-minded establishment. Or at least I'm a sucker for that story.)

This particular methodological criticism (from the article) struck me as odd:

“I think in general it’s a good thing,” said Thomas Tighe, president and chief executive of Direct Relief International, an agency that GiveWell evaluated but did not recommend. Like others in the field, however, Mr. Tighe has reservations about GiveWell’s method, saying it tends to be less a true measure of a charity’s effectiveness than simply a gauge of the charity’s ability to provide data on that effectiveness.

I think it's fine to penalize an organization for failing to provide data on its effectiveness. Isn't the burden of proof on them, to show that they're actually doing something useful? I guess it comes down to whether you believe empirical evaluation is necessary for organizational effectiveness. I believe this wholeheartedly.

The GiveWell people have an interesting argument that altruistic actions have a particularly poor feedback loop, which kills learning/optimization; therefore, you need to undertake explicit evaluative efforts. From their blog:

Now imagine an activity that consists of investing without looking at your results. In other words, you buy a stock, but you never check whether the stock makes money or loses money. You never read the news about whether the company does well or does poorly. How much would you value someone with this sort of experience - buying and selling stocks without ever checking up on how they do? Because that’s what “experience in philanthropy” (or workforce development, or education) comes down to, if unaccompanied by outcomes evaluation.

The peculiar thing about philanthropy is that because you’re trying to help someone else - not yourself - you need the big expensive study, or else you literally have no way of knowing whether what you did worked. And so, no way of learning from experience.

I really like this point -- which is easier to notice, that you're bankrupt or that someone else is? That your own business is doing well/badly, or that your beneficiaries are doing well/badly? Self-regarding actions get automatic evaluation but altruistic actions don't, presumably because, even if we care enough to give to others, we do not care enough to expend energy evaluating their outcomes down the line. But we really care about our own personal outcomes. Yet another example of human preferences being more selfish than altruistic; well, what's new?

Sunday, December 09, 2007

Race and IQ debate - links

William Saletan, a writer for Slate, recently wrote a loud series of articles on genetic racial differences in IQ in the wake of James Watson's controversial remarks. It prompted lots of discussion; here is an excellent response from Richard Nisbett, a leading authority in the field on the environmentalist side of the debate.

More academic articles: Rushton and Jensen's 2005 review of evidence for genetic differences; and what I've found to be the most balanced so far, the 1995 APA report Intelligence: Knowns and Unknowns, which concludes that, for all the heated claims out there, the scientific evidence tends to be pretty weak.

Blog world: Funny title from Brad DeLong; and another Slate response to Saletan and Rushton/Jensen.

The politics of the race and intelligence question is a huge distraction from trying to find out the actual truth of the matter. But I suppose the political implications are why it attracts so much attention -- for good or bad.

The most interesting thing I learned is the Flynn Effect: IQs as measured by standardized tests have been consistently rising, in populations throughout the world, ever since IQ tests were invented. This implies some sort of non-genetic determinants -- perhaps education and environmental factors -- can have a very large effect on intelligence. Here is a good overview from Ulric Neisser, the lead author on the APA report.