Good reads of the week #5

We’re not the good guys: Osaka shows up problems of press conferences

[…] the world No 2 Naomi Osaka announced that she would be boycotting press conferences at the French Open in order to preserve her mental health.

On Monday night, after being fined and threatened with expulsion, Osaka quit the tournament altogether.

And so the modern press conference is no longer a meaningful exchange but really a lowest‑common‑denominator transaction: a cynical and often predatory game in which the object is to mine as much content from the subject as possible. Gossip: good. Anger: good. Feuds: good. Tears: good. Personal tragedy: good. Meanwhile the young athlete, often still caught up in the emotions of victory or defeat, is expected to answer the most intimate questions in the least intimate setting, in front of an array of strangers and backed by a piece of sponsored cardboard.

There’s an odd ritualistic quality to all this: the same characters sitting in the same seats, the same cliches, all these millions of wasted words, the unopened bottles of mineral water. Is there not a better way of doing this?

This dynamic is only exacerbated in women’s tennis, a highly visible enterprise that takes place not just in a largely white male space, but a white‑male‑with‑free‑food space. That sense of voracious, engorged entitlement often manifests itself in exceptionally creepy ways. Question: “I noticed you tweeted a picture. Are you prepared that if you go on a long run you may be held up as a sex symbol, given you’re very good looking?” (Genie Bouchard, Wimbledon 2013.) Question: “You’re a pin-up now, especially in England. Is that good? Do you enjoy that?” (A 17-year-old Maria Sharapova, Wimbledon 2004.) And of course there are plenty of decent, curious journalists out there doing decent, curious things. In a way, this is what makes the chronic lack of self‑awareness so utterly self-defeating. Read the room. We are not the good guys here. We are no longer the power. And one of the world’s best athletes would literally rather quit a grand slam tournament than have to talk to the press. Rather than scrutinising what that says about her, it might be worth asking what that says about us.

If Power to the Person was about technology empowering individuals, and The Great Online Game was about how the internet blurs the line between work and play, this essay is about how we play the game as teams of individuals or small groups.

The 2021 AI Index provides insight into jobs, publications, diversity, and more

Charts include: number of AI journal publications (2000–20); number of newly funded AI companies worldwide (2015–20); and global corporate investment in AI by investment activity (2015–20).

Can Exposure to Celebrities Reduce Prejudice? The Effect of Mohamed Salah on Islamophobic Behaviors and Attitudes


Can exposure to celebrities from stigmatized groups reduce prejudice? To address this question, we study the case of Mohamed Salah, a visibly Muslim, elite soccer player. Using data on hate crime reports throughout England and 15 million tweets from British soccer fans, we find that after Salah joined Liverpool F.C., hate crimes in the Liverpool area dropped by 16% compared with a synthetic control, and Liverpool F.C. fans halved their rates of posting anti-Muslim tweets relative to fans of other top-flight clubs. An original survey experiment suggests that the salience of Salah’s Muslim identity enabled positive feelings toward Salah to generalize to Muslims more broadly. Our findings provide support for the parasocial contact hypothesis—indicating that positive exposure to out-group celebrities can spark real-world behavioral changes in prejudice.

Evidence of brain damage after high-altitude climbing by means of magnetic resonance imaging

Results: Only 1 in 13 of the Everest climbers had a normal MRI […]

Conclusions: We conclude that there is enough evidence of brain damage after high altitude climbing; the amateur climbers seem to be at higher risk of suffering brain damage than professional climbers.

This week in Tweets #3

It’s staggering how successful the Airpod empire is. I have the Airpod Pros and am very happy with them. If they were to break today I would replace them tomorrow. Not a lot of products have that staying power in my home.

The Ezra Klein Show – Is A.I. the Problem? Or Are We?

Worth a listen!

And the famous quote is, “If we build a machine to achieve our purposes with which we cannot interfere once we’ve started it, then we had better be quite sure that the purpose we put into the machine is the thing we really desire.” And this has continued into the early 21st century, with thought experiments like the Paperclip Maximizer, which turns the universe into paperclips, killing everyone in the process.

But to your point, I don’t think we need these thought experiments anymore. We’re now living with these alignment problems every day. So, one example is there’s a facial recognition data set called Labeled Faces in the Wild. And it was collected by scraping newspaper articles off the web and using the images that came with the articles. Later, this data set was analyzed. And it was found that the most prevalent individuals in the data set were simply the people who had appeared most often in newspaper articles in the late 2000s.

And so, you get issues like there are twice as many pictures of George W. Bush as of all Black women combined. And so, if you train a model on that data set, you think you’re building facial recognition, but you’re actually building George W. Bush recognition. And so, this is going to have totally unpredictable behavior.
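The kind of skew Christian describes is easy to surface with a frequency count over a dataset’s identity labels. A minimal sketch with toy labels — the counts below are invented for illustration, not the real Labeled Faces in the Wild distribution:

```python
from collections import Counter

def top_identities(labels, k=3):
    """Count how often each identity appears among a dataset's labels."""
    counts = Counter(labels)
    return counts.most_common(k)

# Toy labels illustrating the skew described above (hypothetical data).
labels = (["George_W_Bush"] * 6
          + ["Colin_Powell"] * 3
          + ["Serena_Williams"] * 1)

print(top_identities(labels))
# The most frequent identity dominates -- a model trained on this would
# mostly learn to recognize that one person.
```

Auditing this kind of imbalance before training is the cheap defense against building a “George W. Bush recognizer” by accident.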

There is a computer science research group that has the, I think, somewhat tongue-in-cheek title of People for the Ethical Treatment of Reinforcement Learning Agents. But there are people who absolutely sincerely think that we should start thinking now about the ethical implications of making a program play Super Mario Brothers for four months straight, 24 hours a day.

Ezra Klein
You talked about one that did Super Mario Brothers, and it’s just caught in this game that has no more novelty. And it’s a novelty-seeking robot. And I thought it was so sad.

Brian Christian
Yeah, it just learns to sit there. Because it’s like, well, why would I jump across this little pipe because it’s just the same old shit on the other side. Like, well, I might as well just do nothing. I might as well just kill myself. And there have been reinforcement learning agents that, because of the nature of the environment, essentially learn to commit suicide as quickly as possible. Because there’s a time penalty being assessed for every second that passes that you don’t achieve some goal. And they can’t achieve it, so they’re like, well, the next best thing is to just like die right now.
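The incentive failure Christian describes reduces to a few lines of arithmetic: if every surviving timestep costs reward and the goal is unreachable, ending the episode immediately maximizes return. A minimal sketch with made-up reward numbers, not any specific published environment:

```python
# Hypothetical reward structure: a per-step time penalty, a goal the agent
# cannot reach, and an episode that simply ends on "death".
STEP_PENALTY = -1.0   # assessed every timestep the goal isn't reached
DEATH_REWARD = 0.0    # dying just ends the episode
HORIZON = 100         # episode length if the agent survives to the end

def episode_return(steps_survived):
    """Total return if the episode ends after `steps_survived` steps."""
    return STEP_PENALTY * steps_survived + DEATH_REWARD

# Surviving the whole episode without ever reaching the goal:
print(episode_return(HORIZON))  # -100.0
# Ending the episode on the very first step:
print(episode_return(1))        # -1.0, the higher return
```

Since the return for dying immediately strictly dominates, a reward-maximizing agent in this setup learns exactly the behavior described: end the episode as fast as possible.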

And again, it’s like we’re somewhere on this slippery slope. I mean, there is this funny thing for me, where the more I study AI, the more concerned I become with animal rights. And I’m not saying that AlphaGo is equivalent to a factory farm chicken or something like that, necessarily. But going back to some of the things we’ve talked about, the dopamine system, some of these drives that are — the fact that we are building artificial neural networks that at least to some degree of approximation are modeled explicitly on the brain. We’re using TD learning, which is modeled explicitly on the dopamine system. We are building these things in our own image.
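The TD learning Christian mentions is temporal-difference learning, whose core update nudges a value estimate toward the observed reward plus the discounted value of the next state. A sketch of the TD(0) rule with illustrative numbers (not any particular system):

```python
def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
    """One TD(0) update: move V[state] toward reward + gamma * V[next_state].

    The quantity `td_error` is the reward prediction error -- the signal
    that dopamine neurons are thought to carry, which is the analogy the
    conversation draws on.
    """
    td_error = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * td_error
    return td_error

V = {"s0": 0.0, "s1": 1.0}
err = td0_update(V, "s0", reward=0.5, next_state="s1")
# err = 0.5 + 0.99 * 1.0 - 0.0 = 1.49; V["s0"] moves a step toward that target.
```

That single line of error arithmetic is the sense in which these systems are “modeled explicitly on the dopamine system”: learning is driven by the gap between predicted and received reward.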

And so, the odds of them having some kind of subjective experience, I think, are higher than if we were just writing generic software. This is the huge question of philosophy of mind: if we manage to create something like this, will it have subjectivity or not? I’m not sure. But these questions, I think, are going to go from seemingly crazy now to maybe on a par with something like animal welfare by the end of the century. I think that’s not a crazy prediction to make.