And we mean really every tree!

When Timm, Laura, Elber and I first ran the @everytreenyc Twitter bot almost a year ago, we knew that it wasn’t actually sampling from a list that included every street tree in New York City. The Parks Department’s 2015 Tree Census was a huge undertaking, and was not complete by the time they organized the Trees Count! Data Jam last June. There were large chunks of the city missing, particularly in Southern and Eastern Queens.

The bot software itself was not bad for a day’s work, but it was still a hasty patch on top of Neil Freeman’s original Everylotbot code. I hadn’t updated the readme file to reflect the changes we had made. It was running on a server in the NYU Computer Science Department, which is currently my most precarious affiliation.

On April 28 I received an email from the Parks Department saying that the census was complete, and the final version had been uploaded to the NYC Open Data Portal. It seemed like a good opportunity to upgrade.

Over the past two weeks I’ve downloaded the final tree database, installed everything on PythonAnywhere, streamlined the code, added a function to deal with PythonAnywhere’s limited scheduler, and updated the readme file. People who follow the bot might have noticed a few extra tweets over the past couple of days as I did final testing, but I’ve removed the cron job at NYU, and @everytreenyc is now up and running in its new home, with the full database, a week ahead of its first birthday. Enjoy the dérive!
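PythonAnywhere’s scheduler can only start a task at fixed times, such as once an hour, rather than running a long-lived process, so the bot needs some way to keep a looser posting rhythm. Here is a minimal sketch of that general idea; it is not the bot’s actual code, and the tweet rate and the post_tweet hook are placeholders:

```python
# Sketch only: PythonAnywhere's scheduler fires tasks at fixed times
# (e.g. once an hour), so the scheduled task itself decides whether and
# when to tweet, spreading the posts out over the day.
import random
import time

TWEETS_PER_DAY = 8  # placeholder rate, not the bot's real setting

def maybe_tweet(post_tweet):
    """Call from an hourly scheduled task.

    Tweets with probability TWEETS_PER_DAY / 24, after a random delay so
    posts don't all land exactly on the hour.
    """
    if random.random() < TWEETS_PER_DAY / 24:
        time.sleep(random.randint(0, 45) * 60)
        post_tweet()
```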

Online learning and intellectual honesty

In January I wrote that I believe online learning is possible, but I have doubts about whether online courses are an adequate substitute for in-person college classes, let alone an improvement. One of those doubts concerns trust and intellectual honesty.

Any course is an exchange. The students pay money to the college, the instructor gets a cut, and the students get something of value in return. What that something is can be disputed. In theory, the teacher gives the students knowledge: information and skills.

In practice, some of the students actually expect to receive knowledge in exchange for their tuition. Some of them want knowledge but have gotten discouraged. Some wouldn’t mind a little knowledge, but that’s not what they’re there for. Others just have no time for actual learning.

If they’re not there for knowledge, why are they there? For credentials. They want a degree, and the things that go with a degree and make it more valuable for getting a good job: a major, a course list, good grades, letters of recommendation, connections.

If learning is not important, or if the credentials are urgent enough, it is tempting to skip the learning and just go through the motions: pretending to learn, or pretending that you learned more than you did. Most teachers have encountered this attitude at some point.

I have seen various manifestations of the impulse to cheat in every class I’ve taught over the years. Some instructors might be tempted to treat teaching like any other transaction and look the other way; it is hard to make a living while being completely ethical. But I fought the cheating, for several reasons.

First, I genuinely enjoy learning and I love studying languages, and I want to share that enjoyment and passion with my students. Second, many of my students have been speech pathology majors. I have experienced speech pathology that was not informed by linguistics, and I know that a person who doesn’t take linguistics seriously is not fit to be a speech pathologist.

If that wasn’t enough, I was simply not getting paid enough to tolerate cheating. At the wages of an adjunct professor, I wasn’t in it for the money. I was doing it to pass on my knowledge and gain experience, and looking the other way while students cheated was not the kind of experience I signed up for.

I’ve seen varying degrees of dishonesty in my years of teaching. In one French class, a student tried to hand in an essay in Spanish; in his haste he had chosen the wrong option on the machine translation app. I developed strategies for deterring cheating, such as multiple drafts and a focus on proper citation. But I was not prepared for how much cheating I would find when I taught an online course.

The most effective deterrent was simply to get multiple examples of a student’s work: in class discussions, in small group work, in homeworks and on exams. That allowed me to spot inconsistent quality that might turn out to be plagiarism.

In these introductory linguistics courses, the homeworks themselves were minor exercises, mainly for the students to get feedback on whether they had understood the reading. If a student skipped a reading and plagiarized the homework assignment, it would usually be obvious to both of us when we went over the material in class. That would give the student feedback so that they could change their habits before the first exam.

The first term that I taught this course online, I noticed that some students were getting all the answers right on the homeworks. I was suspicious, but I gave the students the benefit of the doubt. Maybe they had taken linguistics in high school, or read some good books.

Then I noticed that the answers were all the same, and I began to notice quirks of language that didn’t fit my students. One day I saw that the answers were all in an unusual font. I googled one of the quirky phrases and immediately found a file of answers to the questions for that chapter.

I started searching around and found answers to every homework in the textbook. These students were simply googling the questions, copying the answers, and pasting them into Blackboard. They weren’t reading and they weren’t discussing the material. And it showed in their test results. But because this was a summer course, they didn’t have time to recover, and they all got bad grades.

I understood where they were coming from. They needed to knock out this requirement for their degree. They didn’t care about linguistics, or if they did, they didn’t have time for it. They wanted to get the work out of the way for this class and then go to their job or their internship or their other classes. Maybe they wanted to go drinking, but I knew these Speech Pathology students well enough to know that they weren’t typically party animals.

I’ve had jobs where I saw shady practices and just went along with them, but in this case I couldn’t do that, for the reasons I gave above. My compensation for this work wasn’t the meager adjunct pay that was deposited in my checking account every two weeks. It was the knowledge that I had passed on some ideas about language to these students. It was also the ability to say that I had taught linguistics, even online.

The only solution I had to the problem was to write my own homework questions, ones that could be answered online, but where the appropriate answers couldn’t be found with a simple Google search.

The next term I taught the course online, I had to deal with students sharing answers – not collaborating in the groups I had carefully constructed so that the student finishing her degree in another state could learn through peer discussion, but simply copying the homework a friend had already done. They did it on exams too, where they were supposed to be answering the questions alone. This meant that I also had to come up with questions whose answers were individual and couldn’t be copied.

I worked hard at it. My student evaluations for the online courses were pretty bad for that first summer, and for the next term, and the one after that. But the term after that they were almost as good as the ones for my in-person courses.

Unfortunately, that’s when I had to tell my coordinator that I couldn’t teach any more online courses, because teaching them right required a lot of time – especially when every assignment had to be protected against students googling the answers or shouting them to each other across the room.

The good news is that in this whole process I learned a ton of interesting things about language and linguistics, and how to teach them. I’ve found that many of the strategies I developed for online teaching are helpful for in-person classes. I’m planning to post about some of them in the near future.

The Photo Roster, a web app for Columbia University faculty

Since July 2016 I have been working as Associate Application Systems in the Teaching and Learning Applications group at Columbia University. I have developed several apps, including this Photo Roster, an LTI plugin for the Canvas Learning Management System.

The back end of the Photo Roster is written in Python, using Flask. The front end uses JavaScript with jQuery to filter the student listings and photos, and to create a flash card app to help instructors learn their students’ names.
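To illustrate the division of labor (a sketch with invented names, not the Photo Roster’s actual code), the back end can hand the roster to the browser as JSON and leave the filtering and flash cards to the jQuery front end:

```python
# Minimal Flask sketch: expose a course roster as JSON so the front end
# can filter it and build flash cards client-side. The route and the
# get_students() helper are invented for illustration.
from flask import Flask, jsonify

app = Flask(__name__)

def get_students(course_id):
    # Placeholder; a real app would pull this from the LMS or a database.
    return [{"name": "Example Student", "photo_url": "/photos/example.jpg"}]

@app.route("/roster/<int:course_id>")
def roster(course_id):
    """Return the students in a course as JSON."""
    return jsonify(get_students(course_id))
```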

This is the third generation of the Photo Roster tool at Columbia. The first generation, for the Prometheus LMS, was famously scraped by Mark Zuckerberg when he extended Facebook to Columbia. To prevent future release of private student information, this version uses SAML and OAuth2 to authenticate users and securely retrieve student information from the Canvas API, and Oracle SQL to store and retrieve the photo authorizations.
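For the Canvas side, here is a rough sketch of fetching a course’s students from the Canvas REST API. The real Photo Roster does this behind SAML and OAuth2, as described above; in the sketch a personal access token stands in for that flow, and the URL, course ID and token are placeholders:

```python
# Sketch of pulling a course roster from the Canvas REST API, following
# the paginated results via the Link header that Canvas returns.
import requests

CANVAS_URL = "https://canvas.example.edu"   # placeholder instance
TOKEN = "your-canvas-api-token"             # placeholder credential

def get_students(course_id):
    """Return the student enrollments for a course."""
    students = []
    url = f"{CANVAS_URL}/api/v1/courses/{course_id}/users"
    params = {"enrollment_type[]": "student", "per_page": 100}
    headers = {"Authorization": f"Bearer {TOKEN}"}
    while url:
        resp = requests.get(url, params=params, headers=headers)
        resp.raise_for_status()
        students.extend(resp.json())
        url = resp.links.get("next", {}).get("url")  # next page, if any
        params = None  # the next-page URL already carries the query string
    return students
```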

It would be a release of private student information if I showed you the Roster live, so I created a demo class with famous Columbia alumni, and used a screen recorder to make this demo video. Enjoy!

I just have to outrun your theory

The Problem

You’ve probably heard the joke about the two people camping in the woods who encounter a hungry predator. One person stops to put on running shoes. The other says, “Why are you wasting time? Even with running shoes you’re not going to outrun that animal!” The first replies, “I don’t have to outrun the animal, I just have to outrun you.”

For me this joke highlights a problem with the way some people argue about climate change. Spreading uncertainty and doubt about a competitor is a common marketing tactic, and as Naomi Oreskes and Erik Conway documented in their book Merchants of Doubt, the same tactic has been used by marketers against concerns about smoking, DDT, acid rain and, most recently, climate change.

In the case of climate change, as with fundamentalist criticisms of evolution, there is a lot of stress on the idea that the climate models are “only a theory,” and that they leave room for the possibility of error. The whole point is to deter a certain number of influential people from taking action.

That Bret Stephens Column

The latest example is Bret Stephens, newly hired as an opinion columnist by New York Times editors who should really have known better. Stephens’s first column is actually fine on the surface, as far as it goes, aside from some factual errors. Its message is simple: never trust anyone who claims to be 100% certain about anything. Most people know this, so if you claim to be 100% certain, you may wind up alienating some potential allies. And he doesn’t go beyond that; I re-read the column several times in case I had missed anything.

Since all Stephens did was say those two things, neither of which amounts to an actual critique of climate science or an argument that we should not act, the intensely negative reactions the column generated may seem a little surprising. But it helps if you look back at Stephens’s history and see that he has written more or less the same thing over and over again, at the Wall Street Journal and elsewhere.

Many of the responses to Stephens’s column have pointed out that if there’s any serious chance of climate change having the effects that have been predicted, we should do something about it. The logical next step is talking about possible actions. Stephens hasn’t talked about any possible actions in over fifteen years, which is pretty solid evidence of concern trolling: he pretends to be offering constructive criticism while having no interest in actually doing anything constructive. And if you go all the way back to a 2002 column in the Jerusalem Post, you can see that he was much more overtly critical in the past.

Stephens is very careful not to recommend any particular course of action, but sometimes he hints at the potential costs of following recommendations based on the most widely accepted climate change models. Underlying all his columns is the implication that the status quo is just fine: Stephens doesn’t want to do anything to reduce carbon emissions. He wants us to keep mining coal, pumping oil and driving everywhere in single-occupant vehicles.

People are correctly discerning Stephens’s intent: to spread confusion and doubt, disrupting the consensus on climate change and providing cover for greedy polluters and ideologues of happy motoring. But they play into his trap, responding in ways that look repressive, inflexible and intolerant. In other words, Bret Stephens is the Milo Yiannopoulos of climate change.

The weak point of mainstream science

Stephens’s trolling is particularly effective because he exploits a weakness in the way mainstream scientists handle theories. In science, hypotheses are predictions that can be tested and found to be true or false: the hypothesis that you can sail around the world was confirmed when Juan Sebastián Elcano completed Magellan’s expedition.

Many people view scientific theories as similarly either true or false. Those that are true – complete and consistent models of reality – are valid and useful, but those that are false are worthless. For them, Galileo’s observations of the planets demonstrated that the heliocentric model of the solar system is true and the model with the earth at the center is false.

In this all-or-nothing view of science, uncertainty is death. If there is any doubt about a theory, it has not been completely proven, and is therefore worthless for predicting the future and guiding us as we decide what to do.

Trolls like Bret Stephens and the Marshall Institute exploit this intolerance of uncertainty by playing up any shred of doubt about climate change. And there are many such doubts, because this is actually the way science is supposed to work: highlighting uncertainty and being cautious about results. Many people respond to them in the most unscientific ways, by downplaying doubts and pointing to the widespread belief in climate change among scientists.

The all-or-nothing approach to theories is actually a betrayal of the scientific method. The caution built into the gathering of scientific evidence was not intended as a recipe for paralysis or preparation for popularity contests. There is a way to use cautious reports and uncertain models as the basis for decisive action.

The instrumental approach

This approach to science is called instrumentalism, and its core principle is simple: theories are never true or false. Instead, they are tools for understanding and prediction. A tool may be more effective than another tool for a specific purpose, but it is not better in any absolute sense.

In an instrumentalist view, when we find fossils that are intermediate between species it does not demonstrate that evolution is true and creation is false. Instead, it demonstrates that evolution is a better predictor of what we will find underground, and produces more satisfying explanations of fossils.

Note that when we evaluate theories from an instrumental perspective, it is always relative to other theories that might also be useful for understanding and predicting the same data. Like the two people running from the wild animal, we are not comparing theories against some absolute standard of truth, but against each other.

In the case of climate change, instrumentalism simply says that certain climate models have been better than others at predicting the rising temperatures and melting glaciers we have seen recently. These models suggest that all the driving we’re doing and the dirty power plants we’re running are causing temperatures to rise, and that to reduce the dangers we need to reconfigure our way of living around walking and reducing our power consumption.
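To make “better at predicting” concrete in instrumental terms, here is a toy comparison with made-up numbers, not a real climate analysis: score two candidate theories against the same observations and prefer the one with the smaller error.

```python
# Toy illustration of instrumental model comparison. The numbers are
# invented and the "models" are deliberately crude; the point is only
# that theories are scored against each other on the same observations.

observed = [0.10, 0.18, 0.32, 0.45, 0.62]  # hypothetical anomalies, °C

def no_warming(decade):
    """Theory A: nothing we do changes the temperature."""
    return 0.10

def warming_trend(decade):
    """Theory B: temperatures rise steadily decade over decade."""
    return 0.10 + 0.13 * decade

def mean_squared_error(model, data):
    """Average squared gap between a model's predictions and the data."""
    return sum((model(i) - obs) ** 2 for i, obs in enumerate(data)) / len(data)

for name, model in [("no warming", no_warming), ("warming trend", warming_trend)]:
    print(f"{name}: MSE = {mean_squared_error(model, observed):.4f}")
# The model with the lower score is the better tool for prediction,
# a relative judgment rather than a claim of absolute truth.
```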

Evaluating theories relative to each other in this way takes all the bite out of Bret Stephens’s favorite weapon. He never makes it explicit, but he does have a theory: that we’re not doing much to raise the temperature of the planet. If we make his theory explicit and evaluate it against the best climate change models, it sucks. It makes no sense of the melting glaciers and rising tides, and has done a horrible job of predicting climate readings.

We can fight against Bret Stephens and his fellow merchants of doubt. But in order to do that, we need to set aside our greatest weakness: the belief that theories can be true, and must be proven true to be the basis for action. We don’t have to outrun Stephens’s uncertainty; we just have to outrun his love of the status quo. And instrumentalism is the pair of running shoes we need to do that.