Anti-cheating robots

Paul Alper writes:

Surely you would like to comment on the amazing escalation in the anti-cheating tech world. I predict it will be followed by some clever software which makes it appear that the student enrolled is actually the one taking the exam. Reminiscent of the height of the Cold War, with counterweapons and counter-counterweapons. You may believe that the students sitting in your class are actually there, but how long will it be before someone comes up with software producing holograms of your students? Or of you?

Alper adds:

I also came across this amazing link concerning Big Data and profiling via “Stoplight.”

Inasmuch as you and most of your blog followers are Bayesian, note the chilling way priors exist for a student’s grade:

The profile shows a red light, a green light, or a yellow light based on things like have you attempted to take the class before, what’s your overall level of performance, and do you fit any of the demographic categories related to risk.

And, it appears that said priors may never get overridden by subsequent data:

These profiles tend to follow students around, even after folks change how they approach school.
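
For contrast, a genuine prior gets overwhelmed by data. Here’s a minimal sketch with made-up numbers (a conjugate beta-binomial update on a pass probability; nothing here comes from the actual Stoplight system):

```python
# Made-up numbers: a pessimistic prior on a student's chance of passing,
# encoded as a Beta(2, 8) distribution (prior mean 0.2, a "red light").
prior_pass, prior_fail = 2, 8

# Hypothetical subsequent record: the student passes 9 of 10 courses.
passes, fails = 9, 1

# Conjugate beta-binomial update: the posterior mean moves with the data.
posterior_mean = (prior_pass + passes) / (prior_pass + prior_fail + passes + fails)
print(posterior_mean)  # 0.55 -- the red light should have faded by now
```

If the Stoplight profiles really never update, they’re not functioning as priors at all.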

What struck me is how they decide who gets monitored. The first link, a news article by Natasha Singer, describes a pretty invasive system installed at Rutgers University:

Once her exam started, Ms. Chao said, a red warning band appeared on the computer screen indicating that Proctortrack was monitoring her computer and recording video of her. To constantly remind her that she was being watched, the program also showed a live image of her in miniature on her screen. . . .

As universities and colleges around the country expand their online course offerings, many administrators are introducing new technologies to deter cheating. The oversight, administrators say, is crucial to demonstrating the legitimacy of an online degree to students and their prospective employers.

I think what they’re really saying is that they don’t want to pay instructors and teaching assistants. Indeed:

Ms. Chao [a student interviewed for the news article] said administrators had since offered to provide her with a live human proctor for a fee of $40 per exam.

So, yeah, the robot is replacing the human teacher. Seems like a problem, though, in that the teaching assistant doesn’t just verify students for exams. The T.A. is also supposed to get to know the student a bit and offer some individualized instruction.

I was also amused by the Rutgers connection, as I was reminded of Frank Fischer, an elderly professor of political science who was caught copying big blocks of text (with minor modifications) from others’ writings without attribution. This all happened several years ago, but Fischer is still listed as a professor on the Rutgers website. It seems a bit unfair that the students there are subject to Proctortrack and the faculty can just do whatever they want.

P.S. Thinking more about it, I’m not “amused” by the Rutgers connection, I’m actually angry that they’re surveilling students in this way while tolerating plagiarism by a professor.

9 thoughts on “Anti-cheating robots”

  1. This talk of cheating and the Cold War reminds me of a Ukrainian macro professor I once had who bragged that he was better at cheating than any of his students could ever be.
    Before our first test, he brought in this watch he’d made in college. He’d taken out most of the insides, replaced the watch face with a magnifying lens, written notes in tiny handwriting on a piece of cigarette paper, and wound the paper around the mechanism, so he could scroll through his notes while appearing to wind his watch.

    • This guy reminds me a little of a guy I knew in high school. He enjoyed car racing and figuring out clever ways to cheat. He was very bright (we were at the same table in physics lab), and I got the impression that he wouldn’t cheat anyone who didn’t deserve comeuppance; in other words, the type of person who might make a good white hat hacker. I often wonder how he ended up in life.

      • Interesting. I knew a similar guy who would write notes on very tiny pieces of paper before a test. When I once told him that if he spent half the time studying that he spent on the tiny notes he would not need to cheat, he responded that he never used them during the test, since by then he remembered the information. As with much of human behavior, he could not say why he preferred making the tiny notes. Though it might be telling that his later career involved building things requiring similar skills.

  2. “do you fit any of the demographic categories related to risk”

    Just to clarify… they are sending predictions of student quality to professors before the class begins based on racial profiling?

    Also, you missed another chance to point out hypocrisy and insanity at my least favorite university: “Arizona State is using Facebook data to improve retention by understanding a student’s social network. They take not participating in a social network as a sign that students might be thinking of dropping out.”

    Good grief. Wasting time on Facebook is a sign you’ll stay in school? They could sure use someone who does cultural consciousness training there… wait, the well-connected media-savvy fraud who… oh snap Google: http://www.azcentral.com/story/news/local/tempe/2015/07/10/edu-popular-asu-professor-matthew-whitaker-demoted-plagiarism-incident/30000997/

    • Which raises this question that has always bugged me:

      Suppose I, as a professor, analyse my last three decades of cheating data on tests and discover that 90% of the copied assignments came from (say) Indian students, although Indian students made up only 20% of my classes.

      Subsequent to this, if I instruct the graders to pay special attention to Indian students’ assignments, is that unfair, unethical racial profiling or a rational, pragmatic response?

      The question bothers me in contexts way beyond cheating on homework.
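
      For concreteness, here’s the arithmetic those hypothetical numbers imply (using the 90%/20% figures above plus an assumed overall cheating rate, which cancels out of the final ratio):

      ```python
      # Hypothetical figures from the question above: 90% of copied assignments
      # come from a group that makes up 20% of the class.
      share_of_cheaters = 0.90
      share_of_class = 0.20
      overall_rate = 0.05  # assumed base rate of cheating; any value works

      # Bayes' rule: P(cheat | group) = P(group | cheat) * P(cheat) / P(group)
      rate_in_group = share_of_cheaters * overall_rate / share_of_class              # 0.225
      rate_outside = (1 - share_of_cheaters) * overall_rate / (1 - share_of_class)   # 0.00625

      print(rate_in_group / rate_outside)  # relative risk: 36x, whatever the base rate
      ```

      Whether a 36x relative risk justifies differential scrutiny is exactly the ethical question; the arithmetic alone doesn’t answer it.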

  3. The Proctortrack – Frank Fischer analogy seems a bit of a stretch.

    It’s like asking why anyone at Columbia submits honest receipts for reimbursement anymore, when Columbia still retains Sudhir Venkatesh on the faculty.

  4. “Inasmuch as you and most of your blog followers are Bayesian, note the chilling way priors exist for a student’s grade:

    The profile shows a red light, a green light, or a yellow light based on things like have you attempted to take the class before, what’s your overall level of performance, and do you fit any of the demographic categories related to risk.

    And, it appears that said priors may never get overridden by subsequent data:”

    One of the things I’ve discussed with other Bayesians regarding Frequentist vs. Bayesian philosophy is that Frequentist statistics puts the emphasis on the distribution of an infinite number of future trials, whereas in Bayesian statistics we have a logical basis for a distribution over a single unrepeatable event. Of course there are both Frequentist and Bayesian methods for time series, but I do think that if your training is Frequentist there’s a tendency to collapse a time series into repeated trials from a single distribution, whereas for a Bayesian it’s more natural to model something like academic achievement as a time series. People who start out with a lot of Frequentist intuition can carry it over into their Bayesian analyses as well.

    All that is to say: my impression is that the problem lies less with the prior and more with a data model that implicitly treats future achievement as draws from the same distribution as past achievement.
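
    As a toy illustration of that difference (an invented grade history, not anyone’s actual model): forecasting with the pooled mean lets early grades weigh on the student indefinitely, while even a crude time-series model lets recent performance dominate.

    ```python
    # One student's hypothetical grade history: weak early, much stronger late.
    scores = [55, 58, 52, 60, 75, 82, 85, 88]

    # "Repeated draws from one distribution": forecast with the pooled mean.
    iid_forecast = sum(scores) / len(scores)  # 69.4 -- early grades never fade

    # A crude time-series alternative: exponential smoothing, so recent
    # observations dominate and the profile can actually be overridden.
    alpha = 0.5  # smoothing weight; an arbitrary choice for illustration
    level = scores[0]
    for s in scores[1:]:
        level = alpha * s + (1 - alpha) * level

    print(iid_forecast, level)  # 69.4 vs. about 83.8
    ```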

    • True, if the model is “past approximates future.” But if you fit the model to prior data on the whole time series and derive the predictive properties from that, I don’t see where the statistical problem is. The fairness problems cannot be eliminated so easily, though. And then we run into a heightened-scrutiny paradox: the more you investigate people fitting a certain profile, the more cheaters you find among them, and the more your algorithms will investigate that particular group, and so on. But this effect is so obvious that I hope the people responsible are correcting for it. We don’t want to repeat the successes of the war on drugs, do we?
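
      That feedback loop is easy to see in a toy simulation (all numbers invented): two groups with identical true cheating rates, but audits allocated in proportion to past catch counts.

      ```python
      import random

      random.seed(1)

      TRUE_RATE = 0.05   # identical true cheating rate in both groups (assumed)
      AUDITS = 200       # audits available per round
      catches = {"A": 1.0, "B": 1.0}  # symmetric pseudo-counts to start

      for _ in range(20):
          total = catches["A"] + catches["B"]
          for g in catches:
              # The loop: scrutiny is allocated by past catches, not catch rates.
              n_audits = int(AUDITS * catches[g] / total)
              catches[g] += sum(random.random() < TRUE_RATE for _ in range(n_audits))

      print(catches)  # same true rates, yet recorded counts typically diverge
      ```

      Allocating by estimated catch rates per audit, rather than raw counts, removes the runaway; presumably that’s the kind of correction hoped for above.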
