Unexpected misconceptions in an intro to computer science class


I grew up in the ’90s. My first computer was monochromatic, with no internet access. I was overjoyed when we finally got Windows 95 and I could learn to play Minesweeper. Now I teach coding to students who watched YouTube as toddlers and got smartphones in middle school. If you plan to teach students how computers work, then you need to know how your students experience technology. Only then can you make the connections that will enable them (and you!) to build on prior knowledge.

Here’s what my students believed at the beginning of this school year.

Misconception 1: It’s not a computer if it doesn’t have internet access.

I begin each year by asking students to define “computer.” Then we check how their definitions would categorize certain examples, so that they can refine their definitions. For example, one student may say, “A computer is a way of getting in contact with people.” So then I ask, “Is a postcard a computer?” They say no. I point out that a postcard fits with their definition. And so the student is encouraged to refine, perhaps by adding the requirement that a computer must use electricity. And so on.

I was surprised to hear that students don’t think something is a computer unless it has internet access! It was quite the conversation: first, my students all agreed that a laptop is a computer. Then I asked whether it’s still a computer when the laptop has no internet access, and they unanimously agreed it’s not! The internet has become so pervasive that students consider it inherent to what makes something a computer. Below is the unedited Zoom chat from class, with student names redacted for privacy. (Hopefully this pandemic ends soon, and future readers will need to be reminded that we taught school on Zoom in 2020–2021.)

I also ask students to define “internet,” which they struggle to do. They are surprised to learn that the internet is just a way for a bunch of computers (and servers) to talk to each other. Which means you still need to define what a computer is.

Misconception 2: Computers are for content, not computation.

My high schoolers constantly generate and share content on myriad internet platforms. To them, the purpose of a computer is to create and share videos, images, and text. They don’t think about the computing power behind business processes, big machines, or smart devices. They think about the interface between themselves and their phone or laptop. Many of my students believe that computers were originally built because there was no mechanism for instantaneous communication. Some don’t realize that the telephone pre-dated the computer by decades!

It’s fun to surprise students by telling them that the first computers were built for computation. I get lots of “oh, yeah…” when I point out that “computer” and “computation” share a word root. Our species took a big step forward when we built computers to do our rule-based calculations faster and more accurately. And today, in our increasingly complex world, almost everything relies on algorithms. I ask my students: “You may focus on writing a tweet, but how does Twitter decide who sees your tweet and how high up it appears on a given person’s feed? How does Twitter decide whom to suggest you follow next, based on what you tweeted?”

I need my students to understand how much their lives are affected by algorithms. It gives them the necessary impetus to learn about algorithmic bias, so that even if they don’t write algorithms as a career, they’re still sufficiently informed to question algorithms’ outcomes. I do enjoy teaching students to design content generation and presentation (the “front end”), for example using HTML/CSS/JavaScript. But I make sure they know that there’s also a “back end,” and that back-end algorithms drive much, much more of this world than my students are likely to imagine.

Misconception 3: “Coder” and “user” are the same thing.

The phenomenon is this: students can’t tell the difference between where they write their code and where they test their code. Although I always start class with an introduction to the IDE, explaining the difference between the code and the terminal or console, students regularly struggle with the concept. For example: I ask students to write a program that asks a user for their name and then greets the user with their name. Students will put their own name in the code itself, and they are confused when I explain that the code is meant to apply generally to any user.
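To make the coder/user distinction concrete, here is a minimal sketch of the exercise in Python. (The exact language and function names are my illustration, not necessarily what any particular class uses.) The key point is that the code never contains a specific student’s name; the name arrives at run time from whoever uses the program.

```python
def greet(name):
    """Return a greeting for any user -- the coder never hardcodes a name."""
    return f"Hello, {name}!"

# In class, the name comes from the user at the moment the program runs:
#   user_name = input("What is your name? ")
#   print(greet(user_name))
# Here, "Ada" stands in for whatever a user might type at that prompt.
print(greet("Ada"))
```

The students’ instinct is to write `print("Hello, Maria!")` with their own name baked in; the leap is seeing that the coder writes a general rule and the user supplies the particulars later.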

I’d love to hear whether others have an explanation for why this is particularly challenging for students who are new to CS. I see it in the majority of my high school students who are new to CS, but I haven’t read about it in any CS education books or articles. Perhaps the IDE feels to them like yet another app platform, and they are used to interacting with apps as themselves rather than as abstract designers? Please reach out directly, or share your ideas in the comments.

A final note: it takes work to reveal these misconceptions.

I only know about my students’ prior computer knowledge because they told me. And they only told me because I asked. The better we unearth our students’ incoming understanding, the better we’ll be at helping them build new knowledge. And — as an added benefit — I get to learn new ways of thinking about technology, too.

I teach physics and computer science in East Harlem, New York. I aim to engage.