I ceaselessly tinker with my courses from semester to semester, and lately I’ve been doing a lot of thinking about my practice with the survey course. In the spirit of full disclosure, I should note that my stance on the survey has always been ambivalent, and there are days when my attitude towards it has been downright jaundiced. My personal ambivalence is reflected in the field at large; the US history survey occupies a weird place in our discipline. In many institutions, it’s simultaneously a required general-education course and the class faculty most often try to avoid teaching. It’s important for non-majors to take it, we argue, because historical literacy is important, and one cannot be an informed, contributing citizen without a basic grounding in US history. If it’s so important, though, why is the survey course treated like a pedagogical hot potato, hurriedly passed on by those who are in a position to refuse putting it on their schedule? I would rather teach two seminars this semester, the Full Prof explains. Well, we’ll just adjunct it out, or find a grad student, the Department Chair shrugs. But woe be to the foolhardy soul who suggests that the traditional survey no longer be required. OMFG DO YOU WANT TO DESTROY OUR SOCIETY?!? WHO WILL TEACH THE CHILDREN?!?
Sure, there are exceptions to the trend, and there are some great examples of the survey taught creatively and quite well (this blog is a veritable showcase of them). But overall, History departments and their faculty tend to have a complicated, love-hate-but-often-hate relationship with the US survey. This stems, I think, from a fundamental misalignment between the survey’s goals and its implementation. We present the course as a guarantee of historical literacy–students need to understand US history to be legitimately educated and informed citizens–but we also sense that the course’s conceptual flaws often prevent us from making good on that guarantee. These flaws spring from our struggle to define exactly what we mean when we talk about “historical literacy.” An example: several years ago, my department was struggling with how to assess our major–how could we demonstrate that our students learned what we said they would learn as a result of majoring in History? One colleague pulled out a ten-page list of terms that he got from somewhere, purporting to be a “historical literacy test.” There were literally hundreds of them, from Hammurabi’s Code to NAFTA and back again. Why don’t we just use this to test them, he asked. After all, don’t we want to prove our students are historically literate? Now, before you roll your eyes right out of your head trying to imagine getting your seniors to do a 750-question historical vocabulary test, think about it. Isn’t this just “historical literacy” carried to its logical conclusion? If you’re not sure, consider how we talk about what we do in the survey course: we “cover” stuff. Today, I covered slavery and the sectional crisis. We check boxes as we and our students sprint through the nickel-tour of US history. Over there, you’ll see the Civil War. Look quickly–the North won. Any questions? OK…aaannnd we’re walking. Keep up with the group, please.
But coverage does not beget literacy. Nor does it require the type of critical thinking or higher-order learning we seek to inculcate in students as part of their college education. This isn’t a new argument; Lendol Calder, for example, has ably demonstrated the shortcomings of the coverage approach. But as long as we approach historical literacy as a primarily content-driven phenomenon, we’ll never fully get out from underneath the coverage model. And in our day-to-day practice, I’d argue, it’s still the default standard for the survey. In departments that employ a standard book and/or syllabus, especially for sections taught by contingent faculty, “coverage” remains paramount. We implicitly assume that students who complete the US survey have thoroughly “covered” US history. They’ve been exposed to all of the stuff they need to be historically literate. They could, if asked, complete the type of assessment my departmental colleague proposed. And any modification to the survey, or getting rid of it altogether, would, as one recent polemic suggests, simply be caving in to a leftist cabal and surrendering our discipline’s integrity. We must have the survey, we cry. Without it, how will our students learn about what makes us, well….us?
That’s the story we tell ourselves. And it’s pure fantasy.
Leave aside for the moment the fact that merely putting material in front of students is no way to guarantee learning, and consider that no other discipline makes this type of claim about its survey course. My colleagues in the Psychology Department don’t say that PSYC 101 renders a student proficient enough to become a therapist. Biology 101 isn’t sold as the only course one needs before operating on someone. I can’t take Geology 101 and expect the USGS to let me come along when their team goes down into the volcano’s caldera (alas). But somehow the history survey renders a student perfectly conversant–nay, intimately familiar with–US History? That’s absurd, and we need to stop presenting–and defending–the US survey this way.
Like introductory courses in other disciplines, our survey course should promote our field’s habits of mind–it should get our students thinking like historians. We shouldn’t aim for mere civic boosterism, or for telling them “what happened” and why. At its best, the US history survey should develop our students’ metacognitive habits; they should be learning about how to learn. The survey course should engage students in seeing past and present as occupying a continuum, as opposed to separate and distinct silos. It should develop students’ abilities to empathize with different perspectives and understand the ways in which people have constructed their identities. It should introduce students to an appreciation of studying the past not only for its inherent interest, but also because the skills with which they do so are critical components of their academic success.
But in order for the survey course to accomplish all this, we have to let go of “covering” the material. We have to admit to ourselves that no matter how witty or interesting we think our detailed narrative lectures are, they aren’t the medium most suited for encouraging historical thinking. We have to understand that even if our students don’t hear from us every detail about the Stamp Act Riots/Kansas-Nebraska Act/Election of 1896/whatever our favorite thing is, American society will find a way to soldier on. So what might a survey that de-emphasizes coverage in favor of metacognitive engagement look like? It might be a treatment of a period of US History structured around a question: what does it mean to be “American?” What has “Freedom” meant throughout US history? Or perhaps it would have a thematic focus: race in US history, US history in global context, immigration and migration. Or it could be something that gets students to interrogate concepts they’d never thought to question before; Caleb McDaniel’s backwards survey is an excellent example of complicating seemingly familiar material to more deeply engage students. But however it looks, emancipating our courses from the shibboleth of coverage is, I would argue, the sine qua non of effective pedagogy for college-level History. When the survey becomes akin to a history workshop, when it looks and sounds like a science lab where students are collaborating and doing history rather than just hearing about it, then we’re doing right by our students and our discipline.
How do you approach the survey course? Do you wrestle with “coverage?” Let us know in the comments! Also, Kathryn Jewell, on this very blog, is collecting data on precisely these issues. Click here to add your input to the survey before it closes on April 15.