Oppia

In the fall, I will most likely be teaching the survey online.  One of the difficult parts of teaching a large lecture course, online or onsite, when both time constraints and classroom size discourage discussion, is ensuring that students are doing and understanding the reading.  While some instructors prefer to simply wait for paper assignments and exams to check up on their students, I'm a bit too obsessive for that approach.  One solution is to offer brief quizzes, perhaps in a multiple-choice or word-identification format.  In the physical classroom, it's easy enough to pass out and collect the occasional quiz, and in technology-assisted courses, both onsite and online, there are usually quiz tools built into whatever platform one is using, such as Blackboard or Angel.  However, I find those built-in tools difficult to work with.  Furthermore, I'm not aware of any such platform that takes advantage of the collaborative capabilities of Web 2.0 in the way that, say, wikis do.  It's nice to be able to work with other instructors to refine and improve our diagnostic tools, but I don't know of many good online tools for that purpose.
So I was pleased to learn of a tool that Google has recently and quietly announced, called Oppia.  At its most basic, Oppia allows one to create web-based testing tools and learning modules quickly and easily, so long as those tools can be graded according to a simple set of rules (e.g. multiple-choice questions, ID questions, or the ordering of events along a timeline, but not essays).  However, one can also add more layers of interactivity, which I see as useful in a couple of different scenarios.  If a student answers a multiple-choice question correctly, for example, that can prompt a second and more difficult question that provides a better sense of how much the student knows, or that can be used to offer extra credit unavailable to students who failed to answer the first question correctly.  (Many computer-based assessment tests, such as the GRE, operate on this adaptive principle, which boosts their accuracy.)  Furthermore, if a student gives only a partial answer, the instructor-programmed learning module can prompt the student to be more specific; this potentially resolves one of students' most frequent complaints, the expectation of full credit for an incomplete answer.
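To make that branching idea concrete, here is a minimal Python sketch of the logic described above.  To be clear, this is not Oppia's actual exploration format or API; the Question class, the run function, and the sample questions are all hypothetical, meant only to illustrate how a correct answer can unlock a harder follow-up while a partial answer triggers a request for more specificity.

```python
# Minimal sketch (NOT Oppia's actual data format) of a branching quiz rule:
# a correct answer unlocks a harder bonus question, a partial answer prompts
# the student to be more specific. All names and questions are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Question:
    prompt: str
    correct: set                                  # answers accepted as fully correct
    partial: set = field(default_factory=set)     # answers that earn a "be more specific" prompt
    follow_up: "Question | None" = None           # harder question unlocked by a correct answer


def run(question: Question, answer: str) -> str:
    """Grade one answer and decide what the learner sees next."""
    normalized = answer.strip().lower()
    if normalized in question.correct:
        if question.follow_up:
            return f"Correct! Bonus question: {question.follow_up.prompt}"
        return "Correct!"
    if normalized in question.partial:
        return "You're on the right track -- can you be more specific?"
    return "Not quite. Review the reading and try again."


# Hypothetical example usage:
bonus = Question(
    prompt="In what year was the Stamp Act repealed?",
    correct={"1766"},
)
q1 = Question(
    prompt="Which act taxed printed materials in the colonies?",
    correct={"the stamp act", "stamp act"},
    partial={"a tax act", "a british tax"},
    follow_up=bonus,
)

print(run(q1, "Stamp Act"))        # correct: unlocks the bonus question
print(run(q1, "a british tax"))    # partial: prompts for more specificity
```

In a real Oppia module the instructor defines these rules through the web interface rather than in code, but the underlying decision logic is of this general shape.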
The second advantage of Oppia is that it allows educators to collaborate in designing testing and learning modules.  I'm a big believer in online collaboration, and I think it's particularly helpful for teachers confronted with the challenge of covering a huge amount of course material.  I always find myself realizing after the fact that I hadn't covered a particularly important topic, at least not in the depth I'd wanted.  Being able to share and build on existing testing tools allows us as educators to avoid such oversights, and to overcome the biases inherent in our own training and research interests.  It also allows us to individualize our courses in ways that are discouraged by the online tests provided by textbook publishers' websites.  In effect, Oppia allows us to do for certain formats of tests what collaborative projects such as The American Yawp are doing for textbooks.
I haven’t yet designed any Oppia modules, but I plan to incorporate the platform into my teaching in the near future, and will be sure to provide an update when I do.  In the meantime, I’m curious to know what others think of Oppia and other web-based collaborative tools out there for designing learning and testing materials.
