
Rethinking Formative Assessment

We've seen increased emphasis placed on formative assessment in the legal academy. Standard 314 of the ABA Standards requires that law schools use both formative and summative assessment methods in their curricula. Its rationale for doing so is "to measure and improve student learning and provide meaningful feedback to students." The ABA defines formative assessment methods as "measurements at different points during a particular course or at different points over the span of a student's education that provide meaningful feedback to improve student learning."

Those of us in the legal research instruction business are no strangers to formative assessment. We are leaders in this area of the law school curriculum; rarely does a class go by in which students do not practice their skills. Lately, though, I've been wondering whether I'm going about formative assessment in the way that will best provide meaningful feedback to students. In the mandatory workshops we put on for our first-year students, we focus intensely on Rombauer's method--research as a process, not a mere gathering skill. More often than not, however, our ungraded formative assessments, while disguised as open problems because they begin with a client-based fact pattern, are really designed to lead students from source to source--effectively a treasure hunt that walks them through the process.

Now, I'm not entirely opposed to treasure hunts as a tool for teaching the mechanical side of research. But if we purport to be teaching our students process and analysis, we need to let them engage in that process through ungraded assessments in which we are not directly telling them which sources to use and in which order. Otherwise, their first real opportunity to engage openly in the research process comes on their graded open memos--which, at least in my students' case, are graded by their legal writing professors, not the legal information experts who taught them the four-step process in the first place. As a result, the feedback is spread across any number of topics--technical writing, style, and more--in addition to which sources they found. Students are not getting meaningful feedback centered primarily on the research process they used.

Students need practice working through open research problems without the pressure of a looming grade. Otherwise, they fixate on finding the "right" answer or sources rather than engaging in and absorbing the process. Freed from worry about producing a graded written product, students can take in the research process fully because their cognitive load is lighter. This also helps students view research as more than a rote, mechanical task of gathering authorities, because they can isolate the skills necessary for successful research from those needed for successful writing.

In my case, this means creating assignments that students may not be able to complete in the allotted 50-minute session--or at least that we don't have time to review in that short window. This may require buy-in from the legal writing professors to let us assign homework, perhaps backed by participation points, because if an assignment isn't sanctioned by the professors responsible for students' grades, students may not take it seriously. We must be willing to have conversations with our legal writing colleagues about creative ways to incorporate ungraded formative assessments into the curriculum (in those situations where we are not the "grading" professor). We need to be upfront with them about exactly what we are trying to teach our students--process and analysis--and why this particular type of assignment is necessary.

This also requires that we be willing to review assessments from the entire first-year class, which may be a challenge depending on how many instructional librarians you have and how much other for-credit and non-credit teaching they are doing. One way to get around collectively grading ~140 1L assessments might be to create a video walking through the research process you'd use for a given problem. But this isn't a perfect solution, as there are often multiple ways to move through a research problem successfully. I always lean toward giving individualized feedback based on students' attempts--even better if that feedback comes in a conference so I know students are absorbing it. Still, the most important point is that students are 1) getting a chance to practice open problems and 2) receiving some kind of meaningful feedback. After all, meaningful feedback is our best way to ensure that our students will be able to conduct research successfully in practice. It's also our best way to demonstrate to our students that research is a process that requires critical thinking.
