Testing the Value of Librarian Interventions Is No Easy Task

A recurring question for Blended Librarians is whether their work has an impact on student academic success. This is part of the larger conversation about the impact academic librarians have on student learning. Do our contributions make a difference? How so?

The challenge is figuring out how to connect librarian interventions to the achievement of student learning outcomes – or to other indicators such as grades, retention, or persistence to graduation.

Among the most common librarian interventions are classroom instruction support and direct consultations with students, but there are others. Blended Librarians are known to design research assignments collaboratively with faculty. They may create research guides or tutorials that students use to support their research activity. Blended Librarians may embed themselves in learning management system courses to participate in course discussions or to be more readily available for direct student support. And there are any number of ways in which a Blended Librarian can introduce educational technology or applications that support student research and help students learn to conduct research more effectively.

Several years ago a few colleagues and I wanted to learn whether our library research guides had a beneficial impact on students. Did they help students achieve better grades in the course? So we devised a research project aimed at determining whether they made a difference. We enlisted two different sections of the same course and conducted a quasi-experimental study involving a research assignment. The experimental group was exposed to the research guide, and a librarian came to the class to introduce it. The control group had access to the guide, but it was not promoted to them. The results were inconclusive. The greatest challenge was our inability to control all of the actual and potential factors that interfered with the experiment.
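For readers who want to picture the analysis end of such a study, here is a minimal sketch in Python of the kind of comparison we were after. The scores are invented for illustration, and this is not our actual data or code; Welch's t-test is just one reasonable way to compare two sections without assuming equal variances.

```python
# A minimal sketch of comparing research-assignment grades between a
# "treatment" section (guide actively introduced by a librarian) and a
# "control" section (guide available but not promoted).
# All scores below are invented for illustration.
from scipy import stats

treatment_scores = [78, 85, 92, 74, 88, 81, 90, 69, 84, 77]  # hypothetical
control_scores   = [72, 80, 75, 83, 68, 79, 74, 86, 71, 70]  # hypothetical

# Welch's t-test does not assume the two sections have equal variances.
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Even a small p-value would not settle much here: as described above,
# uncontrolled differences between sections (instructor, schedule,
# student mix) can produce or mask an apparent effect.
```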

What we learned is that educational research is quite challenging – and we hardly even got to the point of connecting outcomes of the research assignment to student grades or overall academic success. Back then, if we had had access to this recent post by Marsha Lovett on “Five Common Pitfalls in Educational Research (and how to avoid them)”, things might have turned out differently.

For one thing, Lovett writes:

In real-life educational settings, it’s a nontrivial challenge to conduct a true experiment with random assignment to conditions or, failing that, to design a quasi-experimental study with adequate control … [but] … if you want to conduct research comparing instructional conditions and you want to generate meaningful conclusions, you need more than a solid experimental design; you also need to avoid these all-too-common pitfalls.

Here are those common pitfalls:

  • Using final course grades as a measure of student learning – Grades and GPAs are often sought out by librarians as indicators of the value of library use and interventions, particularly when big data is being crunched (e.g., does coming to the library or borrowing books affect GPA?). Simply put, in a course-based research experiment there are too many confounding factors for a final grade to isolate the effect of a single intervention.
  • Asking students to assess their own learning gains – This sounds like something we academic librarians have done all too often, such as asking students immediately after an instruction session whether they have gained self-efficacy in conducting research. This strategy introduces considerable bias into how students report their own learning.
  • Not allowing enough time for the intervention to be integrated into the course, or not allowing the instructor time to learn how to use it – I imagine that if we had given the instructors in our quasi-experimental project more time to really understand how the research guide could be used by their students, it could have made a real difference in how they integrated it into the course. If it’s only there because the librarian put it there, it is likely to be of less value to the student.
  • Students in the “treatment” group may simply fail to use the intervention – In our experiment we built it, but we really don’t know much about why the students didn’t come. There has to be some process or mechanism built into the research to ensure the students will indeed connect with the librarian intervention, or else the research is doomed to fail (one rough way to check this from usage logs is sketched after this list).
  • Focusing on nothing more than student outcomes – If all we are looking for is a final grade, or a student’s comment on or ranking of their own learning, it’s possible that something important about the learning process will be missed. In other words, we need to find out the why in addition to the what.
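On that fourth pitfall, if the guide platform logs page views it is at least possible to check after the fact who in the treatment group actually opened the guide. Here is a minimal sketch under that assumption; the CSV file name, column names, and student IDs are all invented for illustration.

```python
# Sketch: identify which treatment-group students actually opened the
# research guide, using a hypothetical CSV export of access logs with
# "student_id" and "page_views" columns. All names here are invented.
import csv

treatment_group = {"s101", "s102", "s103", "s104", "s105"}  # hypothetical IDs

viewers = set()
with open("guide_access_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        if int(row["page_views"]) > 0:
            viewers.add(row["student_id"])

used = treatment_group & viewers
print(f"{len(used)} of {len(treatment_group)} treatment students opened the guide")

# If many treatment students never opened the guide, comparing whole
# sections understates whatever effect the guide has, since those
# students never actually received the "treatment."
```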

Too often the prospect of dealing with all the things that can go wrong in a library-based experimental research project causes us to abandon the idea altogether. Instead we fall back on crunching big data and looking for correlations between some form of library usage (e.g., borrowing books, logging in to databases, etc.) and grades, or we disseminate a survey questionnaire to a big discussion list and ask what everyone else is doing or thinking. While we can quite possibly learn something from those types of research, it seems that as library scientists we should be capable of doing something more experimental in nature.
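To make the contrast concrete, that big-data fallback usually boils down to something like the sketch below, with invented numbers. Whatever correlation it reports, it cannot tell us whether library use causes better grades or whether motivated students simply do both.

```python
# Sketch of the usual "big data" fallback: correlate a library usage
# measure with GPA across students. The numbers are invented, and a
# correlation like this cannot establish causation.
import pandas as pd

df = pd.DataFrame({
    "books_borrowed": [0, 2, 5, 1, 8, 3, 0, 6, 4, 7],                      # hypothetical
    "gpa":            [2.4, 3.0, 3.5, 2.8, 3.7, 3.1, 2.6, 3.4, 3.2, 3.6],  # hypothetical
})

r = df["books_borrowed"].corr(df["gpa"])  # Pearson correlation by default
print(f"r = {r:.2f}")

# A strong r may simply mean that motivated students both borrow more
# books and earn higher grades, which is exactly the kind of interfering
# factor the experimental pitfalls above warn about.
```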

Take a look at Lovett’s recommendations for how to avoid these five pitfalls. As Blended Librarians we do a good job of designing and implementing the kinds of learning interventions that can help our students achieve academic success. We now need to do a better job of assessing whether our efforts are making a difference for our students.
