Recently, I have increasingly heard, and used, the word rubric, and have been thinking about the use of rubrics in collection development. As I started writing this blog post, I realized I wasn't sure exactly what rubric meant, so I took a quick look in the Oxford English Dictionary and found, to my surprise, that rubric can be used as a noun, adjective, and verb. The definitions are varied and quite fascinating, and I encourage you to take a look. The definition that comes closest to what I had in mind is A.1.b., which refers to the setting of rules (a word I also looked up). For the purposes of this blog post, I am thinking broadly about a rubric or something similar, perhaps a checklist, as an organized setting of priorities and a framework for evaluation.
How does this apply to collection development, or more exactly, to collections at Syracuse University Libraries? Collection development is made up of (among other things) a steady stream of decisions: whether to acquire a specific title or a bundle of titles, whether to renew or add a subscription, whether to buy one resource or another. There are far too many available titles to add them all, so decisions have to be made about which ones to choose. How to decide? Sometimes there are compelling reasons, such as price, which make decision making somewhat easier. But absent the question of price, what criteria best inform decision making, to ensure we are making good choices and meeting the needs of the user community? Further, do the criteria differ depending on the format or discipline? In other words, would the criteria we use to evaluate and prioritize, for example, journal subscriptions differ from the way we would evaluate news sources, video, audio, data, or digitized primary sources (to name a few examples)? I suspect the answer is yes.
With so many interesting digitized historic documents, news sources, and other materials becoming available, we are faced with the question of how to choose just a few, as well as the question of whether new resources are better than ones to which we currently subscribe. In other words, are there titles we should cancel in the interest of newly available ones? And if so, which can we do without? In an ideal world, something to help with these complex decisions would be useful: something that takes into account university research, curricular needs, initiatives, and user experience, as well as information about current subscriptions and resources we want to consider.
I appreciate the work of Duncan and O'Gara, whom I heard speak at the 2015 Charleston Conference, and who write about decision making and the development of a collections-related rubric in a 2015 Performance Measurement and Metrics article, "Building holistic and agile collection development and assessment." Their work shows the challenges and complexity of gathering meaningful data to inform decision making, and it includes incorporating information about their university into the decision making process.
I can envision a 3D checklist or other framework for collections decision making across formats and subject areas. I welcome your thoughts about what might be on each of the dimensions, in other words, what criteria you find important. I appreciate the thoughts I have heard so far from Department of Research and Scholarship colleagues who have worked on evaluative criteria for ebooks, videos, and news sources (especially Michael Pasqualoni for the latter) and who have contributed thoughts via casual conversations. In one recent discussion with colleagues Bonnie Ryan and Lydia Wasylenko, we talked about a dream world in which we had more flexibility to try new resources beyond a typical trial period and with numerous vendors, in order to meet some short-term needs and explore more content in the long run. We realize that doing so creates challenges in terms of licensing and technical setup, and we would need a flexible spending pool of funds. Perhaps there are other creative solutions, whether related to rubrics or not. I welcome your ideas and thoughts about what is important to you in evaluating existing or possible collection additions.
If you want to read more about rubrics, you might try Sage Research Methods Online, and if you want to read more about decision making in collection development, you can find articles in Library, Information Science and Technology Abstracts and Library and Information Science Abstracts, among other sources.
Duncan, C. J., & O’Gara, G. M. (2015). Building holistic and agile collection development and assessment. Performance Measurement and Metrics, 16(1), 62–85.
“rubric, n. and adj.” OED Online. Oxford University Press, December 2016. Web. 12 February 2017.
“rubric, v.” OED Online. Oxford University Press, December 2016. Web. 12 February 2017.