

Over the last couple of weeks, the RRC has been collecting information about how reference statistics are gathered and maintained around the UL system. Members of the collaborative have talked with librarians from nearly all reference service points and, with their help, have pulled together an inventory of the many different logs used to record reference exchanges. After completing this scan, the RRC has recognized the need for a system that collects consistent data across the Libraries. Alongside such a system would be a simple set of guidelines to standardize, for example, what counts as a “reference” question versus a “directional” question.

At our meeting on June 20th we talked almost exclusively about the “data elements” we should, or would like to, collect that will allow us to 1) report to ARL, and 2) evaluate our reference service. Reporting to ARL requires very little data collection—noting the occurrence of a reference question and distinguishing reference questions from directional ones. Such data do not, however, provide any evaluation of reference service beyond tracking major trends.

In order to truly investigate the value of reference service, including its quality, its success with patrons, and its cost-effectiveness, we need more data about reference activities. Some of these data will come from a user needs analysis, which the RRC will conduct. And some should come from more in-depth and consistent capture of the nature of the questions patrons are asking, the status and affiliation of our patrons, what referrals are made, and so on.

So our challenge, at this point, is to ask: what kind of tool can we design to collect data about our reference service in a standardized and effective way? What data elements shall we collect? Shall we collect them all the time? Shall we require all service points to collect these elements?

This conversation will be ongoing, but here are a few concepts that have emerged in our discussion:

1. The system should be web-based, so it can be accessed from anywhere.

2. The system should be simple. It would, ideally, be designed to have many elements pre-populated on a screen, so as not to impose an undue recording burden.

3. There should be some basic required data elements, such as date, service point, question type (reference or directional), and duration of the exchange.

4. The system should be flexible. There should be an option to collect more than the basic elements, allowing each unit to capture whatever unique information it needs. These fuller data elements might become required several times throughout the year for sampling purposes.
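To make the ideas in points 3 and 4 concrete, here is a minimal sketch of what one logged exchange might look like, with a small set of required fields plus a free-form slot for unit-specific extras. All field names here are illustrative assumptions, not the committee's final element list.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ReferenceExchange:
    # Basic required elements (point 3): hypothetical names
    timestamp: datetime
    service_point: str
    question_type: str        # "reference" or "directional"
    duration_minutes: int
    # Optional unit-specific elements (point 4): each unit
    # decides what, if anything, goes in here
    extras: dict = field(default_factory=dict)

# Example: a quick directional question logged at an imagined desk
entry = ReferenceExchange(
    timestamp=datetime(2007, 6, 20, 14, 30),
    service_point="Main Desk",
    question_type="directional",
    duration_minutes=2,
    extras={"patron_status": "undergraduate"},  # unit-defined field
)
```

Keeping the required core small while pushing everything else into an optional, unit-defined section is one way to balance consistency across service points (for ARL reporting) with local flexibility.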

We welcome comments via this blog. Please add information or suggestions!


Knowing more about the kinds of questions being asked would be valuable for collection development, website improvements, and staff training.

We urge collecting data by subject categories or types of questions, similar to research pertaining to chat and email reference.