Calling CABs: Obtaining 3,000 required and recommended readings each semester

As part of aligning our book collection with the teaching and learning needs of the colleges, the Claremont Colleges Library launched Course Adopted Books (CABs), a service designed to give students improved access to the approximately 3,000 required and recommended readings assigned each semester.

Using the list of books that faculty request from the campus bookstore, the Library ensures that it owns a copy of every course reading and that the copy is available to students.

If a title is not already owned, we purchase it. We also move already-held copies to a course reserves loan rule, ensuring that they cannot be sent out on resource-sharing requests from other libraries or checked out for the entire semester. Our goal is to have 80–100% of the books that faculty requested from the bookstore available in the library by the first week of classes.

The CABs program is very popular with students. The challenge is the staff time it demands for gathering the information and de-duping against the collection.

To reduce that burden and expedite the process, we created "cabbie," a Python script. Cabbie de-dupes the bookstore list and then passes its ISBNs (including acceptable alternate versions) to the WorldCat Search API to check our holdings. The Search API enables FRBR grouping by default, which works well for these alternates.
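
A minimal sketch of that step, assuming the classic WorldCat Search API SRU endpoint, a wskey credential, and our OCLC symbol; the index names (srw.bn for ISBN, srw.li for holding library) and the input file layout are illustrative assumptions rather than cabbie's exact code.

```python
import requests
import xml.etree.ElementTree as ET

WSKEY = "YOUR_WORLDCAT_WSKEY"   # assumption: a WorldCat Search API key
SRU = "http://www.worldcat.org/webservices/catalog/search/sru"
OUR_SYMBOL = "HDC"              # assumption: the library's OCLC holdings symbol

def dedupe_isbns(path):
    """Read the bookstore export (one ISBN per line) and drop duplicates, keeping order."""
    seen, unique = set(), []
    with open(path) as fh:
        for line in fh:
            isbn = line.strip().replace("-", "")
            if isbn and isbn not in seen:
                seen.add(isbn)
                unique.append(isbn)
    return unique

def held_by_us(isbn):
    """Return True if WorldCat reports a record for this ISBN held under our symbol.

    frbrGrouping is on by default, so grouped alternate versions can also surface.
    """
    query = 'srw.bn all "{}" and srw.li all "{}"'.format(isbn, OUR_SYMBOL)
    resp = requests.get(SRU, params={"query": query, "wskey": WSKEY})
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    hits = root.find(".//{http://www.loc.gov/zing/srw/}numberOfRecords")
    return hits is not None and int(hits.text) > 0
```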

At times an ISBN match is not sufficient; perhaps our copy has no ISBN at all. We may hold copies of literary works that predate the ISBN system, and a 1920 printing of Shakespeare is still quite valid! We want to know about those copies so we can evaluate them. Cabbie therefore re-searches anything that fails the ISBN match, using a partial title and partial author match instead. Once again, the API provides an easy mechanism.
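
Continuing the sketch above, a hypothetical fallback can reuse the same SRU endpoint with title and author indexes in place of the ISBN; the srw.ti and srw.au index names and the surname heuristic are assumptions for illustration, not necessarily cabbie's exact logic.

```python
def fallback_search(title, author):
    """Retry a failed ISBN match using a partial title and the author's last name."""
    partial_title = " ".join(title.split()[:2])      # first two words of the title
    surname = author.split()[-1] if author else ""   # naive last-name guess
    query = 'srw.ti all "{}" and srw.au all "{}" and srw.li all "{}"'.format(
        partial_title, surname, OUR_SYMBOL)
    resp = requests.get(SRU, params={"query": query, "wskey": WSKEY})
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    hits = root.find(".//{http://www.loc.gov/zing/srw/}numberOfRecords")
    return hits is not None and int(hits.text) > 0
```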

Cabbie has contributed to the success of the CABs program by reducing the Acquisitions staff's workload. Its de-dupe typically shrinks the store list by 20%; of the remainder, it can usually find 60% via ISBN match and another 20% via title/author match. This part of the file processing is entirely automated and takes just a few minutes.

To recap, cabbie does the following (a short sketch of how the pieces fit together follows the list):

  1. Reads and de-dupes a file of ISBNs.
  2. Sends them to the WorldCat Search API. The returned values are divided into HDC matches and non-matches. 
  3. Writes direct ISBN matches to a file with additional data from WorldCat: the OCLC number, most-held ISBN, LC call number, title, author, publisher, and publication date. These are coded as “HDC.”
  4. For non-matching records, uses the first two words of the title and the author's last name to search again. 
  5. Writes these title/author matches to the same file, coded "HDC-ish" to provide an easy filter for our Acquisitions staff. The idea is that matches based on a partial title and author last name need evaluation by our staff.
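
As a rough illustration of how those steps fit together, the hypothetical process() below reuses held_by_us() and fallback_search() from the earlier sketches and writes a simplified output file; the real file also carries the WorldCat fields listed in step 3, and the bookstore_items structure is an assumption.

```python
import csv

def write_results(rows, path="cab_matches.csv"):
    """Write one row per matched title, coded HDC (direct ISBN match) or
    HDC-ish (partial title/author match that staff should review)."""
    fields = ["code", "isbn", "title", "author"]
    with open(path, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)

def process(bookstore_items):
    """bookstore_items: dicts with 'isbn', 'title', and 'author' from the de-duped store list."""
    rows = []
    for item in bookstore_items:
        if held_by_us(item["isbn"]):
            rows.append(dict(item, code="HDC"))
        elif fallback_search(item["title"], item["author"]):
            rows.append(dict(item, code="HDC-ish"))
    write_results(rows)
```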

Next steps:

  • Add a web-based GUI for file management.
  • Continue to optimize the title/author matching parameters.
  • Add logic for e-books, e.g., search for an e-book even if ISBN search finds a perfect match.
  • Incorporate the WorldShare Management Services (WMS) Collection Management API to obtain more granular holdings data.

Cabbie's code is open source [1], so please feel free to try it. Contributions (pull requests) are more than welcome!

References

[1] “skome/cabbie,” GitHub. [Online]. Available: https://github.com/skome/cabbie. [Accessed: 09-Dec-2016].

[2] “A Python module for interacting with the experimental OCLC Worldcat Live API,” Gist. [Online]. Available: https://gist.github.com/edsu/4730261. [Accessed: 09-Dec-2016].

  • Maria Savova and Sam Kome

    Claremont Colleges Library