ALCTS Heads of Technical Services in Large Libraries

Midwinter Meeting (Philadelphia, PA)

January 29, 1999
9:30 a.m.-12:00 noon
Philadelphia Convention Center
Rooms 202A - 202B

Recorded by Judith Hopkins, University at Buffalo
ulcjh@acsu.buffalo.edu
http://www.acsu.buffalo.edu/~ulcjh

For the text of the Round Robin on issues of concern to these institutions, which was distributed via the Big Heads electronic discussion list in the weeks before the Philadelphia meeting, see http://www.acsu.buffalo.edu/~ulcjh/bh199rr.html

AGENDA

  1. Opening remarks, introductions
  2. OCLC's CORC (Cooperative Online Resource Cataloging)
  3. Special Projects Updates; discussion
    1. Berkeley -- Technical Services Training Program (Lee Leighton)
    2. Cornell, et al. Analysis of Technical Services Costs (Christian Boissonnas)
  4. Short Break
  5. Discussions for cooperative action
    1. LC ILS implementation impacts on us; what are we tracking? Examples: Z39.50 access to LC databases; impact on cataloging (etc.) output; pinyin conversion; PCC initiatives; authorities. What else?
    2. Wade-Giles to pinyin conversion
    3. Cataloging and/or analyzing aggregator databases
    4. Other items for cooperative action?
  6. Audience input on Technical Services issues and on the agenda for the Annual meeting
  7. Adjournment

MINUTES

Meeting Chairperson, Lee Leighton (Berkeley)

  1. Opening remarks, introductions

    Bob Wolven announced that Carol Mandel is leaving Columbia to become Director of the Library at NYU; on her departure he will become Deputy University Librarian.

    Lee Leighton announced he is resigning as Co-Chair. He nominated Judith Nadler to succeed him. She was elected unanimously.

  2. OCLC's CORC (Cooperative Online Resource Cataloging)

    http://www.oclc.org/oclc/research/projects/corc/index.htm
    CORC is a research project exploring the cooperative creation and sharing of metadata by libraries. Terry Noreault from the OCLC Office of Research will describe the project goals and will focus on discussion issues for us as a group representing the largest US libraries. We will also ask that he address any projected relationships between CORC and the CLIR Digital Library programs. Group discussion.

    Dr. Noreault said they hope to involve 100 libraries in the project; so far 12 have signed up and 30 more have expressed interest. Phase 1 will focus on such questions as: What rules and standards are involved in providing good descriptions? The second phase will study what uses can be made of the database.

    What are libraries currently doing to provide access to electronic resources? Either:

    • Nothing
    • Providing web page access
    • Loading records into the OPAC
    • Developing pathfinders to help patrons find the most valuable resources.

    There are several problems with pathfinders: duplication of effort as different libraries catalog the same resources, lack of currency as links and web page content change, and the sheer rapid growth in the number of electronic resources. He estimated that there are one to two million collections that need to be described. That would be beyond the capability of any single library, but with many libraries working cooperatively it could be done. He suggested a sort of web approval plan as a solution.

    We need tools that will help libraries build pathfinders that integrate web and OPAC access to resources. The solution that OCLC envisages is to build a cooperative database of high-quality records using two standards, MARC (AACR2) and Dublin Core, and to map between them. The richness that MARC provides is needed for full descriptions.
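
    A minimal sketch of the kind of crosswalk such mapping implies, assuming a simple tag-to-element lookup (the field choices below are illustrative only and are not the actual CORC mapping):

        # Hypothetical MARC-to-Dublin Core crosswalk; the field choices are
        # illustrative and do not represent the actual CORC mapping tables.
        MARC_TO_DC = {
            "245": "title",       # Title statement
            "100": "creator",     # Main entry -- personal name
            "260": "publisher",   # Publication, distribution, etc.
            "650": "subject",     # Subject added entry -- topical term
            "856": "identifier",  # Electronic location and access (URL)
        }

        def marc_to_dublin_core(marc_record):
            """Map a simple {tag: [values]} MARC-like record to Dublin Core elements."""
            dc_record = {}
            for tag, values in marc_record.items():
                element = MARC_TO_DC.get(tag)
                if element:
                    dc_record.setdefault(element, []).extend(values)
            return dc_record

        # Example: a two-field record for the CORC home page.
        print(marc_to_dublin_core({
            "245": ["CORC (Cooperative Online Resource Cataloging)"],
            "856": ["http://www.oclc.org/oclc/research/projects/corc/index.htm"],
        }))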

    The project includes provision of authority control and use of some automatic cataloging (automatic cataloging does not work very well yet, but its output can be a good starting point for human catalogers).

    What are the CORC objectives? To build pathfinders that will provide database access and to find ways to create records using metadata.

    All participants will get a copy of the full database. The starting point of the database will be the 170,000 records from Intercat and Netfirst. The project is scheduled to end January 2000.

    He then asked for questions.
    Bob Wolven (Columbia): What parameters do you see for building the CORC database that will be useful for collection development functions? The answer was: subject description and information on the nature of holdings.

    Duane Arenales (NLM) said she was of two minds about the value of this project; she wondered if it duplicates existing work of the Program for Cooperative Cataloging (PCC). The answer came partly in the form of a question: Can we get metadata providers to provide more information? We need to find out how durable these collections and records are. We need to reduce the cost of cataloging.

    Brian Schottlaender (UCLA) was less sanguine about getting content providers to provide descriptions. He noted that Dr. Noreault was describing access points primarily; providers are less interested in content standards than in coding standards. Many people at UCLA are creating pathfinders, but without OPAC records for those pathfinders many potential users are not getting access to them. He referred to the California Digital Library.

    John Lubans (Duke) asked if libraries will be able to take the CORC records and put them into their local catalogs; the answer was Yes.

    Beth Warner (University of Michigan) asked if libraries would be able to export MARC records from local catalogs to CORC. Again the answer was Yes; in fact, it is already being done.

    Sally Sinn (NAL) asked to what extent CORC would be involved in record sharing among utilities. Is OCLC looking at ways to improve the sharing and notification of new and updated records among participants? The answer consisted of two points:

    • A mechanism to notify people that records have changed.
    • A test of whether links are broken (a minimal sketch of such a check follows this list). How do we notify people of things other than records that have changed?
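
    A minimal sketch of the kind of link test mentioned above, assuming a plain HTTP HEAD request (this is illustrative only, not OCLC's actual link-maintenance mechanism):

        # Minimal link-liveness check using a plain HTTP HEAD request; illustrative
        # only, not OCLC's actual link-maintenance mechanism.
        import urllib.error
        import urllib.request

        def link_is_broken(url, timeout=10):
            """Return True if the URL cannot be retrieved successfully."""
            request = urllib.request.Request(url, method="HEAD")
            try:
                with urllib.request.urlopen(request, timeout=timeout) as response:
                    return response.status >= 400
            except (urllib.error.URLError, ValueError, OSError):
                return True

        if __name__ == "__main__":
            for url in ["http://www.oclc.org/", "http://example.invalid/"]:
                print(url, "broken" if link_is_broken(url) else "ok")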

    Mike Kaplan (Indiana University) said he was interested in e-journals and aggregators. He asked whether any e-journal creators had been asked to be involved. The answer was no, not in this phase, but they hope to do so in the second phase.

    Joan Swanekamp (Yale University) asked whether OCLC planned to turn this into a self-sustaining product after the CORC Project ends. In response, Dr. Noreault said OCLC will test whether CORC turns out to be useful and, if so, whether they can find a self-sustaining model so that OCLC can turn it into a product.

    Carol Diedrichs (Ohio State University) asked for comments on changing records. What is needed is a model by which we can all see the most up-to-date version of a record. Dr. Noreault said that they have to figure out how to move to a position where we have a shared catalog but also local versions.

    Judith Nadler (University of Chicago) said that CORC should be thought of as more of a learning project than a means of record creation.

    In preparation for CORC participation at Chicago the following goals had been set up:

    1. To learn and gain experience in creating records using multiple metadata schemas and assess the desirability of and mechanisms for importing records from CORC into local OPAC.
    2. To influence the development of standards.
    3. To determine what standards are best, determine how standards inter-operate with each other, and develop and document best practices for applying any particular standard.
    4. To develop cost models for non-MARC document description.
    5. To explore mechanisms for persistence of object identifiers (preferably multiple mechanisms) and assess issues of maintenance in a cooperative environment.
    6. To assess interoperability between MARC, the Dublin Core, and other schemas and the usefulness of different schemas for different users. Technical Services will work closely with Reference on this aspect.
    7. To test organizational alliances in a metadata environment -- alliance of catalogers with bibliographers, alliance of catalogers with access services, alliance of catalogers with systems, etc.

    A question that has come up so far during the Chicago project relates to the shareability of records. We are committed to sharing our records nationally, and records created in conjunction with NEH-funded projects have this as a requirement. Can we export CORC records to OCLC WorldCat? Can we export them to RLIN?

    Karen Hsu (New York Public Library): Do you have mechanisms to map records from WorldCat to CORC, and can you also do the opposite? The answer was that it probably would be possible.

  3. Special Projects Updates; discussion

    1. Berkeley -- Technical Services Training Program (Lee Leighton)

      All branches perform some technical services functions, though not necessarily cataloging. Since not all branch staff are equally trained, Berkeley introduced a program to bring everyone to the same level. They plan to cover monographic cataloging, serials cataloging, ordering, preparation of materials for storage, replacements, and public services interpretation of records. The program involves offering multiple sessions on each topic.

      Bob Wolven (Columbia University) asked whether the training documents are available on the Web. Leighton said that they will be; Berkeley is still at the stage of creating the documentation, but making it available on the Web is one of the program's goals.

      Jeffrey Horrell of Harvard University, itself a very decentralized environment, asked how Berkeley planned to avoid duplication of work. Leighton said they had reached no conclusions on that point; they lacked the space in Central Technical Services to move branch staff there, and a great deal of independent thinking goes on in the branches.

      Someone asked where Berkeley performs monographic ordering; the answer was CTS. To the question of where serials are received, the answer was mixed: for the more easily controlled titles, receipt is decentralized; the rest are received in CTS.

      Arno Kastner (New York University) commented that serials check-in is a problem in a decentralized system; there is a need to upgrade a staff person to serve as the point person for answering serials-related questions. Leighton noted that claiming at Berkeley is done in CTS.

      Judith Nadler (University of Chicago) asked whether Berkeley was working on deciding what work needs to be done. Leighton responded that they had done that first; it had been essential to determine which steps were really necessary, since Berkeley had lost 30% of its technical services staff.

      Barbara Stelmasik (University of Minnesota) asked who measures performance. Leighton answered that performance standards had been mounted on the Web; branch supervisors measure the performance of their staff, but often in consultation with CTS.

    2. Cornell, et al. Analysis of Technical Services Costs (Christian Boissonnas)

    Five libraries are working on this test project: Cornell, the University of California at Santa Barbara, Iowa State University, Vanderbilt, and the University of Missouri-St. Louis. At last summer's ALA Annual Conference Boissonnas had reported that the group had just about completed work on definitions of cost centers. They have now gathered data for three randomly chosen sample weeks. Since summer the participants have talked about how to incorporate overhead data and are making progress. Software to record and analyze data is still being developed; it is slightly behind schedule. The module dealing with employee data is mostly completed, and the one used for recording weekly data is in its second version. The module that generates the reports is currently being coded, and the group has not yet decided how many and what kinds of reports will be needed. Boissonnas did not know when the software will be ready for distribution or how it will be priced. He also said that, by summer, he will have analyzed Cornell's data for at least four sample weeks and begun to develop new baseline data for times and costs.

    Duane Arenales (NLM): One factor that may influence how long it takes different libraries to perform different functions is which Integrated Library System (ILS) they use. In her experience, commercial library systems do not make it easy to collect time and cost data; perhaps a generally agreed-upon set of desired data would be useful in discussions with vendors. To Boissonnas's response that he is more interested in analyzing time data, Arenales said that you need ways to capture the data before you can analyze it.

    Brian Schottlaender (UCLA) added that it was important to come up with common definitions, and that the group's having been able to do so was a major accomplishment.

    Judith Nadler (University of Chicago) said that the software will be a good management tool to help with such problems as determining which materials are the best candidates for outsourcing.

    Lee Leighton (Berkeley) asked when the information about the project will be available to other libraries. Boissonnas said that, once the five libraries had gone through it themselves, had input data, and had generated reports, he could see no reason why there could not be greater participation in the project, but this would really be up to the group to decide.

  4. Short Break

  5. Discussions for cooperative action

    1. LC ILS implementation impacts on us; what are we tracking?

      Examples: Z39.50 access to LC databases; impact on cataloging (etc.) output; pinyin conversion; PCC initiatives; authorities. What else? (Leighton)

      Beacher Wiggins (LC): There will be some negative impact on output through October 1, 1999, when the ILS is scheduled for full implementation, and beyond. LC is not sure what the learning curve will be after the October 1 implementation date and has told Congress that the numbers are not likely to start climbing until after October 1, 2000. LC will concentrate on current receipts rather than arrearages, except in some specialized areas such as CJK. The period to eliminate the print arrearages has been extended to Sept. 30, 2004, while the period for dealing with the Special Formats arrearages (approx. 18,000,000 items) has been extended to June 30, 2007.

      Bob Wolven of Columbia wondered what we could do to minimize the impact of LC's productivity drop on our libraries. Brian Schottlaender (UCLA) said that the drop in productivity for the Program for Cooperative Cataloging (PCC) matters to all other PCC libraries, as many of them are also implementing new systems; he is looking at a 20% drop in productivity at LC. Sally Sinn (NAL and PCC Chair) said that PCC is not forecasting any decline in productivity, based on members' estimates of bibliographic record contributions for FY 1999, but new libraries have been reluctant to join PCC while they are implementing a new ILS. Judith Nadler (University of Chicago) said we should not underestimate the drop in productivity caused by a new ILS.

      Bob Wolven asked if the LC drop in productivity would affect the distribution of MARC records to OCLC. Beacher Wiggins (LC) said that while the number of records available to be distributed will drop, he did not anticipate any decrease in the speed with which new records will be ready for distribution; LC's Cataloging Distribution Service will be able to pick up newly created records as of Cataloging Day 1 in June 1999. Duane Arenales commented that NLM anticipates a five-month delay in distributing records. Carton Rogers said that the University of Pennsylvania had gotten back to initial productivity levels but has a one-and-a-half-year backlog in loading records to its utility.

      Bob Wolven (Columbia) asked about the relationship of ILS implementation to LC's overseas offices. Wiggins responded that they are affected, but that the impact should be transparent to users of the records; the overseas offices are not getting client-server software loaded.

    2. Wade-Giles to pinyin conversion.

      LC's plans go only so far in helping our local catalogs; are there cooperative initiatives that we as a group should arrange to minimize impacts on our catalogs and authority files? Does someone want to take the lead on this issue for ALA Annual? (Leighton)

      Beacher Wiggins said that LC is looking at spring or summer 2000 for pinyin conversion, after the implementation of the ILS. Its impact on subject headings, authorities, and classification will affect Big Heads libraries. Any record with a Chinese heading will be affected. LC is considering asking the PCC members to cooperate in this work.

      Judith Nadler (University of Chicago) said she had not seen anything about OCLC's commitment to pinyin conversion; the Big Heads group should find out what OCLC's plans are and lobby for action.

      Mike Kaplan (Indiana) asked if LC had a feel for the number of headings involved in the conversion to pinyin. Wiggins said they do not have a grand total, but there are 916 name authorities that include Peking alone, with over 2,800 associated bibliographic records.

      Beth Warner (Michigan) asked about cooperation with libraries in China to adopt their standards. Wiggins responded that the pinyin romanization scheme is based on what Chinese libraries use. Otherwise there is little cooperative effort with Chinese libraries.

      Bob Wolven (Columbia) said that there would seem to be real potential for automated assistance in authority control conversion. Is anyone working with the vendors that provide automated authority control on the impact that conversion to pinyin will have on their systems and on their plans to deal with it? Lee Leighton said that Berkeley planned to adopt pinyin on day 1 of LC's implementation; they hope to have software that can search both Wade-Giles and pinyin romanizations.
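
      A minimal sketch of the kind of automated assistance under discussion, assuming a simple syllable-level lookup (the table below covers only a few illustrative syllables and ignores aspiration marks, umlauts, and ambiguous cases):

          # Syllable-level Wade-Giles to pinyin conversion sketch; the lookup table
          # is a tiny illustrative sample, not a complete or authoritative mapping.
          WADE_GILES_TO_PINYIN = {
              "pei": "bei",
              "ching": "jing",
              "mao": "mao",
              "tse": "ze",
              "tung": "dong",
          }

          def convert_heading(heading):
              """Convert a space- and hyphen-delimited Wade-Giles heading syllable by syllable.

              Syllables not found in the table are left unchanged; real authority
              work would flag them for review instead.
              """
              words = []
              for word in heading.lower().split():
                  syllables = [WADE_GILES_TO_PINYIN.get(s, s) for s in word.split("-")]
                  words.append("".join(syllables))
              return " ".join(words)

          print(convert_heading("Mao Tse-tung"))  # -> mao zedong
          print(convert_heading("Pei-ching"))     # -> beijing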

      Karen Smith-Yoshimura (RLG) said from the audience that RLG plans to produce a list of all headings that will be affected by conversion from Wade-Giles to pinyin; RLG will make the list available as an FTP file free to anyone who wants it. This weighted list of headings, with the number of times the headings have been used on bibliographic records, will help to determine which authorities to convert first.

      Mike Kaplan (Indiana) suggested that Big Heads continue discussing this topic at the ALA summer conference in New Orleans.

    3. Cataloging and/or analyzing aggregator databases

      Any practical scheme will require cooperation from our institutions. (Bob Wolven, Columbia)

      Bob Wolven started by asking what exactly is meant by aggregator databases; there is no uniformly perceived answer, even within Columbia. What do we want to accomplish by cataloging aggregator databases? Various models have been proposed, e.g., some form of the multiple-versions approach with all formats listed on one record, different levels of fullness of records, aggregating sub-components of these databases, etc.

      Christian Boissonnas (Cornell) said that why we want to do something is an important question to ask. It is not clear that anything is jelling yet at the directors' level, though it is jelling at the catalogers' level.

      Karen Calhoun (Cornell) said from the audience that Ruth Haas of Harvard chairs the CONSER Task Group on Access to Serials in Aggregators, which is working on the cataloging of aggregator databases. The Task Group has done two surveys; the first assessed the feasibility and ease of providing title-level access to titles in full-text databases such as UMI's Proquest, EbscoHost, Lexis-Nexis, etc. The second survey followed the first, assessing the need for access at the title level, the most desired forms of access, and willingness to cooperate. Sixty-two libraries responded, 95% of which license full-text aggregator databases. Access methods currently in use (respondents could choose all that applied) were:

      • 65% of respondents are using a single record
      • 60% are using web lists
      • 42% are using separate records

      As to the desired form of access (respondents had to choose one):

      • 31% favored single records
      • 29% favored separate records
      • 23% said None of the above (half of these said they chose None of the above because they wanted to choose more than one approach, with lists, single records, and separate records all playing a role)
      • 10% favored using lists on the web

      Combining those who favored either single records or separate records with the half of the None of the above responses who wanted multiple approaches (31% + 29% + roughly 11.5%) means that about 71% of the responding libraries wanted records in their OPACs so as to provide a single point of access to ALL library resources.

      There is a great deal of interest in collaborating to solve the problem of providing access to electronic journals within full-text databases. Over half of the responding libraries said they would be willing to work with CONSER libraries to create and maintain record sets, and over half were willing to work with vendors to create metadata for these titles. Almost 75% said they wanted to purchase sets of records.

      As a next step the Task Group will be meeting to discuss how it might work with vendors.

      Judith Nadler (University of Chicago) asked whether there was a way to assess how respondents came up with their choices. Karen said that the technical services respondents had been asked to check with their reference staff, and that many respondents provided rationales for their choices.

      Sally Sinn (NAL) commented that libraries don't have systems for dealing with multiple versions and so, pragmatically speaking, we are constrained by the tools we now have; people may say they prefer separate records because they lack workable systems for providing data relating to multiple formats on a single record.

      Duane Arenales (NLM) commented that no one has promoted multiple versions more, or for longer, than NLM, but NLM is a bit nervous about electronic journals: how much they diverge from the print version and how often they change. How can one determine whether both versions are the same work?

      Beth Warner (Michigan) asked whether anyone had thought beyond description to how to show holdings. Karen Calhoun said from the audience that the surveys had not asked questions about maintenance but had gotten many comments on that topic. One suggestion was to prepare a list of desiderata to present to vendors; people are concerned. There will be more discussion of this topic at the Big Heads meeting in summer 1999.

    4. Other items for cooperative action?

      None were presented.

  6. Audience input on Technical Services issues and on the agenda for the Annual meeting. Audience comments were encouraged.

    Martin Joachim (Indiana) asked about changes in the AACR2 rules dealing with British titles of nobility. Brian Schottlaender (UCLA) said the proposals are a result of USMARC and UKMARC harmonization. One proposal is that, when headings are entered under a personal surname, titles of nobility not be added. CC:DA will be discussing the British proposals at its Saturday afternoon meeting.

    Mary Charles Lasater (Vanderbilt) asked if anyone was addressing the question of aggregator databases of a more monographic nature. No one had. According to Lasater, the vendor-provided records are MARC-like but there has been no adherence to AACR2 for form of headings. The result is that the records will require a horrendous maintenance job.

    Someone from the audience asked about intelligent cataloging agents and the application of programs against content to develop rules. (I did not understand the question. JH) Bob Wolven (Columbia) said that someone will be applying content analysis to records attached to full-text databases to provide subject analysis of some sort.

  7. The meeting was adjourned at 11:55 AM.