Search (233 results, page 1 of 12)

  • theme_ss:"Formalerschließung"
  1. Byrd, J.: A cooperative cataloguing proposal for Slavic and East European languages and the languages of the former Soviet Union (1993) 0.03
    0.03331879 = product of:
      0.13327517 = sum of:
        0.117856435 = weight(_text_:union in 564) [ClassicSimilarity], result of:
          0.117856435 = score(doc=564,freq=4.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.6296408 = fieldWeight in 564, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0546875 = fieldNorm(doc=564)
        0.015418734 = product of:
          0.030837469 = sum of:
            0.030837469 = weight(_text_:22 in 564) [ClassicSimilarity], result of:
              0.030837469 = score(doc=564,freq=2.0), product of:
                0.113862485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032515142 = queryNorm
                0.2708308 = fieldWeight in 564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=564)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    This paper proposes, as a backlog reduction strategy, a national cooperative cataloging program among libraries with major collections in the Slavic and East European languages and in the languages of the former Soviet Union. The long-standing problem of cataloging backlogs is discussed, including a brief review of other approaches that have been used to address it. The proposal for a cooperative effort is outlined and some of the cataloging issues to be considered are discussed.
    Date
    12. 1.2007 13:22:35
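    The scoring tree above is Lucene's ClassicSimilarity (TF-IDF) "explain" output. A small Python sketch reproduces result 1's headline score purely from the constants in that tree (every number is copied from the explain output; nothing is recomputed from an index):

    import math

    # Reproduce the ClassicSimilarity breakdown shown for result 1 (doc 564).
    def term_score(freq, idf, query_norm, field_norm):
        tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
        query_weight = idf * query_norm       # idf(t) * queryNorm
        field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
        return query_weight * field_weight

    query_norm = 0.032515142
    field_norm = 0.0546875  # fieldNorm(doc=564)

    union = term_score(4.0, 5.756716, query_norm, field_norm)  # ~0.1178564
    t22 = term_score(2.0, 3.5018296, query_norm, field_norm)   # ~0.0308375

    # "_text_:22" sits one clause deeper, so it carries its own coord(1/2);
    # the outer coord(2/8) rescales for 2 of 8 query clauses matching.
    total = (union + t22 * 0.5) * 0.25
    print(f"{total:.8f}")  # ~0.03331879, the headline score rounded to 0.03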
  2. Lundy, M.W.: Evidence of application of the DCRB core standard in WorldCat and RLIN (2006) 0.03
    0.028558964 = product of:
      0.114235856 = sum of:
        0.1010198 = weight(_text_:union in 1087) [ClassicSimilarity], result of:
          0.1010198 = score(doc=1087,freq=4.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.5396921 = fieldWeight in 1087, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.046875 = fieldNorm(doc=1087)
        0.013216058 = product of:
          0.026432116 = sum of:
            0.026432116 = weight(_text_:22 in 1087) [ClassicSimilarity], result of:
              0.026432116 = score(doc=1087,freq=2.0), product of:
                0.113862485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032515142 = queryNorm
                0.23214069 = fieldWeight in 1087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1087)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The Core Standard for Rare Books, known as the DCRB Core standard, was approved by the Program for Cooperative Cataloging for use beginning in January 1999. Comparable to the core standards for other types of materials, the DCRB Core standard provides requirements for an intermediate level of bibliographic description for the cataloging of rare books. While the Core Standard for Books seems to have found a place in general cataloging practice, the DCRB Core standard appears to have met with resistance among rare book cataloging practitioners. This study investigates the extent to which such resistance exists by examining all of the DCRB Core records in the OCLC (Online Computer Library Center) Online Union Catalog (WorldCat) and the Research Libraries Group Union Catalog (RLIN) databases that were created during the standard's first five years. The study analyzes the content of the records for adherence to the standard and investigates the ways in which the flexibility of the standard and catalogers' judgment augmented many records with more than the mandatory elements of description and access.
    Date
    10. 9.2000 17:38:22
  3. Haynes, K.J.M.; Saye, J.D.; Kaid, L.L.: Cataloging collection-level records for archival video and audio recordings (1993) 0.02
    0.024688955 = product of:
      0.09875582 = sum of:
        0.08333708 = weight(_text_:union in 584) [ClassicSimilarity], result of:
          0.08333708 = score(doc=584,freq=2.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.44522327 = fieldWeight in 584, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0546875 = fieldNorm(doc=584)
        0.015418734 = product of:
          0.030837469 = sum of:
            0.030837469 = weight(_text_:22 in 584) [ClassicSimilarity], result of:
              0.030837469 = score(doc=584,freq=2.0), product of:
                0.113862485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032515142 = queryNorm
                0.2708308 = fieldWeight in 584, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=584)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Describes a project to create a bibliographic control system for archival-quality video and audio recordings of political commercials. The project objectives were: to design a local computer-searchable database; to prepare item-level records for the local database; and to prepare collection-level records for the OCLC Online Union Catalog. The collection-level records are intended to alert scholars, researchers, and other potential users to the existence of the archive and to direct them to it for more powerful item-level searching in the local database. Some of the cataloguing problems discussed are: the choice of cataloguing tools and MARC formats; the organization of the collections around the political candidate; and name authority control.
    Date
    12. 1.2007 14:43:22
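    The project's two-tier design (a single collection-level record in the union catalog pointing researchers to item-level records in a local database) can be sketched as a simple data structure. A minimal illustration, with hypothetical field names rather than the project's actual schema:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ItemRecord:
        item_id: str
        candidate: str   # the collections were organized around the candidate
        year: int
        title: str

    @dataclass
    class CollectionRecord:
        collection_title: str
        repository: str  # directs users to the archive holding the items
        scope_note: str
        items: List[ItemRecord] = field(default_factory=list)

    archive = CollectionRecord(
        collection_title="Political commercials: archival video and audio",
        repository="Local archive; item-level search in the local database",
        scope_note="Archival-quality recordings of political commercials",
    )
    archive.items.append(
        ItemRecord("PC-0001", "Candidate A", 1992, "30-second television spot"))
    print(len(archive.items), "item(s) under:", archive.collection_title)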
  4. Patton, G.: Local creation / global use : bibliographic data in the international arena (2000) 0.02
    0.02116196 = product of:
      0.08464784 = sum of:
        0.071431786 = weight(_text_:union in 183) [ClassicSimilarity], result of:
          0.071431786 = score(doc=183,freq=2.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.38161996 = fieldWeight in 183, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.046875 = fieldNorm(doc=183)
        0.013216058 = product of:
          0.026432116 = sum of:
            0.026432116 = weight(_text_:22 in 183) [ClassicSimilarity], result of:
              0.026432116 = score(doc=183,freq=2.0), product of:
                0.113862485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032515142 = queryNorm
                0.23214069 = fieldWeight in 183, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=183)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    OCLC has grown from the original group of Ohio academic libraries to 27,000 libraries located in North America, Europe, Asia, Latin America, and South Africa. Each of the records in WorldCat (the OCLC Online Union Catalog) is a local creation that is available for use across the globe for different purposes. Common issues that must be faced with the expansion of a bibliographic utility include cataloging standards, subject access in languages appropriate to the user, local needs versus global usefulness, and character sets. Progress has been made with the cooperative creation of an international name authority file and the uniform application of ISBD principles. A method of linking various subject vocabularies and an improved infrastructure of MARC formats and character sets are needed. Librarians need new automated tools to provide preliminary access to data available in electronic form and to assist them in organizing and storing that data.
    Date
    10. 9.2000 17:38:22
  5. Hoffmann, L.; Schmidt, R.M.: The cataloging of electronic serials in the union catalog of the North-Rhine Westphalian library network (1999) 0.02
    0.017857946 = product of:
      0.14286357 = sum of:
        0.14286357 = weight(_text_:union in 6078) [ClassicSimilarity], result of:
          0.14286357 = score(doc=6078,freq=2.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.7632399 = fieldWeight in 6078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.09375 = fieldNorm(doc=6078)
      0.125 = coord(1/8)
    
  6. Savoy, J.: Estimating the probability of an authorship attribution (2016) 0.02
    0.01763497 = product of:
      0.07053988 = sum of:
        0.059526492 = weight(_text_:union in 2937) [ClassicSimilarity], result of:
          0.059526492 = score(doc=2937,freq=2.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.31801665 = fieldWeight in 2937, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2937)
        0.011013382 = product of:
          0.022026764 = sum of:
            0.022026764 = weight(_text_:22 in 2937) [ClassicSimilarity], result of:
              0.022026764 = score(doc=2937,freq=2.0), product of:
                0.113862485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032515142 = queryNorm
                0.19345059 = fieldWeight in 2937, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2937)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    In authorship attribution, various distance-based metrics have been proposed to determine the most probable author of a disputed text. In this paradigm, a distance is computed between each author profile and the query text. These values are then employed only to rank the possible authors. In this article, we analyze their distribution and show that we can model it as a mixture of 2 Beta distributions. Based on this finding, we demonstrate how we can derive a more accurate probability that the closest author is, in fact, the real author. To evaluate this approach, we have chosen 4 authorship attribution methods (Burrows' Delta, Kullback-Leibler divergence, Labbé's intertextual distance, and the naïve Bayes). As the first test collection, we have downloaded 224 State of the Union addresses (from 1790 to 2014) delivered by 41 U.S. presidents. The second test collection is formed by the Federalist Papers. The evaluations indicate that the accuracy rate of some authorship decisions can be improved. The suggested method can signal that the proposed assignment should be interpreted as possible, without strong certainty. Being able to quantify the certainty associated with an authorship decision can be a useful component when important decisions must be taken.
    Date
    7. 5.2016 21:22:27
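    A sketch of the probability step Savoy's abstract describes: rescaled profile-to-text distances are modeled as a two-component Beta mixture, and Bayes' rule converts the smallest distance into a probability that the nearest profile is the real author. The mixture weights and shape parameters below are invented for illustration; the paper estimates them from the data:

    from scipy import stats

    # Hypothetical fitted mixture: component 0 = true author, 1 = others.
    w0, a0, b0 = 0.15, 2.0, 8.0   # true-author distances cluster near 0
    w1, a1, b1 = 0.85, 6.0, 3.0   # distances to other authors sit farther away

    def p_true_author(d):
        """Posterior probability that distance d came from the true author."""
        f0 = w0 * stats.beta.pdf(d, a0, b0)
        f1 = w1 * stats.beta.pdf(d, a1, b1)
        return f0 / (f0 + f1)

    for d in (0.10, 0.35, 0.60):
        print(f"distance {d:.2f} -> P(true author) = {p_true_author(d):.3f}")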
  7. Lynch, C.A.: Building the infrastructure of resource sharing : union catalogs, distributed search, and cross database linkage (1997) 0.02
    0.0154654365 = product of:
      0.12372349 = sum of:
        0.12372349 = weight(_text_:union in 1506) [ClassicSimilarity], result of:
          0.12372349 = score(doc=1506,freq=6.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.6609852 = fieldWeight in 1506, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.046875 = fieldNorm(doc=1506)
      0.125 = coord(1/8)
    
    Abstract
    Effective resource sharing presupposes an infrastructure which permits users to locate materials of interest in both print and electronic formats. Two approaches to providing this are union catalogues and distributed search systems based on Z39.50, the computer-to-computer information retrieval protocol. The advantages and limitations of each approach are considered, paying particular attention to a realistic assessment of Z39.50 implementations. Argues that the union catalogue is far from obsolete and that the two approaches should be considered complementary rather than competitive. Technologies to create links between the bibliographic apparatus of catalogues and abstracting and indexing databases and primary content in electronic form, such as the new Serial Item and Contribution Identifier (SICI) standard, are also discussed as key elements in the infrastructure to support resource sharing
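    The contrast Lynch draws, one pre-merged union catalogue versus Z39.50-style distributed search that broadcasts a query and merges results on the fly, reduces to a toy. A sketch with stand-in catalogue data and a plain function in place of a real Z39.50 client:

    import concurrent.futures as cf

    CATALOGUES = {
        "lib_a": [{"id": "a1", "title": "Union catalogues today"}],
        "lib_b": [{"id": "b7", "title": "Z39.50 in practice"},
                  {"id": "b9", "title": "Union catalogues today"}],
    }

    def search_one(library, query):
        # In a real system this would be a network call to the library's server.
        return [(library, rec) for rec in CATALOGUES[library]
                if query.lower() in rec["title"].lower()]

    def broadcast_search(query):
        # Query every catalogue in parallel, then merge client-side.
        with cf.ThreadPoolExecutor() as pool:
            parts = pool.map(lambda lib: search_one(lib, query), CATALOGUES)
        return [hit for part in parts for hit in part]

    print(broadcast_search("union"))  # hits from both libraries, merged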
  8. Wakeling, S.; Clough, P.; Connaway, L.S.; Sen, B.; Tomás, D.: Users and uses of a global union catalog : a mixed-methods study of WorldCat.org (2017) 0.01
    0.014881623 = product of:
      0.119052984 = sum of:
        0.119052984 = weight(_text_:union in 3794) [ClassicSimilarity], result of:
          0.119052984 = score(doc=3794,freq=8.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.6360333 = fieldWeight in 3794, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3794)
      0.125 = coord(1/8)
    
    Abstract
    This paper presents the first large-scale investigation of the users and uses of WorldCat.org, the world's largest bibliographic database and global union catalog. Using a mixed-methods approach involving focus group interviews with 120 participants, an online survey with 2,918 responses, and an analysis of transaction logs of approximately 15 million sessions from WorldCat.org, the study provides a new understanding of the context for global union catalog use. We find that WorldCat.org is accessed by a diverse population, with the three primary user groups being librarians, students, and academics. Use of the system is found to fall within three broad types of work-task (professional, academic, and leisure), and we also present an emergent taxonomy of search tasks that encompass known-item, unknown-item, and institutional information searches. Our results support the notion that union catalogs are primarily used for known-item searches, although the volume of traffic to WorldCat.org means that unknown-item searches nonetheless represent an estimated 250,000 sessions per month. Search engine referrals account for almost half of all traffic, but although WorldCat.org effectively connects users referred from institutional library catalogs to other libraries holding a sought item, users arriving from a search engine are less likely to connect to a library.
  9. Buckland, M.K.; Butler, M.H.; Norgard, B.A.; Plaunt, C.: Union records and dossiers : extended bibliographic information objects (1994) 0.01
    0.014732054 = product of:
      0.117856435 = sum of:
        0.117856435 = weight(_text_:union in 3028) [ClassicSimilarity], result of:
          0.117856435 = score(doc=3028,freq=4.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.6296408 = fieldWeight in 3028, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3028)
      0.125 = coord(1/8)
    
    Abstract
    The growing number and sophistication of online bibliographic and network-based information systems are starting to blur the once clear boundaries that separated print documents. Two concepts emerge as a consequence of these developments: first, the 'union record', an entity which combines multiple catalog records for a single bibliographic item into an extended information object; and second, an information 'dossier', a hypertext-like information object built by linking several distinct but related bibliographic entities
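    The two objects the abstract names lend themselves to a small data-structure sketch: a 'union record' folding several catalog records for one item into an extended object, and a 'dossier' linking related records hypertext-style. The class shapes and the naive merge rule are illustrative assumptions, not Buckland et al.'s actual design:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class UnionRecord:
        item_key: str                  # identifies the bibliographic item
        sources: Dict[str, dict] = field(default_factory=dict)

        def add(self, catalog: str, record: dict):
            self.sources[catalog] = record

        def merged_view(self) -> dict:
            view: dict = {}
            for record in self.sources.values():
                view.update(record)    # naive merge; later sources win
            return view

    @dataclass
    class Dossier:
        topic: str
        links: List[UnionRecord] = field(default_factory=list)

    ur = UnionRecord("item-001")
    ur.add("lib_a", {"title": "Example title", "author": "Author, A."})
    ur.add("lib_b", {"title": "Example title", "pub_year": 1994})
    print(ur.merged_view())  # one extended object built from two records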
  10. Riemer, J.J.: Adding 856 Fields to authority records : rationale and implications (1998) 0.01
    0.014732054 = product of:
      0.117856435 = sum of:
        0.117856435 = weight(_text_:union in 3715) [ClassicSimilarity], result of:
          0.117856435 = score(doc=3715,freq=4.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.6296408 = fieldWeight in 3715, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3715)
      0.125 = coord(1/8)
    
    Abstract
    Discusses ways of applying MARC field 856 (Electronic Location and Access) to authority records in online union catalogues. In principle, each catalogue site location can be treated as the electronic record of the work concerned, and MARC field 856 can then refer to this location as if it were referring to the location of a primary record. Although URLs may become outdated, the fact that they are located in specifically defined MARC fields makes the data they contain amenable to the same link maintenance software as is used for the electronic records themselves. Includes practical examples of typical union catalogue records incorporating MARC field 856
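    A minimal sketch of the practice described here, using the pymarc library to attach an 856 (Electronic Location and Access) field to an authority-style record. The subfield-list syntax assumes pymarc 4.x (pymarc 5 switched to Subfield objects), and the heading and URL are placeholders:

    from pymarc import Record, Field

    record = Record()
    # 1XX heading of the authority-style record (placeholder name).
    record.add_field(Field(tag="100", indicators=["1", " "],
                           subfields=["a", "Example, Author"]))
    # 856: electronic location associated with the heading (placeholder URL).
    record.add_field(Field(tag="856", indicators=["4", " "],
                           subfields=["u", "http://example.org/authority/123",
                                      "z", "Electronic resource for this heading"]))
    print(record)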
  11. Taylor, M.; Winstanley, B.: Bibliographic control of computer files : the feasibility of a union catalogue of computer files (1990) 0.01
    0.014732054 = product of:
      0.117856435 = sum of:
        0.117856435 = weight(_text_:union in 832) [ClassicSimilarity], result of:
          0.117856435 = score(doc=832,freq=4.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.6296408 = fieldWeight in 832, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0546875 = fieldNorm(doc=832)
      0.125 = coord(1/8)
    
    Abstract
    Describes a project based at the ESRC Data Archive, Essex University, to examine standards for cataloguing computer files and the feasibility of creating a union catalogue of computer files. A pilot scheme was set up to enable the MARC record output of the ESRC Data Archive to be merged with the software records of the NISS (National Information on Software and Services) database, which is available on the JANET network.
  12. Cousins, S.A.: Duplicate detection and record consolidation in large bibliographic databases : the COPAC database experience (1998) 0.01
    0.012627475 = product of:
      0.1010198 = sum of:
        0.1010198 = weight(_text_:union in 2833) [ClassicSimilarity], result of:
          0.1010198 = score(doc=2833,freq=4.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.5396921 = fieldWeight in 2833, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.046875 = fieldNorm(doc=2833)
      0.125 = coord(1/8)
    
    Abstract
    COPAC (CURL OPAC) is a union catalogue, based on records supplied by members of the Consortium of University Libraries (CURL), giving access to the online catalogue records of some of the largest academic research libraries in the UK and Ireland. Like all union catalogues, COPAC is supplied with multiple copies of records representing the same document in the contributing library catalogues. To reduce the level of duplication visible to the COPAC user, duplicate detection and record consolidation procedures have been developed. These result in the production of a single record for each document, representing the holdings of several libraries. Discusses the ways in which both the duplicate detection and record consolidation procedures are carried out, and problem areas encountered. Describes the general structure of these procedures, providing a model of the duplicate record handling mechanisms used in COPAC
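    The pipeline the abstract describes, match incoming records on a normalized key and consolidate each group into one record carrying all holdings, looks roughly like this in miniature. The key function is an illustrative stand-in; COPAC's actual matching rules are far more elaborate:

    import re
    from collections import defaultdict

    def match_key(rec):
        norm = lambda s: re.sub(r"[^a-z0-9]", "", str(s).lower())
        return (norm(rec["title"]), norm(rec["author"]), rec["year"])

    def consolidate(records):
        groups = defaultdict(list)
        for rec in records:
            groups[match_key(rec)].append(rec)   # duplicate detection
        merged = []
        for dupes in groups.values():            # record consolidation
            base = dict(dupes[0])
            base["holdings"] = sorted({r["library"] for r in dupes})
            merged.append(base)
        return merged

    incoming = [
        {"title": "Duplicate Detection!", "author": "Cousins, S.A.",
         "year": 1998, "library": "Leeds"},
        {"title": "duplicate detection", "author": "COUSINS, S. A.",
         "year": 1998, "library": "Glasgow"},
    ]
    print(consolidate(incoming))  # one record; holdings: ['Glasgow', 'Leeds']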
  13. Meir, D.D.; Lazinger, S.S.: Measuring the performance of a merging algorithm : mismatches, missed-matches, and overlap in Israel's union list (1998) 0.01
    0.012627475 = product of:
      0.1010198 = sum of:
        0.1010198 = weight(_text_:union in 3382) [ClassicSimilarity], result of:
          0.1010198 = score(doc=3382,freq=4.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.5396921 = fieldWeight in 3382, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.046875 = fieldNorm(doc=3382)
      0.125 = coord(1/8)
    
    Abstract
    Reports results of a survey, undertaken in 1996, to measure the performance of the merging algorithm used to generate the now defunct ALEPH ULM (Union List of Monographs) file. Results showed that although the algorithm created almost no mismatches that would have led to the loss of information, it had a greater proportion of missed matches than was anticipated, especially when matching Hebrew bibliographic records. Discusses the central issues inherent in automatic detection and merging of duplicate records, as well as the main methodologies for measuring the performance of merging algorithms. Recommendations include integrating testing procedures into the initial specifications for any future algorithms and deciding on a performance threshold that the algorithm must exceed in order to be put to use
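    The two error types the study measures translate directly into set comparisons, assuming ground truth is available as a set of true duplicate pairs: a 'mismatch' is a pair the algorithm merged that is not a true duplicate (information-losing), a 'missed match' is a true duplicate the algorithm left unmerged. A sketch with invented record IDs:

    def merge_errors(merged_pairs, true_pairs):
        merged, true = set(merged_pairs), set(true_pairs)
        mismatches = merged - true   # wrongly merged pairs
        missed = true - merged       # duplicates left unmerged
        return {
            "mismatch_rate": len(mismatches) / len(merged) if merged else 0.0,
            "missed_match_rate": len(missed) / len(true) if true else 0.0,
        }

    true_dupes = {("r1", "r2"), ("r3", "r4"), ("r5", "r6")}
    algorithm = {("r1", "r2"), ("r5", "r6"), ("r7", "r8")}
    print(merge_errors(algorithm, true_dupes))
    # {'mismatch_rate': 0.333..., 'missed_match_rate': 0.333...}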
  14. Ostermann, D.: US-Terrorfahnder verheddern sich im Daten-Dickicht (2004) 0.01
    0.012344478 = product of:
      0.04937791 = sum of:
        0.04166854 = weight(_text_:union in 2124) [ClassicSimilarity], result of:
          0.04166854 = score(doc=2124,freq=2.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.22261164 = fieldWeight in 2124, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2124)
        0.007709367 = product of:
          0.015418734 = sum of:
            0.015418734 = weight(_text_:22 in 2124) [ClassicSimilarity], result of:
              0.015418734 = score(doc=2124,freq=2.0), product of:
                0.113862485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032515142 = queryNorm
                0.1354154 = fieldWeight in 2124, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2124)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Content
    "So verständlich es ist, dass die USRegierung im Kampf gegen den Terror lieber einen Flug zu viel als zu wenig stoppt, so peinlich müsstees ihr sein, dass jetzt Versäumnisse offenbar werden, die Kritiker schon lange beklagen. Die US-Sicherheitsbehörden schlagen sich mit untauglichen Instrumenten herum: Noch immer gibt es in den USA keine zentrale Datenbank, in der alle Informationen über Terrorverdächtige zusammenfließen. Internationale Fluggesellschaften haben in der vergangenen Woche aus Sicherheitsgründen etliche Flüge in die USA storniert. Wenn sie Ziele in den Vereinigten Staaten anfliegen, müssen sie ihre Passagierlisten vorab an die US-Behörden weiterreichen. Der Europäischen Union hat Washington gerade erst das Recht abgepresst, die Daten unbescholtener Fluggäs- te jahrelang -zu speichern. Doch die Empfänger in den Vereinigten Staaten, sind offenbar nicht in der Lage, den Datenmüll von täglich mehreren hundert Flügen zu verarbeiten, Anders ist die Verwechslung eines Fünfjährigen mit einem mutmaßlichen tunesischen Extremisten an Bord einer Air-France-Maschine vorige Woche kaum zu erklären. Vor allem aber fehlt weiter eben jene zentrale Terror-Liste, mit der die Passagierdaten zuverlässig abgeglichen werden könnten. Stattdessen führt jede US-Behörde eigene "schwarze Listen". Das General Accounting Office (GAO), die Prüfbehörde des Kongresses, hat allein zwölf Karteien der US-Regierung gezählt, in der Terrorverdächtige erfasst werden. Der Geheimdienst CIA hat eine, der U.S. Marshals Service und das Pentagon. Das Außenministerium, zuständig für Einreisevisa, hat zwei Datenbanken. Die Bundespolizei FBI hat drei, das Ministerium für Heimatschutz vier, darunter die "No-fly"-Liste mit Personen, die nicht an Bord von Flugzeugen gelassen werden sollen. Doch wer etwa vom FBI dem terroristischen Umfeld zugerechnet wird, muss dort nicht registriert sein. Die vielen Karteien und die schlechte Koordination führte schon oft zu folgenschweren Pannen. So erhielten zwei der späteren Attentäter beim ersten Bombenanschlag auf das World Trade Center 1993 ein US-Visum, obwohl sie in einer "Watch"-Liste des FBI verzeichnet waren. Neun Jahre später kamen zwei der Attentäter des 11. September legal ins Land, obwohl das FBI nach ihnen Ausschau hielt. Auch hier fehlten die Namen auf der Liste der Einreisebehörden. Bürokratische und rechtliche Hindernisse sowie technische Schwierigkeiten haben die Einrichtung einerzentralen Kartei bislang verhindert. Unterschiedliche Schreibweisen etwa von arabischen Namen, abweichende Geburtsdaten oder die Verwendung von Aliasnamen haben sich als Hürden erwiesen. Auch ließ sich die Bush-Regierung mit dem Projekterstaunlich viel Zeit. Erst nachdem das GAO voriges Jahr die schleppenden Arbeiten kritisiert hatte, beschloss die Regierung laut Wall Street Journal im September die Einrichtung einer zentralen Informations-Sammelstelle, das Terrorist Screening Center (TSC). Das Zentrum soll demnach jetzt in die "Tipoff "-Liste des Außenministeriums die Informationen der elf anderen Datenbanken einbauen. Mit der Mammutaufgabe begonnen hat das TSC erst am ersten Dezember-drei Wochen bevor wegen der Angst vor neuen Flugzeuganschlägen die Warnstufe "Orange" ausgerufen wurde."
    Date
    5. 1.1997 9:39:22
  15. Viswanathan, C.G.: Cataloguing:theory & practice (2007) 0.01
    0.012344478 = product of:
      0.04937791 = sum of:
        0.04166854 = weight(_text_:union in 1475) [ClassicSimilarity], result of:
          0.04166854 = score(doc=1475,freq=2.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.22261164 = fieldWeight in 1475, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1475)
        0.007709367 = product of:
          0.015418734 = sum of:
            0.015418734 = weight(_text_:22 in 1475) [ClassicSimilarity], result of:
              0.015418734 = score(doc=1475,freq=2.0), product of:
                0.113862485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032515142 = queryNorm
                0.1354154 = fieldWeight in 1475, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1475)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Content
    Contents: 1. Library Catalogue: Its Nature, Functions, and Importance in a Library System 2. History of Modern Library Catalogues 3. Catalogue Codes: Origin, Growth and Development 4. Principles of Planning and Provision of the Library Catalogue 5. Catalogue Entries and their Functions in Achieving the Objectives of the Library Catalogue 6. Descriptive Cataloguing 7. Physical Forms of the Catalogue-I: Manual Catalogues 8. Physical Forms of the Catalogue-II: Computerised Catalogues 9. Varieties of Catalogues, their Scope and Functions 10. Subject Cataloguing 11. Cataloguing Department: Organization and Administration 12. Cost Analysis of Cataloguing Procedures and Suggested Economies 13. Co-operation and Centralization in Cataloguing 14. Union Catalogues and Subject Specialisation 15. Cataloguing of Special Material 16. Arrangement, Filing, Guiding of the Catalogue and Instructions for its Use 17. Education and Training of Cataloguers 18. Documentation: An Extension of Cataloguing and Classification Applied to Isolates 19. Catalogue Cards, Their Style and Reproduction Methods 20. Works of Personal Authors 21. Choice and Entry of Personal Names 22. Works of Corporate Authors 23. Legal Publications 24. Choice of Headings for Corporate Bodies 25. Works of Unknown Authorship: Entry under Uniform Titles 26. Access Points to Books and Meta-Books by AACR2 27. AACR2 1988 revision: Choice of Access Points to Name Headings and Uniform Titles 28. Added Entries Other Than Subject Entries 29. Subject Entries 30. Analytical Entries 31. Series Note and Series Entry 32. Contents, Notes and Annotation 33. References 34. Display of Entries. Appendix-I: Select Aids and Guides for the Cataloguer; Appendix-II: Definitions of Terms Commonly used in Cataloguing; Appendix-III: Cataloguing Examination: Select Questions; Appendix-IV: Implications of the Adoption of AACR2
  16. Verwer, R.: Waar is W.F. Hermans? : het bedrog van de OPC (1996) 0.01
    0.011905298 = product of:
      0.09524238 = sum of:
        0.09524238 = weight(_text_:union in 4919) [ClassicSimilarity], result of:
          0.09524238 = score(doc=4919,freq=2.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.5088266 = fieldWeight in 4919, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0625 = fieldNorm(doc=4919)
      0.125 = coord(1/8)
    
    Abstract
    A study of the online catalogues of major academic libraries and databases in the Netherlands shows considerable variation in the form of name used for the author W.F. Hermans. The problem lies in a lack of authority control over the headings used in Pica, the Dutch national union catalogue. Reactions from two cataloguers point to the difficulties of maintaining catalogues in the face of reduced funding and to the important role played by the Pica project in improving library services and reducing cataloguing backlogs
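    The underlying problem is the absence of authority control: one author, many heading forms. A crude normalization pass (illustrative only; real authority work relies on a maintained authority file, not string munging) shows several observed forms collapsing onto one candidate key:

    import unicodedata

    FORMS = ["Hermans, W.F.", "Hermans, Willem Frederik",
             "Hermans, W. F.", "HERMANS, W.F."]

    def rough_key(name):
        name = unicodedata.normalize("NFKD", name).lower()
        surname, _, rest = name.partition(",")
        first_initial = "".join(c for c in rest if c.isalpha())[:1]
        return (surname.strip(), first_initial)

    print({rough_key(f) for f in FORMS})  # all four forms -> one key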
  17. Reeb, R.: ¬A quantitative method for evaluating the quality of cataloging (1984) 0.01
    0.011905298 = product of:
      0.09524238 = sum of:
        0.09524238 = weight(_text_:union in 335) [ClassicSimilarity], result of:
          0.09524238 = score(doc=335,freq=2.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.5088266 = fieldWeight in 335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0625 = fieldNorm(doc=335)
      0.125 = coord(1/8)
    
    Abstract
    As a quality control measure, particularly within the context of a union database like OCLC, cataloging revision can eliminate many errors which might otherwise be input. Based on this revision process, a statistical method for evaluating the quality of a cataloger's work was developed. The rationale, the method of scoring, and the establishment of a standard are discussed.
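    The abstract gives no formula, but a revision-based score of the kind it describes could look like the following: weight each error found at revision by severity, convert to a 0-100 score, and compare against an acceptance standard. Every weight, the denominator, and the threshold are invented for illustration; Reeb's article derives its own:

    # Hypothetical severity weights per error found at revision.
    ERROR_WEIGHTS = {"access_point": 3, "description": 2, "typo": 1}
    THRESHOLD = 90  # hypothetical acceptance standard

    def quality_score(records_revised, errors):
        penalty = sum(ERROR_WEIGHTS[kind] for kind in errors)
        # Scale the penalty against an assumed 10-point budget per record.
        return max(0.0, 100.0 - 100.0 * penalty / (10 * records_revised))

    score = quality_score(records_revised=50,
                          errors=["typo", "description", "access_point", "typo"])
    print(f"{score:.1f} (meets standard: {score >= THRESHOLD})")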
  18. Markiw, M.: Establishing Slavic headings under AACR2 (1984) 0.01
    0.011905298 = product of:
      0.09524238 = sum of:
        0.09524238 = weight(_text_:union in 341) [ClassicSimilarity], result of:
          0.09524238 = score(doc=341,freq=2.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.5088266 = fieldWeight in 341, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0625 = fieldNorm(doc=341)
      0.125 = coord(1/8)
    
    Abstract
    This paper discusses some common problems which catalogers of Slavic materials may encounter in establishing Slavic headings under AACR2. Three categories of headings have been selected: geographical, corporate and personal names concerned with the Soviet Union. Emphasis is placed upon cases where a cataloger may apply the rules correctly, but still establish an incorrect heading. Sources of these types of problems are identified and suggestions are made for dealing with them.
  19. Graham, C.: Rethinking national policy for cataloging microform reproductions (1986) 0.01
    0.011905298 = product of:
      0.09524238 = sum of:
        0.09524238 = weight(_text_:union in 371) [ClassicSimilarity], result of:
          0.09524238 = score(doc=371,freq=2.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.5088266 = fieldWeight in 371, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0625 = fieldNorm(doc=371)
      0.125 = coord(1/8)
    
    Abstract
    Current national cataloging policy requires the creation of unique cataloging records for an original publication and each of its microfilm reproductions. Such redundant entries are difficult to decipher and expensive to produce and maintain. The case of serial publications is most urgent, especially due to the proliferation of preservation microfilming efforts and union list projects. The master record concept used in the United States Newspaper Project offers a viable alternative method. Librarians should lobby to have the single record approach adopted as national policy.
  20. Passini Moreno, F.; Bräscher, M.: FRBR - Functional Requirements for Bibliographic Records : un studio en un catálogo colectivo brasileño (2007) 0.01
    0.011905298 = product of:
      0.09524238 = sum of:
        0.09524238 = weight(_text_:union in 1125) [ClassicSimilarity], result of:
          0.09524238 = score(doc=1125,freq=2.0), product of:
            0.18718043 = queryWeight, product of:
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.032515142 = queryNorm
            0.5088266 = fieldWeight in 1125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.756716 = idf(docFreq=379, maxDocs=44218)
              0.0625 = fieldNorm(doc=1125)
      0.125 = coord(1/8)
    
    Footnote
    Original title: FRBR - Functional Requirements for Bibliographic Records: a study of a Brazilian union catalogue

Languages

  • e 182
  • d 42
  • i 3
  • nl 2
  • es 1
  • f 1
  • s 1

Types

  • a 213
  • b 15
  • m 15
  • s 9
  • el 3
  • ? 1
  • r 1
  • x 1