Search (217 results, page 1 of 11)

  • theme_ss:"Formalerschließung"
  1. Devaul, H.; Diekema, A.R.; Ostwald, J.: Computer-assisted assignment of educational standards using natural language processing (2011) 0.06
    0.06048537 = product of:
      0.09072805 = sum of:
        0.07066846 = weight(_text_:query in 4199) [ClassicSimilarity], result of:
          0.07066846 = score(doc=4199,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.30809742 = fieldWeight in 4199, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.046875 = fieldNorm(doc=4199)
        0.020059591 = product of:
          0.040119182 = sum of:
            0.040119182 = weight(_text_:22 in 4199) [ClassicSimilarity], result of:
              0.040119182 = score(doc=4199,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.23214069 = fieldWeight in 4199, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4199)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
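The nested explanation above is standard Lucene "explain" output for the ClassicSimilarity (TF-IDF) scorer: tf = sqrt(freq), queryWeight = idf × queryNorm, and fieldWeight = tf × idf × fieldNorm, with the clause weight being queryWeight × fieldWeight. As a minimal sketch (plain Python, not part of the search service itself), the leaf values for the term "query" reproduce the weight shown:

```python
import math

# Leaf values copied from the explanation tree for result 1 (doc 4199, term "query")
freq = 2.0
idf = 4.6476326          # idf(docFreq=1151, maxDocs=44218)
query_norm = 0.049352113
field_norm = 0.046875

tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm       # 0.22937049 = queryWeight
field_weight = tf * idf * field_norm  # 0.30809742 = fieldWeight
weight = query_weight * field_weight  # 0.07066846 = weight(_text_:query in 4199)

print(round(weight, 8))
```

The same arithmetic applies to every weight(...) leaf in the result list below.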
    
    Abstract
    Educational standards are a central focus of the current educational system in the United States, underpinning educational practice, curriculum design, teacher professional development, and high-stakes testing and assessment. Digital library users have requested that this information be accessible in association with digital learning resources to support teaching and learning as well as accountability requirements. Providing this information is complex because of the variability and number of standards documents in use at the national, state, and local level. This article describes a cataloging tool that aids catalogers in the assignment of standards metadata to digital library resources, using natural language processing techniques. The research explores whether the standards suggestor service would suggest the same standards as a human, whether relevant standards are ranked appropriately in the result set, and whether the relevance of the suggested assignments improves when, in addition to resource content, metadata is included in the query to the cataloging tool. The article also discusses how this service might streamline the cataloging workflow.
    Date
    22. 1.2011 14:25:32
  2. Nistico, R.: Studio e indicizzazione delle dediche librarie (1998) 0.06
    0.05529356 = product of:
      0.16588068 = sum of:
        0.16588068 = sum of:
          0.11907496 = weight(_text_:page in 2823) [ClassicSimilarity], result of:
            0.11907496 = score(doc=2823,freq=2.0), product of:
              0.27565226 = queryWeight, product of:
                5.5854197 = idf(docFreq=450, maxDocs=44218)
                0.049352113 = queryNorm
              0.43197528 = fieldWeight in 2823, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.5854197 = idf(docFreq=450, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2823)
          0.046805713 = weight(_text_:22 in 2823) [ClassicSimilarity], result of:
            0.046805713 = score(doc=2823,freq=2.0), product of:
              0.1728227 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049352113 = queryNorm
              0.2708308 = fieldWeight in 2823, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2823)
      0.33333334 = coord(1/3)
    
    Abstract
    Book dedications by authors, often in verse form and appearing just before the title page, are one of the 6 elements described by the French scholar Genette as paratextual. For some reason dedications have failed to interest librarians, yet books containing them can be a valid object of bibliographic study, for the reasons that they carry special markings; are an example of a specific literary or semantic class; and reveal linguistic/stylistic features. Examines the history of literary dedications, citing examples by well-known writers, and suggests that cataloguing software should have a special field to record dedications.
    Date
    22. 2.1999 20:41:06
  3. Lundy, M.W.: Use and perception of the DCRB Core standard (2003) 0.05
    0.05040447 = product of:
      0.075606704 = sum of:
        0.05889038 = weight(_text_:query in 153) [ClassicSimilarity], result of:
          0.05889038 = score(doc=153,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.25674784 = fieldWeight in 153, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0390625 = fieldNorm(doc=153)
        0.016716326 = product of:
          0.03343265 = sum of:
            0.03343265 = weight(_text_:22 in 153) [ClassicSimilarity], result of:
              0.03343265 = score(doc=153,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.19345059 = fieldWeight in 153, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=153)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In January 1999, the Program for Cooperative Cataloging approved the core bibliographic standard for rare books, called the DCRB Core standard. Like the other core standards, the DCRB Core provides the framework within which catalogers can create bibliographic records that are less than full, but are as reliable as full-level records in description and authorized headings. In the three years since its approval, there is little evidence that the standard has been widely used. This study reports the results of a survey sent to forty-three participants who indicated in a preliminary query that they do use the DCRB Core or that they have made the decision not to use it. In the thirty-seven surveys that were returned, only about 16% of the respondents said they have used the standard to create bibliographic records for their rare books. The libraries that do not use the core standard find it inferior or lacking in a number of ways. Several of those libraries, however, are planning to use the standard in the future or are seriously planning to investigate using it. Such intent may indicate that the time is approaching when more libraries will find reasons to implement the standard. One impetus may come from the findings of a recent survey of the special collections departments of member libraries of the Association of Research Libraries that emphasize the size of the backlogs in those departments. If faster accessibility to specific portions of the backlogs would benefit users more than having full-level cataloging, application of the DCRB Core standard could facilitate reducing those backlogs.
    Date
    10. 9.2000 17:38:22
  4. Savoy, J.: Estimating the probability of an authorship attribution (2016) 0.05
    0.05040447 = product of:
      0.075606704 = sum of:
        0.05889038 = weight(_text_:query in 2937) [ClassicSimilarity], result of:
          0.05889038 = score(doc=2937,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.25674784 = fieldWeight in 2937, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2937)
        0.016716326 = product of:
          0.03343265 = sum of:
            0.03343265 = weight(_text_:22 in 2937) [ClassicSimilarity], result of:
              0.03343265 = score(doc=2937,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.19345059 = fieldWeight in 2937, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2937)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In authorship attribution, various distance-based metrics have been proposed to determine the most probable author of a disputed text. In this paradigm, a distance is computed between each author profile and the query text. These values are then employed only to rank the possible authors. In this article, we analyze their distribution and show that we can model it as a mixture of 2 Beta distributions. Based on this finding, we demonstrate how we can derive a more accurate probability that the closest author is, in fact, the real author. To evaluate this approach, we have chosen 4 authorship attribution methods (Burrows' Delta, Kullback-Leibler divergence, Labbé's intertextual distance, and the naïve Bayes). As the first test collection, we have downloaded 224 State of the Union addresses (from 1790 to 2014) delivered by 41 U.S. presidents. The second test collection is formed by the Federalist Papers. The evaluations indicate that the accuracy rate of some authorship decisions can be improved. The suggested method can signal that the proposed assignment should be interpreted as possible, without strong certainty. Being able to quantify the certainty associated with an authorship decision can be a useful component when important decisions must be taken.
    Date
    7. 5.2016 21:22:27
  5. Ya-Ning, C.; Hao-Ren, K.: FRBRoo-based approach to heterogeneous metadata integration (2013) 0.03
    0.03400038 = product of:
      0.10200114 = sum of:
        0.10200114 = weight(_text_:query in 1765) [ClassicSimilarity], result of:
          0.10200114 = score(doc=1765,freq=6.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.44470036 = fieldWeight in 1765, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1765)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - This paper seeks to adopt FRBRoo as an ontological approach to integrate heterogeneous metadata, and to transform a human-understandable format into a machine-understandable format for semantic query. Design/methodology/approach - Two use cases, one with museum artefacts and one with literary works, were used to illustrate how FRBRoo can be used to re-contextualize the semantics of elements and the semantic relationships embedded in those elements. The shared ontology was then RDFized and examples were explored to examine the feasibility of the proposed approach. Findings - FRBRoo can play a role as an interlingua aligning museum and library metadata to achieve heterogeneous metadata integration and semantic query without changing either of the original approaches to fit the other. Research limitations/implications - Exploration of more diverse use cases is required to further align the different approaches of museums and libraries using FRBRoo and make revisions. Practical implications - Solid evidence is provided for the use of FRBRoo in heterogeneous metadata integration and semantic query. Originality/value - This is the first study to elaborate how FRBRoo can play a role as a shared ontology to integrate the heterogeneous metadata generated by museums and libraries. This paper also shows how the proposed approach is distinct from the Dublin Core format crosswalk in re-contextualizing semantic meanings and their relationships, and further provides four new sub-types for mapping description language.
  6. Vellucci, S.L.: Options for organizing electronic resources : the coexistence of metadata (1997) 0.03
    0.027482178 = product of:
      0.08244653 = sum of:
        0.08244653 = weight(_text_:query in 6863) [ClassicSimilarity], result of:
          0.08244653 = score(doc=6863,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.35944697 = fieldWeight in 6863, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6863)
      0.33333334 = coord(1/3)
    
    Abstract
    At present, cataloguing of Internet resources takes place on 2 levels. At level 1, the description of resources is contained in local library catalogues, along with bibliographic surrogates for all other materials that the library accesses, based on AACR2/MARC systems. At level 2, Internet resources are organized independently of any library agency; these tools include separate catalogues of selected resources, subject browsing lists and robot-generated search tools, and focus exclusively on Internet resources. A 3rd level needs to be developed - a metacatalogue - whereby a user can identify specific library catalogues to include, along with other Internet databases, in a search query.
  7. RAK-NBM : Interpretationshilfe zu NBM 3b,3 (2000) 0.03
    0.025216486 = product of:
      0.075649455 = sum of:
        0.075649455 = product of:
          0.15129891 = sum of:
            0.15129891 = weight(_text_:22 in 4362) [ClassicSimilarity], result of:
              0.15129891 = score(doc=4362,freq=4.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.8754574 = fieldWeight in 4362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4362)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
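Entry 7 also shows the remaining pieces of the ClassicSimilarity formula: with freq=4 the tf factor becomes sqrt(4) = 2.0, and the final document score is the clause weight scaled by the coord factors shown in the tree (coord(1/2) = 0.5 and coord(1/3) = 0.33333334, i.e. the fraction of query clauses that matched). A minimal sketch of that arithmetic, reproducing the numbers above:

```python
import math

# Leaf values from the explanation tree for result 7 (doc 4362, term "22")
freq = 4.0
idf = 3.5018296          # idf(docFreq=3622, maxDocs=44218)
query_norm = 0.049352113
field_norm = 0.125

tf = math.sqrt(freq)                        # 2.0 = tf(freq=4.0)
field_weight = tf * idf * field_norm        # 0.8754574 = fieldWeight
weight = (idf * query_norm) * field_weight  # 0.15129891 = weight(_text_:22 in 4362)

# coord penalties: 1 of 2 inner clauses matched, then 1 of 3 top-level clauses
score = weight * (1 / 2) * (1 / 3)          # 0.025216486 = final score

print(round(score, 9))
```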
    
    Date
    22. 1.2000 19:22:27
  8. Salarelli, A.: Nella notte dove tutte le vacche sono nere qualcuno prova ad accendere un cerino (1996) 0.02
    0.023556154 = product of:
      0.07066846 = sum of:
        0.07066846 = weight(_text_:query in 5763) [ClassicSimilarity], result of:
          0.07066846 = score(doc=5763,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.30809742 = fieldWeight in 5763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.046875 = fieldNorm(doc=5763)
      0.33333334 = coord(1/3)
    
    Abstract
    Library science may well have an essential role to play in efficiently organising the huge amount of Internet information available in the various scientific disciplines. The basic problem is to develop a cataloguing theory sufficiently flexible to cope with the impact of an ever-changing store of network data. Such a theory would abandon the utopian idea of a 'catalogue of catalogues', seeking instead to match each specific user query to the most appropriate catalogue. Examines 2 important USA projects for cataloguing network resources: Digital Libraries Research (funded by the National Science Foundation), which uses a combination of search engines to retrieve net data; and the Internet Public Library. Lists the Management and Library Schools now on the WWW.
  9. Svenonius, E.; Molto, M.: Automatic derivation of name access points in cataloging (1990) 0.02
    0.022680946 = product of:
      0.06804284 = sum of:
        0.06804284 = product of:
          0.13608567 = sum of:
            0.13608567 = weight(_text_:page in 3569) [ClassicSimilarity], result of:
              0.13608567 = score(doc=3569,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.49368602 = fieldWeight in 3569, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3569)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports the results of research designed to explore the feasibility of automatically deriving name access points from machine readable title pages of English language monographs. Results show that approximately 88% of the access points selected by the Library of Congress or the National Library of Medicine could be automatically derived from title page data. These results have implications for the design of bibliographic standards and on-line catalogues.
  10. Taniguchi, S.: ¬A system for supporting evidence recording in bibliographic records (2006) 0.02
    0.02004731 = product of:
      0.060141932 = sum of:
        0.060141932 = product of:
          0.120283864 = sum of:
            0.120283864 = weight(_text_:page in 282) [ClassicSimilarity], result of:
              0.120283864 = score(doc=282,freq=4.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.4363609 = fieldWeight in 282, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=282)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Recording evidence for data values, in addition to the values themselves, in bibliographic records and descriptive metadata has been proposed in a previous study. Recorded evidence indicates why and how data values are recorded for elements. As a continuation of that study, this article first proposes a scenario in which a cataloger and a system interact with each other in recording evidence in bibliographic records for books, with the aim of minimizing costs and effort in recording evidence. Second, it reports on prototype system development in accordance with the scenario. The system (1) searches a string, corresponding to the data value entered by a cataloger or extracted from the Machine Readable Cataloging (MARC) record, within the scanned and optical character recognition (OCR)-converted title page and verso of the title page of an item being cataloged; (2) identifies the place where the string appears within the source of information; (3) identifies the procedure being used to form the value entered or recorded; and finally (4) displays the place and procedure identified for the data value as its candidate evidence. Third, this study reports on an experiment conducted to examine the system's performance. The results of the experiment show the usefulness of the system and the validity of the proposed scenario.
  11. Simpson, P.; Banach, S.: Finding the missing link : how cataloging bridges the gap between libraries and the Internet (1997) 0.02
    0.019845828 = product of:
      0.05953748 = sum of:
        0.05953748 = product of:
          0.11907496 = sum of:
            0.11907496 = weight(_text_:page in 735) [ClassicSimilarity], result of:
              0.11907496 = score(doc=735,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.43197528 = fieldWeight in 735, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=735)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Information from government sources is being added to the Internet at an ever-increasing rate. Describes how cataloguers at Pennsylvania State University are working with AACR2, OCLC's Internet cataloguing project (Intercat), and the creators of the Pennsylvania State Libraries' WWW home page to include both Internet sites and electronic publications in the library's online catalogue. Demonstrates the use of cataloguing records to show relationships between Internet resources and the printed materials that they supplement or replace.
  12. Chroust, D.Z.: Finding the missing date : the examples of German imprints without dates (1997) 0.02
    0.019845828 = product of:
      0.05953748 = sum of:
        0.05953748 = product of:
          0.11907496 = sum of:
            0.11907496 = weight(_text_:page in 365) [ClassicSimilarity], result of:
              0.11907496 = score(doc=365,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.43197528 = fieldWeight in 365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=365)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Cataloguers of monographs are accustomed to finding a publication, copyright or at least a printing date in an imprint on the title page or in a colophon. If no date is found in the 'prescribed source of information', rule 1.4F7 of AACR2 instructs cataloguers to 'supply an approximate date of publication'. Examines efficient ways of doing this, using examples from German publishing. In the 19th and early 20th centuries, German publishers often omitted the year of publication from their imprints. Methods and examples presented are drawn from cataloguing experience at Texas A&M University since 1992, when the library purchased several thousand German books of this period
  13. Bowman, J.H.: Sic catalog syndrome : title page transcription as barrier to retrieval (2001) 0.02
    0.019845828 = product of:
      0.05953748 = sum of:
        0.05953748 = product of:
          0.11907496 = sum of:
            0.11907496 = weight(_text_:page in 5421) [ClassicSimilarity], result of:
              0.11907496 = score(doc=5421,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.43197528 = fieldWeight in 5421, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5421)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  14. Leung, S.W.: MARC CIP records and MARC LC records : an evaluative study of their discrepancies (1983) 0.02
    0.019845828 = product of:
      0.05953748 = sum of:
        0.05953748 = product of:
          0.11907496 = sum of:
            0.11907496 = weight(_text_:page in 327) [ClassicSimilarity], result of:
              0.11907496 = score(doc=327,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.43197528 = fieldWeight in 327, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=327)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In the last ten years, Cataloging in Publication (CIP) records have gained increasing acceptance and use in libraries, especially for cataloging purposes. Nevertheless, there is a general perception that the accuracy of CIP records can be further improved. Because improvement is only possible with more concrete information identifying specific problem areas, this study is designed to provide catalogers and cataloging managers with more empirical data on the frequency and types of discrepancy between MARC CIP records and subsequent MARC LC records. This study differs from an earlier study, which involved CIP records that appeared on the verso of the title page of publications. In addition, this study will make some observations regarding more effective use of CIP records, primarily for cataloging purposes.
  15. Jeng, L.H.: ¬An expert system for determining title proper in descriptive cataloging : a conceptual model (1986) 0.02
    0.019845828 = product of:
      0.05953748 = sum of:
        0.05953748 = product of:
          0.11907496 = sum of:
            0.11907496 = weight(_text_:page in 375) [ClassicSimilarity], result of:
              0.11907496 = score(doc=375,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.43197528 = fieldWeight in 375, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=375)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The human process of determining bibliographic data from title pages of monographs is complex, yet systematic. This paper investigates the intellectual process involved, at the conceptual and logical levels, by proposing a model of an expert system for determining title proper as the first element of the first area in ISBD. It assumes that the title page of a monograph consists of more than one block of character or graphic representation. Each block has its physical and content characteristics and can be separated from other blocks by separators. Three categories of expert knowledge are identified, and the system model is discussed along with its individual system components. It applies the "list" concept for the system data structure and addresses the potentiality of this conceptual model.
  16. Anderson, B.: Expert systems for cataloging : will they accomplish tomorrow the cataloging of today? (1990) 0.02
    0.019845828 = product of:
      0.05953748 = sum of:
        0.05953748 = product of:
          0.11907496 = sum of:
            0.11907496 = weight(_text_:page in 481) [ClassicSimilarity], result of:
              0.11907496 = score(doc=481,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.43197528 = fieldWeight in 481, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=481)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The motivation of decreasing cataloging costs by minimizing the role of the professional librarian in the cataloging process has led to experiments in the application of expert systems to cataloging. Systems have been developed to accomplish specific elements of the cataloging process through human-machine interface or through automatic reading and interpretation of title pages. All of the systems developed thus far require a human cataloger to participate in, monitor, and/or complete the cataloging process. Furthermore, the expert systems developed for descriptive cataloging are based on the logic and rules of AACR2 and its dependence on title page information, neither of which may be relevant in their current form as cataloging comes to terms with electronic publishing and full-text retrieval of information.
  17. Forero, D.; Peterson, N.; Hamilton, A.: Building an institutional author search tool (2019) 0.02
    0.019845828 = product of:
      0.05953748 = sum of:
        0.05953748 = product of:
          0.11907496 = sum of:
            0.11907496 = weight(_text_:page in 5441) [ClassicSimilarity], result of:
              0.11907496 = score(doc=5441,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.43197528 = fieldWeight in 5441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5441)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Ability to collect time-specific lists of faculty publications has become increasingly important for academic departments. At OHSU, publication lists had been retrieved manually by a librarian who conducted literature searches in bibliographic databases. These searches were complicated and time-consuming, and the results were large and difficult to assess for accuracy. The OHSU library has built an open web page that allows novices to make very sophisticated institution-specific queries. The tool frees up library staff, provides users with an easy way of retrieving reliable local publication information from PubMed, and gives more sophisticated users an opportunity to modify the algorithm or dive into the data to better understand nuances from a strong jumping-off point.
  18. Ruiz-Perez, R.: Consequences of applying cataloguing codes for author entries to the Spanish National Library online catalogs (2001) 0.02
    0.019642275 = product of:
      0.058926824 = sum of:
        0.058926824 = product of:
          0.11785365 = sum of:
            0.11785365 = weight(_text_:page in 5435) [ClassicSimilarity], result of:
              0.11785365 = score(doc=5435,freq=6.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.42754465 = fieldWeight in 5435, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5435)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this empirical study of a sample of catalog records, I investigate the implications for information retrieval of the rules for choosing author access points in online catalogs. Aims: To obtain data that can be used to inform a revision of current cataloguing rules, and to propose more functional criteria aimed at improving the retrieval of information located on the basis of author names. Material and methods: A total of 838 records from the Biblioteca Nacional Española (Spanish National Library) were examined to analyze the use of authorities as access points. Authors were classified as creative or non-creative to facilitate the analysis. The variables investigated were author source location, potential author access points, actual entries used in the record, and loss of potential entry points. Results: A total of 3566 potential author access points were identified (mean of 4.25 per record). The title page yielded 57.3% of all potential access points, the table of contents yielded 33.5%, and other sources accounted for the remaining 9.1%. A total of 2125 potential authors were not used as access points in the records (overall loss of 59.5%). A total of 960 authors named on the title page were not used as entries (30.23% loss). In works with up to three authors per responsibility function, 24.8% of the authors were not used as entry points. In works with more than three authors, 75.2% of the potential access points were unused. Discussion and conclusions: A significant proportion of potential access points from the table of contents and the title page went unused. If the access points from these sources were used, author indexes would be more complete and accurate, and retrieval with online catalogs would be more efficient. I suggest that losses for creative authors were caused by neglect of the table of contents as a source of entries, strict application of the rule of three, and other specific factors. Losses for non-creative authors were caused by ambiguities and gaps in current cataloguing rules for choosing added author entries. The findings support the urgent need to revise cataloguing rules for author access points to make them more flexible, more practical, and more in line with actual responsibility functions and types of authorship.
  19. Carter, J.A.: PASSPORT/PRISM: authors and titles and MARC : oh my! (1993) 0.02
    0.017830748 = product of:
      0.053492244 = sum of:
        0.053492244 = product of:
          0.10698449 = sum of:
            0.10698449 = weight(_text_:22 in 527) [ClassicSimilarity], result of:
              0.10698449 = score(doc=527,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.61904186 = fieldWeight in 527, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=527)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    OCLC systems and services. 9(1993) no.3, S.20-22
  20. Madison, O.M.A.: ¬The role of the name main-entry heading in the online environment (1992) 0.02
    0.017830748 = product of:
      0.053492244 = sum of:
        0.053492244 = product of:
          0.10698449 = sum of:
            0.10698449 = weight(_text_:22 in 4397) [ClassicSimilarity], result of:
              0.10698449 = score(doc=4397,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.61904186 = fieldWeight in 4397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4397)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Serials librarian. 22(1992), S.371-391

Languages

  • e 175
  • d 37
  • i 3
  • f 1
  • s 1

Types

  • a 203
  • b 15
  • m 10
  • s 5
  • el 2
  • ? 1
  • x 1