Search (38 results, page 1 of 2)

  • theme_ss:"Formalerschließung"
  • type_ss:"a"
  • year_i:[2010 TO 2020}
  1. Baga, J.; Hoover, L.; Wolverton, R.E.: Online, practical, and free cataloging resources (2013) 0.07
    0.074862026 = product of:
      0.14972405 = sum of:
        0.13135578 = weight(_text_:graphic in 2603) [ClassicSimilarity], result of:
          0.13135578 = score(doc=2603,freq=2.0), product of:
            0.29924196 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.045191016 = queryNorm
            0.43896174 = fieldWeight in 2603, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.046875 = fieldNorm(doc=2603)
        0.018368276 = product of:
          0.03673655 = sum of:
            0.03673655 = weight(_text_:22 in 2603) [ClassicSimilarity], result of:
              0.03673655 = score(doc=2603,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.23214069 = fieldWeight in 2603, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2603)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
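    The indented tree above is Lucene's "explain" output for a ClassicSimilarity (TF-IDF) score. As a minimal sketch, assuming only the values printed in the tree, the top-ranked score can be reproduced like this:

```python
import math

# Sketch of Lucene ClassicSimilarity arithmetic, reproducing the explain
# tree for result 1 (doc 2603). All constants are read off the tree above.
query_norm = 0.045191016

def term_score(freq: float, idf: float, field_norm: float) -> float:
    query_weight = idf * query_norm                     # idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

graphic = term_score(freq=2.0, idf=6.6217136, field_norm=0.046875)  # ~0.13135578
t22 = term_score(freq=2.0, idf=3.5018296, field_norm=0.046875)      # ~0.03673655

# coord() scales by the fraction of query clauses matched: 1/2 inside the
# nested clause, 2/4 at the top level.
score = (graphic + t22 * 0.5) * 0.5
print(round(score, 9))  # ~0.074862026
```

    Each matched term contributes queryWeight times fieldWeight; the coord() factors then down-weight documents that match only some of the query clauses.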
    
    Abstract
    This comprehensive annotated webliography describes online cataloging resources that are free to use, currently updated, and of high quality. Its major aim is to assist catalogers who are new to the profession, unfamiliar with cataloging specific formats, or unable to access costly print and subscription resources. The annotated resources include general websites and webpages, databases, workshop presentations, streaming media, and local documentation. The scope of the webliography is limited to resources reflecting traditional cataloging practices using the Anglo-American Cataloguing Rules, 2nd edition; RDA: Resource Description and Access; and MAchine-Readable Cataloging (MARC) standards. Non-MARC metadata schemas such as Dublin Core are not covered. Most components of cataloging are represented in this webliography, such as authority control, classification, subject headings, and genre terms. Guidance is also provided for cataloging miscellaneous formats, including sound and video recordings, streaming media, e-books, video games, graphic novels, kits, rare materials, maps, serials, realia, government documents, and music.
    Date
    10. 9.2000 17:38:22
  2. Theise, A.: Possibilities for standardized cataloging of prints : the collection of engravings at the Hamburg State and University Library (2016) 0.07
    0.06567789 = product of:
      0.26271155 = sum of:
        0.26271155 = weight(_text_:graphic in 5125) [ClassicSimilarity], result of:
          0.26271155 = score(doc=5125,freq=8.0), product of:
            0.29924196 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.045191016 = queryNorm
            0.8779235 = fieldWeight in 5125, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.046875 = fieldNorm(doc=5125)
      0.25 = coord(1/4)
    
    Abstract
    German museums hold over 5,000,000 graphic prints, from the late Middle Ages to the present, in their graphic collections. This figure comes from a 2006 survey: "Graphische Sammlungen," www.graphischesammlungen.de/index.php?view=detail&id=23 (accessed February 4, 2016). Because of the poor availability of data, it is hard to ascertain how many additional sheets "slumber" in libraries and archives. Libraries often keep conglomerations of graphic sheets that have grown over the centuries through bequests and donations without being systematically accessible to the users of a collection. One such collection is the small but excellent collection of engravings at the Hamburg State and University Library. This article proposes how Resource Description and Access (RDA) can be adapted so that this special graphic material can be made accessible and usable, and how a standardized set of elements can be developed.
  3. Knowlton, S.A.: Power and change in the US cataloging community (2014) 0.02
    0.024838142 = product of:
      0.09935257 = sum of:
        0.09935257 = sum of:
          0.056493253 = weight(_text_:methods in 2599) [ClassicSimilarity], result of:
            0.056493253 = score(doc=2599,freq=2.0), product of:
              0.18168657 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.045191016 = queryNorm
              0.31093797 = fieldWeight in 2599, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2599)
          0.042859312 = weight(_text_:22 in 2599) [ClassicSimilarity], result of:
            0.042859312 = score(doc=2599,freq=2.0), product of:
              0.15825124 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045191016 = queryNorm
              0.2708308 = fieldWeight in 2599, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2599)
      0.25 = coord(1/4)
    
    Abstract
    The US cataloging community is an interorganizational network with the Library of Congress (LC) as the lead organization, which reserves to itself the power to shape cataloging rules. Peripheral members of the network who are interested in changing the rules or the network can use various strategies for organizational change that involve building ties to the decision makers at the hub of the network. The story of William E. Studwell's campaign for a subject heading code illustrates how some traditional scholarly methods of urging change, such as papers and presentations, are insufficient to achieve reform in an interorganizational network absent strategies to build alliances with the decision makers.
    Date
    10. 9.2000 17:38:22
  4. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.02
    0.024838142 = product of:
      0.09935257 = sum of:
        0.09935257 = sum of:
          0.056493253 = weight(_text_:methods in 2606) [ClassicSimilarity], result of:
            0.056493253 = score(doc=2606,freq=2.0), product of:
              0.18168657 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.045191016 = queryNorm
              0.31093797 = fieldWeight in 2606, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2606)
          0.042859312 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
            0.042859312 = score(doc=2606,freq=2.0), product of:
              0.15825124 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045191016 = queryNorm
              0.2708308 = fieldWeight in 2606, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2606)
      0.25 = coord(1/4)
    
    Abstract
    Catalogers have become fluent in information technologies such as web design, HyperText Markup Language (HTML), Cascading Style Sheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning information technology can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper discusses the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
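    As a hedged illustration of the XSLT repurposing the abstract describes (the stylesheet, element names, and mapping below are invented for this sketch, not taken from the paper):

```python
from lxml import etree

# Tiny illustrative XSLT: map a hypothetical <title> element into a
# Dublin Core dc:title, the kind of schema-to-schema transformation
# the article discusses.
xslt_doc = etree.XML(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <xsl:template match="/record">
    <metadata><dc:title><xsl:value-of select="title"/></dc:title></metadata>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt_doc)
source = etree.XML(b"<record><title>Metadata makeover</title></record>")
print(etree.tostring(transform(source), pretty_print=True).decode())
```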
    Date
    10. 9.2000 17:38:22
  5. D'Angelo, C.A.; Giuffrida, C.; Abramo, G.: ¬A heuristic approach to author name disambiguation in bibliometrics databases for large-scale research assessments (2011) 0.02
    0.021289835 = product of:
      0.08515934 = sum of:
        0.08515934 = sum of:
          0.048422787 = weight(_text_:methods in 4190) [ClassicSimilarity], result of:
            0.048422787 = score(doc=4190,freq=2.0), product of:
              0.18168657 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.045191016 = queryNorm
              0.26651827 = fieldWeight in 4190, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.046875 = fieldNorm(doc=4190)
          0.03673655 = weight(_text_:22 in 4190) [ClassicSimilarity], result of:
            0.03673655 = score(doc=4190,freq=2.0), product of:
              0.15825124 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045191016 = queryNorm
              0.23214069 = fieldWeight in 4190, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4190)
      0.25 = coord(1/4)
    
    Abstract
    National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because of almost overwhelming difficulties in identifying the true author of each publication. We will address this problem by presenting a heuristic approach to author name disambiguation in bibliometric datasets for large-scale research assessments. The application proposed concerns the Italian university system, comprising 80 universities and a research staff of over 60,000 scientists. The key advantage of the proposed approach is the ease of implementation. The algorithms are of practical application and have considerably better scalability and expandability properties than state-of-the-art unsupervised approaches. Moreover, the performance in terms of precision and recall, which can be further improved, seems thoroughly adequate for the typical needs of large-scale bibliometric research assessments.
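    A minimal sketch of the general idea, not the authors' actual algorithm: group publications under a surname-plus-initial key and use a secondary cue such as affiliation to separate namesakes. All names and records below are invented.

```python
from collections import defaultdict

# Toy bibliographic records sharing an ambiguous author name.
pubs = [
    {"author": "Rossi, M.", "affil": "Univ. Roma"},
    {"author": "Rossi, M.", "affil": "Univ. Roma"},
    {"author": "Rossi, M.", "affil": "Politecnico Milano"},
]

def key(name: str) -> str:
    # Normalize to a surname + first-initial blocking key.
    surname, initials = name.split(", ")
    return f"{surname.lower()}_{initials[0].lower()}"

groups = defaultdict(lambda: defaultdict(list))
for p in pubs:
    # Within a name key, bucket by affiliation as a cheap disambiguation cue.
    groups[key(p["author"])][p["affil"]].append(p)

for k, buckets in groups.items():
    print(k, "->", {a: len(v) for a, v in buckets.items()})
```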
    Date
    22. 1.2011 13:06:52
  6. Savoy, J.: Estimating the probability of an authorship attribution (2016) 0.02
    0.01774153 = product of:
      0.07096612 = sum of:
        0.07096612 = sum of:
          0.040352322 = weight(_text_:methods in 2937) [ClassicSimilarity], result of:
            0.040352322 = score(doc=2937,freq=2.0), product of:
              0.18168657 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.045191016 = queryNorm
              0.22209854 = fieldWeight in 2937, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2937)
          0.030613795 = weight(_text_:22 in 2937) [ClassicSimilarity], result of:
            0.030613795 = score(doc=2937,freq=2.0), product of:
              0.15825124 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045191016 = queryNorm
              0.19345059 = fieldWeight in 2937, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2937)
      0.25 = coord(1/4)
    
    Abstract
    In authorship attribution, various distance-based metrics have been proposed to determine the most probable author of a disputed text. In this paradigm, a distance is computed between each author profile and the query text, and these values are then used only to rank the possible authors. In this article, we analyze their distribution and show that it can be modeled as a mixture of two Beta distributions. Based on this finding, we demonstrate how to derive a more accurate probability that the closest author is, in fact, the real author. To evaluate this approach, we have chosen four authorship attribution methods (Burrows' Delta, Kullback-Leibler divergence, Labbé's intertextual distance, and naïve Bayes). As the first test collection, we have downloaded 224 State of the Union addresses (from 1790 to 2014) delivered by 41 U.S. presidents. The second test collection is formed by the Federalist Papers. The evaluations indicate that the accuracy rate of some authorship decisions can be improved. The suggested method can signal that a proposed attribution should be interpreted as merely possible, without strong certainty. Being able to quantify the certainty associated with an authorship decision can be a useful component when important decisions must be taken.
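    For illustration, a sketch of Burrows' Delta, one of the four distance measures named above; the feature set and frequencies are toy numbers, not data from the study:

```python
import numpy as np

# Burrows' Delta: z-score the relative frequencies of the most frequent
# words against corpus statistics, then average the absolute differences
# between the author profile and the query text.
def burrows_delta(author_freqs, query_freqs, corpus_mean, corpus_std):
    az = (author_freqs - corpus_mean) / corpus_std
    qz = (query_freqs - corpus_mean) / corpus_std
    return float(np.mean(np.abs(az - qz)))

# Toy numbers over a hypothetical 4-word feature set.
mean = np.array([0.05, 0.03, 0.02, 0.01])
std = np.array([0.01, 0.01, 0.005, 0.004])
author = np.array([0.055, 0.028, 0.021, 0.012])
query = np.array([0.048, 0.035, 0.018, 0.009])
print(burrows_delta(author, query, mean, std))  # smaller = more similar
```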
    Date
    7. 5.2016 21:22:27
  7. Potha, N.; Stamatatos, E.: Improving author verification based on topic modeling (2019) 0.01
    0.011278818 = product of:
      0.045115273 = sum of:
        0.045115273 = product of:
          0.09023055 = sum of:
            0.09023055 = weight(_text_:methods in 5385) [ClassicSimilarity], result of:
              0.09023055 = score(doc=5385,freq=10.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.4966275 = fieldWeight in 5385, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5385)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Authorship analysis attempts to reveal information about the authors of digital documents, enabling applications in digital humanities, text forensics, and cyber-security. Author verification is a fundamental task where, given a set of texts written by a certain author, we must decide whether another text is also by that author. In this article we systematically study the usefulness of topic modeling in author verification. We examine several author verification methods that cover the main paradigms, namely, intrinsic (attempt to solve a one-class classification task) and extrinsic (attempt to solve a binary classification task) methods, as well as profile-based (all documents of known authorship are treated cumulatively) and instance-based (each document of known authorship is treated separately) approaches, combined with well-known topic modeling methods such as Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA). We use benchmark data sets and demonstrate that LDA is better combined with extrinsic methods, while the most effective intrinsic method is based on LSI. Moreover, topic modeling seems to be particularly effective for profile-based approaches, and performance is enhanced when latent topics are extracted from an enriched set of documents. The comparison to state-of-the-art methods demonstrates the great potential of the approaches presented in this study. It also demonstrates that even when genre-agnostic external documents are used, the proposed extrinsic models are very competitive.
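    A hedged sketch of one combination the article examines, a profile-based check in an LSI space; the texts, feature choices, and the 0.5 threshold are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy data: texts of known authorship plus one disputed text.
known = ["the known author writes like this", "more text by the known author"]
unknown = "a disputed text that may be by the known author"

# Character n-gram TF-IDF features, then LSI via truncated SVD.
vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 3))
X = vec.fit_transform(known + [unknown])
lsi = TruncatedSVD(n_components=2).fit_transform(X)

# Profile-based: all known documents are collapsed into one author profile.
profile = lsi[:-1].mean(axis=0, keepdims=True)
sim = cosine_similarity(profile, lsi[-1:])[0, 0]
print("same author" if sim > 0.5 else "different author", sim)
```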
  8. Juola, P.; Mikros, G.K.; Vinsick, S.: ¬A comparative assessment of the difficulty of authorship attribution in Greek and in English (2019) 0.01
    0.010088081 = product of:
      0.040352322 = sum of:
        0.040352322 = product of:
          0.080704644 = sum of:
            0.080704644 = weight(_text_:methods in 4676) [ClassicSimilarity], result of:
              0.080704644 = score(doc=4676,freq=8.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.4441971 = fieldWeight in 4676, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4676)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Authorship attribution is an important problem in text classification, with many applications and a substantial body of research activity. Among the research findings is that many different methods will work, including a number of methods that are superficially language-independent (such as an analysis of the most common "words" or "character n-grams" in a document). Since all languages have words (and all written languages have characters), such a method could, in theory, work on any language. However, it is not clear that the methods that work best on, for example, English would also work best on other languages. It is not even clear that the same level of performance is achievable in different languages, even under identical conditions. Unfortunately, it is very difficult to achieve "identical conditions" in practice. A new corpus, developed by George Mikros, provides very tight controls not only for author but also for topic, thus enabling a direct comparison of performance levels between Greek and English. We compare a number of different methods head-to-head on this corpus and show that, overall, performance on English is higher than performance on Greek, often to a highly significant degree.
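    As a small illustration of the superficially language-independent "character n-gram" features mentioned above, the same extraction code runs unchanged on English and Greek toy strings (both invented for this sketch):

```python
from collections import Counter

# Most common character n-grams of a text: a feature type that needs no
# language-specific tokenization, so it applies to English and Greek alike.
def top_ngrams(text: str, n: int = 3, k: int = 5):
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return grams.most_common(k)

print(top_ngrams("authorship attribution in english"))
print(top_ngrams("απόδοση πατρότητας κειμένου στα ελληνικά"))
```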
  9. Adamovic, S.; Miskovic, V.; Milosavljevic, M.; Sarac, M.; Veinovic, M.: Automated language-independent authorship verification (for Indo-European languages) : facilitating adaptive visual exploration of scientific publications by citation links (2019) 0.01
    0.010088081 = product of:
      0.040352322 = sum of:
        0.040352322 = product of:
          0.080704644 = sum of:
            0.080704644 = weight(_text_:methods in 5327) [ClassicSimilarity], result of:
              0.080704644 = score(doc=5327,freq=8.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.4441971 = fieldWeight in 5327, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5327)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In this article we examine automated language-independent authorship verification using text examples in several representative Indo-European languages, in cases when the examined texts belong to an open set of authors, that is, the author is unknown. We showcase the set of developed language-dependent and language-independent features, the model of training examples, consisting of pairs of equal features for known and unknown texts, and the appropriate method of authorship verification. An authorship verification accuracy greater than 90% was accomplished by applying stylometric methods to four different languages (English, Greek, Spanish, and Dutch), although verification accuracy for Dutch is slightly lower. For the multilingual case, the highest authorship verification accuracy using basic machine-learning methods, over 90%, was achieved with the kNN and SVM-SMO methods combined with the SVM-RFE feature selection method. A further improvement in multilingual authorship verification accuracy, to over 94%, was accomplished with ensemble learning methods, with MultiboostAB being slightly more accurate but Random Forest generally more appropriate.
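    A sketch of the "SVM-RFE feature selection followed by kNN classification" pattern named in the abstract, using synthetic data rather than the paper's stylometric features:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for stylometric feature vectors.
X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=0)

# SVM-RFE: recursively eliminate features using a linear SVM's weights.
selector = RFE(SVC(kernel="linear"), n_features_to_select=8).fit(X, y)

# Verify with kNN on the selected feature subset.
scores = cross_val_score(KNeighborsClassifier(), X[:, selector.support_], y, cv=5)
print(f"kNN accuracy on SVM-RFE features: {scores.mean():.2f}")
```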
  10. Christensen, A.: Next generation catalogues : what do users think? (2013) 0.01
    0.0085600205 = product of:
      0.034240082 = sum of:
        0.034240082 = product of:
          0.068480164 = sum of:
            0.068480164 = weight(_text_:methods in 1476) [ClassicSimilarity], result of:
              0.068480164 = score(doc=1476,freq=4.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.37691376 = fieldWeight in 1476, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1476)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In the wake of the digital revolution, libraries have started rethinking their catalogues and reshaping them along the lines set by popular search engines and online retailers. Yet it has also become a hallmark of next-generation catalogues to reflect the results of studies concerning user behaviour and user needs and to rely on the participation of users in the development and testing of the new tools. A wide array of methods for user-driven design and development is being employed, ideally leveraging discovery platforms that reflect the specifics of library metadata and materials as well as the need for attractive design and useful new functionalities. After looking back at the history of user studies on online catalogues, we briefly investigate methods to involve users actively in the design and development processes for new catalogues before describing and examining the outcomes of studies of users' perceptions.
  11. Niu, J.: Evolving landscape in name authority control (2013) 0.01
    0.008070464 = product of:
      0.032281857 = sum of:
        0.032281857 = product of:
          0.064563714 = sum of:
            0.064563714 = weight(_text_:methods in 1901) [ClassicSimilarity], result of:
              0.064563714 = score(doc=1901,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.35535768 = fieldWeight in 1901, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1901)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This article presents a conceptual framework for library name authority control, including methods for disambiguating agents that share the same name and for collocating works of agents who use multiple names. It then discusses the identifier solutions tried or proposed in the library community for name authority control, analyzes the various identity management systems emerging outside of the library community, and envisions future trends in name authority control.
  12. Wakeling, S.; Clough, P.; Connaway, L.S.; Sen, B.; Tomás, D.: Users and uses of a global union catalog : a mixed-methods study of WorldCat.org (2017) 0.01
    0.0071333502 = product of:
      0.028533401 = sum of:
        0.028533401 = product of:
          0.057066802 = sum of:
            0.057066802 = weight(_text_:methods in 3794) [ClassicSimilarity], result of:
              0.057066802 = score(doc=3794,freq=4.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31409478 = fieldWeight in 3794, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3794)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents the first large-scale investigation of the users and uses of WorldCat.org, the world's largest bibliographic database and global union catalog. Using a mixed-methods approach involving focus group interviews with 120 participants, an online survey with 2,918 responses, and an analysis of transaction logs of approximately 15 million sessions from WorldCat.org, the study provides a new understanding of the context for global union catalog use. We find that WorldCat.org is accessed by a diverse population, with the three primary user groups being librarians, students, and academics. Use of the system is found to fall within three broad types of work task (professional, academic, and leisure), and we also present an emergent taxonomy of search tasks that encompasses known-item, unknown-item, and institutional information searches. Our results support the notion that union catalogs are primarily used for known-item searches, although the volume of traffic to WorldCat.org means that unknown-item searches nonetheless represent an estimated 250,000 sessions per month. Search engine referrals account for almost half of all traffic, and although WorldCat.org effectively connects users referred from institutional library catalogs to other libraries holding a sought item, users arriving from a search engine are less likely to connect to a library.
  13. Karaulova, M.; Gök, A.; Shapira, P.: Identifying author heritage using surname data : an application for Russian surnames (2019) 0.01
    0.0071333502 = product of:
      0.028533401 = sum of:
        0.028533401 = product of:
          0.057066802 = sum of:
            0.057066802 = weight(_text_:methods in 5223) [ClassicSimilarity], result of:
              0.057066802 = score(doc=5223,freq=4.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31409478 = fieldWeight in 5223, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5223)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This research article puts forward a method to identify the national heritage of authors based on the morphology of their surnames. Most studies in the field use variants of dictionary-based surname methods to identify ethnic communities, an approach that suffers from methodological limitations. Using the public file of ORCID (Open Researcher and Contributor ID) identifiers in 2015, we developed a surname-based identification method and applied it to infer Russian heritage from suffix-based morphological regularities. The method was developed conceptually and tested on an undersampled control set. Identification based on surname morphology was then complemented with first-name data to eliminate false-positive results. The method achieved 98% precision and 94% recall, superior to most other methods that use name data. The procedure can be adapted to identify the heritage of a variety of national groups with morphologically regular naming traditions. We elaborate on how the method can be employed to overcome long-standing limitations of using name data in bibliometric datasets. This identification method can contribute to advancing research in scientific mobility and migration, patenting by particular groups, publishing and collaboration, transnational and scientific diaspora links, and the effects of diversity on the innovative performance of organizations, regions, and countries.
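    A hedged illustration of suffix-based surname identification; the suffix list below contains a few well-known Russian surname endings, not the authors' full rule set, and the first-name false-positive filter is omitted:

```python
import re

# A handful of common Russian surname suffixes (illustrative, incomplete;
# a heuristic like this produces false positives without further filtering).
RUSSIAN_SUFFIXES = re.compile(r"(ov|ova|ev|eva|in|ina|sky|skaya)$", re.IGNORECASE)

def looks_russian(surname: str) -> bool:
    return bool(RUSSIAN_SUFFIXES.search(surname))

for name in ["Ivanov", "Karaulova", "Shapira", "Petrovsky"]:
    print(name, looks_russian(name))
```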
  14. Calhoun, K.: Supporting digital scholarship : bibliographic control, library co-operatives and open access repositories (2013) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 1482) [ClassicSimilarity], result of:
              0.056493253 = score(doc=1482,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 1482, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1482)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Research libraries have entered an era of discontinuous change: a time when the cumulated assets of the past do not guarantee future success. Bibliographic control, cooperative cataloguing systems, and library catalogues have been key assets in the research library service framework for supporting scholarship. This chapter examines these assets in the context of changing library collections, new metadata sources and methods, open access repositories, digital scholarship, and the purposes of research libraries. Advocating a fundamental rethinking of the research library service framework, the chapter concludes with a call for research libraries to collectively consider new approaches that could strengthen their roles as essential contributors to emergent, network-level scholarly research infrastructures.
  15. Polidoro, P.: Using qualitative methods to analyze online catalog interfaces (2015) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 1879) [ClassicSimilarity], result of:
              0.056493253 = score(doc=1879,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 1879, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1879)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  16. O'Dell, A.J.: How much does it cost to catalog a document? : a case study in Estonian university libraries (2015) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 2616) [ClassicSimilarity], result of:
              0.056493253 = score(doc=2616,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 2616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2616)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In the current socioeconomic climate, efficiency and performance have become very important in libraries. The need for library managers to justify their costs to their parent organizations has become particularly pressing. Time-driven activity-based costing (TDABC) helps libraries get a better picture of the cataloging activities they are actually engaged in, and of their costs. This article reviews the relevant literature to provide an overview of different cost accounting methods suitable for measuring the cataloging process. Then, through a case study conducted among Estonian university libraries, the TDABC approach was used to analyze the activities of the cataloging process in two university libraries.
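    The TDABC arithmetic the abstract refers to is simple enough to sketch: a capacity cost rate per time unit multiplied by the estimated time each cataloging activity consumes. All figures below are invented for illustration.

```python
# Minimal TDABC sketch with hypothetical numbers (not from the study).
cost_per_minute = 0.55          # assumed staff capacity cost (EUR/min)
activities = {                  # assumed time estimates (min/record)
    "searching copy": 4.0,
    "descriptive cataloging": 11.0,
    "subject analysis": 7.5,
    "authority work": 5.0,
}
total = sum(activities.values()) * cost_per_minute
print(f"estimated cost per record: {total:.2f} EUR")
```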
  17. Terrill, L.J.: ¬The state of cataloging research : an analysis of peer-reviewed journal literature, 2010-2014 (2016) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 5137) [ClassicSimilarity], result of:
              0.056493253 = score(doc=5137,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 5137, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5137)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The importance of cataloging research was highlighted by a resolution declaring 2010 as "The Year of Cataloging Research." This study of the peer-reviewed journal literature from 2010 to 2014 examined the state of cataloging literature since this proclamation. The goals were to determine the percentage of cataloging literature that can be classified as research, what research methods were used, and whether the articles contributed to the library assessment conversation. Nearly a quarter of the cataloging literature qualifies as research; however, a majority of researchers fail to make explicit connections between their work and the missions of their libraries.
  18. DuBose, J.: Russian, Japanese, and Latin oh my! : using technology to catalog non-english language titles (2019) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 5748) [ClassicSimilarity], result of:
              0.056493253 = score(doc=5748,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 5748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5748)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Nearly every library where the dominant language is English also holds materials written in other languages. These materials can present unique challenges for catalogers. Many non-English-language materials are held in the collections of the Special Collections Department of Mississippi State University (MSU). To properly process and catalog these materials, the cataloger used online tools that provided a greater understanding of the materials, allowing a higher cataloging standard. The author discusses the various tools and methods that were used to catalog these materials.
  19. Noruzi, A.: FRBR and Tillett's taxonomy of bibliographic relationships (2012) 0.01
    0.006122759 = product of:
      0.024491036 = sum of:
        0.024491036 = product of:
          0.048982073 = sum of:
            0.048982073 = weight(_text_:22 in 4564) [ClassicSimilarity], result of:
              0.048982073 = score(doc=4564,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.30952093 = fieldWeight in 4564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4564)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 2.2013 11:13:52
  20. Hamm, S.; Schneider, K.: Automatische Erschließung von Universitätsdissertationen (2015) 0.01
    0.006122759 = product of:
      0.024491036 = sum of:
        0.024491036 = product of:
          0.048982073 = sum of:
            0.048982073 = weight(_text_:22 in 1715) [ClassicSimilarity], result of:
              0.048982073 = score(doc=1715,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.30952093 = fieldWeight in 1715, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1715)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Dialog mit Bibliotheken. 27(2015) H.1, S.18-22