Search (84 results, page 1 of 5)

  • language_ss:"e"
  • theme_ss:"Formalerschließung"
  • year_i:[2010 TO 2020}
  1. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.14
    0.14154318 = product of:
      0.28308636 = sum of:
        0.19869895 = weight(_text_:markup in 2606) [ClassicSimilarity], result of:
          0.19869895 = score(doc=2606,freq=4.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.7189135 = fieldWeight in 2606, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2606)
        0.08438742 = product of:
          0.12658113 = sum of:
            0.08670129 = weight(_text_:language in 2606) [ClassicSimilarity], result of:
              0.08670129 = score(doc=2606,freq=6.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.5255505 = fieldWeight in 2606, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2606)
            0.039879844 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
              0.039879844 = score(doc=2606,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.2708308 = fieldWeight in 2606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2606)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
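    The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) scoring: each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (sqrt(tf) x idf x fieldNorm), and clause sums are scaled by a coordination factor coord(matching clauses / total clauses). A minimal sketch in Python that reproduces this record's final score purely from the numbers already shown:

    import math

    # Lucene ClassicSimilarity pieces, taken from the explain tree above.
    def term_score(freq, idf, query_norm, field_norm):
        query_weight = idf * query_norm                    # e.g. 6.572923 * 0.042049456 = 0.27638784
        field_weight = math.sqrt(freq) * idf * field_norm  # tf() = sqrt(raw term frequency)
        return query_weight * field_weight

    markup   = term_score(4.0, 6.572923,  0.042049456, 0.0546875)  # ~0.19869895
    language = term_score(6.0, 3.9232929, 0.042049456, 0.0546875)  # ~0.08670129
    term_22  = term_score(2.0, 3.5018296, 0.042049456, 0.0546875)  # ~0.039879844

    inner = (language + term_22) * (2 / 3)   # coord(2/3): two of three inner clauses matched
    total = (markup + inner) * (2 / 4)       # coord(2/4): two of four outer clauses matched
    print(round(total, 8))                   # ~0.14154318, the score shown for this record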
    
    Abstract
    Catalogers have become fluent in information technologies such as web design, HyperText Markup Language (HTML), Cascading Style Sheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning these technologies can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper discusses the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
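    The XSLT workflow the abstract describes can be exercised with any XSLT 1.0 processor; a minimal sketch using Python's lxml, with an invented flat source record and a simplified, MODS-style target (the element names, namespaces, and values are illustrative assumptions, not taken from the article):

    from lxml import etree

    # Illustrative flat source record to be repurposed into another schema.
    source = etree.XML(b"""<record>
      <title>Metadata makeover</title>
      <creator>Ilik, V.</creator>
    </record>""")

    # Hypothetical stylesheet: one template that restructures the record.
    stylesheet = etree.XML(b"""<xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/record">
        <mods>
          <titleInfo><title><xsl:value-of select="title"/></title></titleInfo>
          <name><namePart><xsl:value-of select="creator"/></namePart></name>
        </mods>
      </xsl:template>
    </xsl:stylesheet>""")

    transform = etree.XSLT(stylesheet)   # compile the stylesheet once ...
    print(str(transform(source)))        # ... then reuse it for every record in a batch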
    Date
    10. 9.2000 17:38:22
  2. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.07
    0.07167879 = product of:
      0.14335757 = sum of:
        0.10035812 = weight(_text_:markup in 3373) [ClassicSimilarity], result of:
          0.10035812 = score(doc=3373,freq=2.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.36310613 = fieldWeight in 3373, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
        0.042999458 = product of:
          0.064499184 = sum of:
            0.03575501 = weight(_text_:language in 3373) [ClassicSimilarity], result of:
              0.03575501 = score(doc=3373,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.21673335 = fieldWeight in 3373, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3373)
            0.028744178 = weight(_text_:29 in 3373) [ClassicSimilarity], result of:
              0.028744178 = score(doc=3373,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.19432661 = fieldWeight in 3373, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3373)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, address the technical challenges, analysis, solutions, and decisions, and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archives, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
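    Preparing a corpus of that size (900MiB) generally means streaming the XML rather than loading whole documents; a minimal sketch, not the authors' actual tooling, that uses lxml's iterparse to count <c>/<c01>-<c12> components per finding aid while keeping memory flat (the directory layout and tag handling are assumptions for illustration):

    from pathlib import Path
    from lxml import etree

    def count_components(path):
        """Stream one EAD finding aid and count its component elements."""
        count = 0
        for _event, element in etree.iterparse(str(path), events=("end",)):
            tag = etree.QName(element).localname      # ignore namespace prefixes
            if tag == "c" or (tag.startswith("c") and tag[1:].isdigit()):
                count += 1
            element.clear()                           # free finished subtrees
        return count

    for finding_aid in sorted(Path("ead_corpus").glob("*.xml")):  # hypothetical layout
        print(finding_aid.name, count_components(finding_aid))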
    Date
    31. 1.2017 13:29:56
  3. Adamovic, S.; Miskovic, V.; Milosavljevic, M.; Sarac, M.; Veinovic, M.: Automated language-independent authorship verification (for Indo-European languages) : facilitating adaptive visual exploration of scientific publications by citation links (2019) 0.02
    0.016709033 = product of:
      0.06683613 = sum of:
        0.06683613 = product of:
          0.10025419 = sum of:
            0.07151002 = weight(_text_:language in 5327) [ClassicSimilarity], result of:
              0.07151002 = score(doc=5327,freq=8.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.4334667 = fieldWeight in 5327, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5327)
            0.028744178 = weight(_text_:29 in 5327) [ClassicSimilarity], result of:
              0.028744178 = score(doc=5327,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.19432661 = fieldWeight in 5327, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5327)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    In this article we examine automated language-independent authorship verification using text examples in several representative Indo-European languages, in cases where the examined texts belong to an open set of authors, that is, the author is unknown. We showcase the set of developed language-dependent and language-independent features, the model of training examples, consisting of pairs of equal features for known and unknown texts, and the appropriate method of authorship verification. An authorship verification accuracy greater than 90% was accomplished via the application of stylometric methods on four different languages (English, Greek, Spanish, and Dutch), although verification accuracy for Dutch is slightly lower. For the multilingual case, the highest authorship verification accuracy using basic machine-learning methods, over 90%, was achieved by the application of the kNN and SVM-SMO methods, using the feature selection method SVM-RFE. The improvement in authorship verification accuracy in multilingual cases, over 94%, was accomplished via ensemble learning methods, with the MultiboostAB method being a bit more accurate and Random Forest generally more appropriate.
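    A minimal sketch of the kind of pipeline the abstract describes, combining language-independent character n-gram features, SVM-RFE-style feature selection, and a kNN verifier; the toy texts, labels, and parameter values are illustrative assumptions, not the study's data or settings:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.feature_selection import RFE
    from sklearn.svm import LinearSVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    # Toy training data: each text stands in for a known/questioned pair,
    # labelled 1 = same author, 0 = different author. Purely illustrative.
    texts = [
        "the ship sailed at dawn and the crew kept silent",
        "at dawn the ship sailed while the crew kept silent",
        "quarterly earnings exceeded the analysts' expectations",
        "the committee postponed the vote until further notice",
    ]
    labels = [1, 1, 0, 0]

    pipeline = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 3)),  # language-independent char n-grams
        RFE(LinearSVC(), n_features_to_select=20),             # SVM-RFE-style feature selection
        KNeighborsClassifier(n_neighbors=1),                   # kNN verifier
    )
    pipeline.fit(texts, labels)
    print(pipeline.predict(["the ship sailed at dawn, the crew silent as ever"]))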
    Date
    7. 7.2019 11:29:43
  4. Leong, J.H.-t.: ¬The convergence of metadata and bibliographic control? : trends and patterns in addressing the current issues and challenges of providing subject access (2010) 0.02
    0.01586188 = product of:
      0.06344752 = sum of:
        0.06344752 = product of:
          0.09517127 = sum of:
            0.06067826 = weight(_text_:language in 3355) [ClassicSimilarity], result of:
              0.06067826 = score(doc=3355,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.3678087 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
            0.03449301 = weight(_text_:29 in 3355) [ClassicSimilarity], result of:
              0.03449301 = score(doc=3355,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.23319192 = fieldWeight in 3355, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Resource description and discovery have generally been facilitated through two approaches, namely bibliographic control and metadata, which now may converge in response to current issues and challenges of providing subject access. Four categories of major issues and challenges in the provision of subject access to digital and non-digital resources are: 1) the advancement of new knowledge; 2) the fall of controlled vocabulary and the rise of natural language; 3) digitizing and networking the traditional catalogue systems; and 4) electronic publishing and the Internet. The creation of new knowledge and the debate about the use of natural language and controlled vocabulary as subject headings become even more intense in the digital and online environment. The third and fourth categories arose after the emergence of networked environments and the rapid expansion of electronic resources. Recognizing the convergence of metadata schemas and bibliographic control calls for adapting to the new environment by developing tools that exploit the strengths of both.
    Source
    Knowledge organization. 37(2010) no.1, S.29-42
  5. Devaul, H.; Diekema, A.R.; Ostwald, J.: Computer-assisted assignment of educational standards using natural language processing (2011) 0.02
    0.015810166 = product of:
      0.06324066 = sum of:
        0.06324066 = product of:
          0.094860986 = sum of:
            0.06067826 = weight(_text_:language in 4199) [ClassicSimilarity], result of:
              0.06067826 = score(doc=4199,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.3678087 = fieldWeight in 4199, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4199)
            0.034182724 = weight(_text_:22 in 4199) [ClassicSimilarity], result of:
              0.034182724 = score(doc=4199,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.23214069 = fieldWeight in 4199, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4199)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Educational standards are a central focus of the current educational system in the United States, underpinning educational practice, curriculum design, teacher professional development, and high-stakes testing and assessment. Digital library users have requested that this information be accessible in association with digital learning resources to support teaching and learning as well as accountability requirements. Providing this information is complex because of the variability and number of standards documents in use at the national, state, and local levels. This article describes a cataloging tool that aids catalogers in the assignment of standards metadata to digital library resources, using natural language processing techniques. The research explores whether the standards suggestor service would suggest the same standards as a human, whether relevant standards are ranked appropriately in the result set, and whether the relevance of the suggested assignments improves when, in addition to resource content, metadata is included in the query to the cataloging tool. The article also discusses how this service might streamline the cataloging workflow.
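    One simple way to frame such a suggestion service is as a ranking problem: represent each standards statement and the incoming resource (content plus metadata) as TF-IDF vectors and return the closest standards. A minimal sketch under that assumption; the statements and resource text are invented, not the actual tool or corpus:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Illustrative standards statements; a real service would index thousands,
    # drawn from national, state, and local documents.
    standards = [
        "Students analyze how water moves through the water cycle.",
        "Students model the forces acting on objects in motion.",
        "Students interpret data presented in graphs and tables.",
    ]

    # Resource text plus its catalog metadata, concatenated into one query,
    # mirroring the question of whether added metadata improves the ranking.
    resource = "Interactive lesson on evaporation and condensation. Subjects: water cycle; weather."

    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(standards + [resource])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

    for rank, idx in enumerate(scores.argsort()[::-1], start=1):
        print(f"{rank}. ({scores[idx]:.2f}) {standards[idx]}")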
    Date
    22. 1.2011 14:25:32
  6. Diao, J.: "Fu hao," "fu hao," "fuHao," or "fu Hao"? : a cataloger's navigation of an ancient Chinese woman's name (2015) 0.02
    0.01504981 = product of:
      0.06019924 = sum of:
        0.06019924 = product of:
          0.090298854 = sum of:
            0.05005701 = weight(_text_:language in 2009) [ClassicSimilarity], result of:
              0.05005701 = score(doc=2009,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.30342668 = fieldWeight in 2009, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2009)
            0.040241845 = weight(_text_:29 in 2009) [ClassicSimilarity], result of:
              0.040241845 = score(doc=2009,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.27205724 = fieldWeight in 2009, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2009)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Chinese language catalogers' work is not only challenged by the revolution in cataloging standards and principles, but also by ancient Chinese names that emerged in archaeological discoveries and Chinese classic texts, which create a significant impact on bibliographic description and retrieval in terms of consistency and accuracy. This article takes an example of one ancient Chinese lady's name that is inconsistently romanized and described in OCLC to explore the reasons that cause the name variations and to propose an appropriate authorized access point after consulting both Western and Eastern scholarly practices. This article investigates the evolving history of pre-Qin Chinese names that are not addressed or exemplified in the Library of Congress Romanization Table, and recommends a revision of that Table.
    Date
    31. 5.2015 9:29:18
  7. Theimer, S.: ¬A cataloger's resolution to become more creative : how and why (2012) 0.01
    0.013353615 = product of:
      0.05341446 = sum of:
        0.05341446 = product of:
          0.08012169 = sum of:
            0.040241845 = weight(_text_:29 in 1934) [ClassicSimilarity], result of:
              0.040241845 = score(doc=1934,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.27205724 = fieldWeight in 1934, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1934)
            0.039879844 = weight(_text_:22 in 1934) [ClassicSimilarity], result of:
              0.039879844 = score(doc=1934,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.2708308 = fieldWeight in 1934, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1934)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Date
    29. 5.2015 11:08:22
  8. Martin, K.E.; Mundle, K.: Positioning libraries for a new bibliographic universe (2014) 0.01
    0.012848122 = product of:
      0.05139249 = sum of:
        0.05139249 = product of:
          0.07708873 = sum of:
            0.042906005 = weight(_text_:language in 2608) [ClassicSimilarity], result of:
              0.042906005 = score(doc=2608,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.26008 = fieldWeight in 2608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2608)
            0.034182724 = weight(_text_:22 in 2608) [ClassicSimilarity], result of:
              0.034182724 = score(doc=2608,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.23214069 = fieldWeight in 2608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2608)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    This paper surveys the English-language literature on cataloging and classification published during 2011 and 2012, covering both theory and application. A major theme of the literature centered on Resource Description and Access (RDA), as the period covered in this review includes the conclusion of the RDA test, revisions to RDA, and the implementation decision. Explorations in the theory and practical applications of the Functional Requirements for Bibliographic Records (FRBR), upon which RDA is organized, are also heavily represented. Library involvement with linked data through the creation of prototypes and vocabularies is explored further during the period. Other areas covered in the review include: classification, controlled vocabularies and name authority, evaluation and history of cataloging, special formats cataloging, cataloging and discovery services, non-AACR2/RDA metadata, cataloging workflows, and the education and careers of catalogers.
    Date
    10. 9.2000 17:38:22
  9. Delsey, T.: ¬The Making of RDA (2016) 0.01
    0.012848122 = product of:
      0.05139249 = sum of:
        0.05139249 = product of:
          0.07708873 = sum of:
            0.042906005 = weight(_text_:language in 2946) [ClassicSimilarity], result of:
              0.042906005 = score(doc=2946,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.26008 = fieldWeight in 2946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2946)
            0.034182724 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
              0.034182724 = score(doc=2946,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.23214069 = fieldWeight in 2946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2946)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    The author revisits the development of RDA from its inception in 2005 through to its initial release in 2010. The development effort is set in the context of an evolving digital environment that was transforming both the production and dissemination of information resources and the technologies used to create, store, and access data describing those resources. The author examines the interplay between strategic commitments to align RDA with new conceptual models, emerging database structures, and metadata developments in allied communities, on the one hand, and compatibility with AACR2 legacy databases on the other. Aspects of the development effort examined include the structuring of RDA as a resource description language, organizing the new standard as a working tool, and refining guidelines and instructions for recording RDA data.
    Date
    17. 5.2016 19:22:40
  10. Tillett, B.B.: Complementarity of perspectives for resource descriptions (2015) 0.01
    0.010749864 = product of:
      0.042999458 = sum of:
        0.042999458 = product of:
          0.064499184 = sum of:
            0.03575501 = weight(_text_:language in 2288) [ClassicSimilarity], result of:
              0.03575501 = score(doc=2288,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.21673335 = fieldWeight in 2288, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2288)
            0.028744178 = weight(_text_:29 in 2288) [ClassicSimilarity], result of:
              0.028744178 = score(doc=2288,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.19432661 = fieldWeight in 2288, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2288)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Bibliographic data is used to describe resources held in the collections of libraries, archives and museums. That data is mostly available on the Web today and mostly as linked data. Also on the Web are the controlled vocabulary systems of name authority files, like the Virtual International Authority File (VIAF), classification systems, and subject terms. These systems offer their own linked data to potentially help users find the information they want - whether at their local library or anywhere in the world that is willing to make their resources available. We have found it beneficial to merge authority data for names on a global level, as the entities are relatively clear. That is not true for subject concepts and terminology that have categorisation systems developed according to varying principles and schemes and are in multiple languages. Rather than requiring everyone in the world to use the same categorisation/classification system in the same language, we know that the Web offers us the opportunity to add descriptors assigned around the world using multiple systems from multiple perspectives to identify our resources. Those descriptors add value to refine searches, help users worldwide and share globally what each library does locally.
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. u. M.I. Cordeiro
  11. Rigby, C.: Nunavut libraries online establish Inuit language bibliographic cataloging standards : promoting indigenous language using a commercial ILS (2015) 0.01
    0.0072251074 = product of:
      0.02890043 = sum of:
        0.02890043 = product of:
          0.08670129 = sum of:
            0.08670129 = weight(_text_:language in 2182) [ClassicSimilarity], result of:
              0.08670129 = score(doc=2182,freq=6.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.5255505 = fieldWeight in 2182, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2182)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This article examines shared cataloging practices in Nunavut, Canada, where Inuit form 85% of the general population and three official languages, including the Inuit language (Inuktitut/Inuinnaqtun), English, and French, are used in government and daily discourse. The partners in the Nunavut Libraries Online consortium, together with the Nunavut Government translation bureau, have developed a common vocabulary for creating bibliographic records in Inuktitut, including syllabic script, and used this to create bibliographic cataloging standards, under the Anglo-American Cataloguing Rules, Second Edition, for creating multilingual and multiscript MARC-compliant, Integrated Library System-compatible records that accurately reflect the multilingual content of material published in and about Nunavut and the Inuit.
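    MARC 21 represents multiscript content by pairing a romanized field with an 880 (Alternate Graphic Representation) field carrying the original script, linked through subfield $6. A minimal sketch of that pairing in plain Python; the record content and occurrence numbers are invented, no particular MARC library is assumed, and the script-code portion of $6 is omitted for brevity:

    # Each field is (tag, {subfield code: value}); subfield $6 links a romanized
    # field and its 880 counterpart via the partner tag plus an occurrence number.
    record = [
        ("245", {"6": "880-01", "a": "Unikkaaqtuat :", "b": "traditional stories."}),
        ("880", {"6": "245-01", "a": "<title in Inuktitut syllabics> :", "b": "traditional stories."}),
    ]

    def paired_fields(fields):
        """Yield (romanized field, matching 880 field) pairs resolved via $6."""
        by_link = {subfields["6"]: (tag, subfields)
                   for tag, subfields in fields if "6" in subfields}
        for tag, subfields in fields:
            if tag != "880" and "6" in subfields:
                occurrence = subfields["6"].split("-")[1]       # e.g. "880-01" -> "01"
                yield (tag, subfields), by_link.get(f"{tag}-{occurrence}")

    for roman, script in paired_fields(record):
        partner = script[1]["a"] if script else "(no 880 field)"
        print(roman[0], roman[1]["a"], "<->", partner)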
  12. DuBose, J.: Russian, Japanese, and Latin oh my! : using technology to catalog non-english language titles (2019) 0.01
    0.0072251074 = product of:
      0.02890043 = sum of:
        0.02890043 = product of:
          0.08670129 = sum of:
            0.08670129 = weight(_text_:language in 5748) [ClassicSimilarity], result of:
              0.08670129 = score(doc=5748,freq=6.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.5255505 = fieldWeight in 5748, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5748)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Nearly every library where the dominant language is English also has materials that are written in other languages. These materials can present unique challenges for catalogers. Many non-English language materials are held in the collections of the Special Collection Department of Mississippi State University (MSU). To properly process and catalog these materials, the cataloger used online tools that provided a greater understanding of the materials, allowing for a higher standard of cataloging. The author discusses the various tools and methods that were used to catalog these materials.
  13. Gentili-Tedeschi, M.: Music presentation format : toward a cataloging babel? (2015) 0.01
    0.005056522 = product of:
      0.020226087 = sum of:
        0.020226087 = product of:
          0.06067826 = sum of:
            0.06067826 = weight(_text_:language in 1885) [ClassicSimilarity], result of:
              0.06067826 = score(doc=1885,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.3678087 = fieldWeight in 1885, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1885)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This case study on cataloging notated music focuses on music presentation format, and the use of controlled vocabularies in a multilingual context, when concepts do not have corresponding terms in one or more languages, and when common language terms are mixed with technical terms in a specialized context. Issues concern the terminological correspondence among different languages, and the consequent risks if only one language is taken into account or the meaning of one word is arbitrarily altered; English linguistic pragmatism may lead to wrong conceptual results when it points directly to the result of a process, while other languages focus on the process needed to obtain that result. Considerations on the use of codes in MARC formats and on how music presentation is treated in Functional Requirements for Bibliographic Records (FRBR) are included, and numerous illustrated examples, understandable even by non-music experts, support the article.
  14. Nuttall, F.X.; Oh, S.G.: Party identifiers (2011) 0.00
    0.004767334 = product of:
      0.019069336 = sum of:
        0.019069336 = product of:
          0.05720801 = sum of:
            0.05720801 = weight(_text_:language in 1898) [ClassicSimilarity], result of:
              0.05720801 = score(doc=1898,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.34677336 = fieldWeight in 1898, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1898)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    As Digital Media develops into a mature market, the proper referencing of digital content is increasingly critical. The identification of Parties who contribute to content is key to ensuring efficient discovery services and royalty tracking. Far from being simple numbers, Party Identifiers such as ISNI are built on rigorous structures meeting the requirements of diverse media such as books, music, or films. Designed to accurately identify Natural Persons and Legal Entities alike, Party Identifiers must also support language variances, cultural diversity, and stringent data privacy regulations.
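    One piece of the "rigorous structure" behind such identifiers: an ISNI is 16 characters, the last being a check character computed with ISO 7064 MOD 11-2 (the same scheme ORCID uses). A minimal sketch of that calculation; the sample base number is made up:

    def isni_check_character(first_15_digits: str) -> str:
        """Compute the ISO 7064 MOD 11-2 check character used by ISNI (and ORCID)."""
        total = 0
        for ch in first_15_digits:
            total = (total + int(ch)) * 2
        remainder = total % 11
        result = (12 - remainder) % 11
        return "X" if result == 10 else str(result)

    def is_valid_isni(isni: str) -> bool:
        digits = isni.replace(" ", "").replace("-", "")
        return (len(digits) == 16
                and digits[:15].isdigit()
                and isni_check_character(digits[:15]) == digits[15])

    base = "123456789012345"                      # made-up value, purely to exercise the check
    candidate = base + isni_check_character(base)
    print(candidate, is_valid_isni(candidate))    # valid by construction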
  15. Juola, P.; Mikros, G.K.; Vinsick, S.: ¬A comparative assessment of the difficulty of authorship attribution in Greek and in English (2019) 0.00
    0.004213768 = product of:
      0.016855072 = sum of:
        0.016855072 = product of:
          0.050565217 = sum of:
            0.050565217 = weight(_text_:language in 4676) [ClassicSimilarity], result of:
              0.050565217 = score(doc=4676,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.30650726 = fieldWeight in 4676, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4676)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Authorship attribution is an important problem in text classification, with many applications and a substantial body of research activity. Among the research findings is that many different methods will work, including a number of methods that are superficially language-independent (such as an analysis of the most common "words" or "character n-grams" in a document). Since all languages have words (and all written languages have characters), this method could (in theory) work on any language. However, it is not clear that the methods that work best on, for example, English, would also work best on other languages. It is not even clear that the same level of performance is achievable in different languages, even under identical conditions. Unfortunately, it is very difficult to achieve "identical conditions" in practice. A new corpus, developed by George Mikros, provides very tight controls not only for author but also for topic, thus enabling a direct comparison of performance levels between the two languages, Greek and English. We compare a number of different methods head-to-head on this corpus, and show that, overall, performance on English is higher than performance on Greek, often highly significantly so.
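    The "most common character n-grams" technique mentioned above fits in a few lines: build a relative-frequency profile per text and compare profiles with cosine similarity. A minimal sketch; the sentences are toy examples, not corpus data:

    from collections import Counter
    from math import sqrt

    def char_ngram_profile(text, n=3, top=200):
        """Relative frequencies of the most common character n-grams in a text."""
        grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
        total = sum(grams.values())
        return {g: c / total for g, c in grams.most_common(top)}

    def cosine(p, q):
        shared = set(p) & set(q)
        dot = sum(p[g] * q[g] for g in shared)
        norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
        return dot / norm if norm else 0.0

    known = "the harbour was quiet and the gulls wheeled over the empty pier"
    questioned = "the pier lay empty while gulls wheeled over the quiet harbour"
    unrelated = "quarterly revenue grew despite persistent supply constraints"

    print(cosine(char_ngram_profile(known), char_ngram_profile(questioned)))  # expected higher
    print(cosine(char_ngram_profile(known), char_ngram_profile(unrelated)))   # expected lower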
  16. Taylor, S.; Jacobi, K.; Knight, E.; Foster, D.: Cataloging in a remote location : a case study of international collaboration in the Galapagos Islands (2013) 0.00
    0.0041714176 = product of:
      0.01668567 = sum of:
        0.01668567 = product of:
          0.05005701 = sum of:
            0.05005701 = weight(_text_:language in 1943) [ClassicSimilarity], result of:
              0.05005701 = score(doc=1943,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.30342668 = fieldWeight in 1943, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1943)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The Corley Smith Library is a small, special library located at the Charles Darwin Research Station in the Galapagos Islands. Currently, the library is managed by international volunteer librarians in collaboration with Station staff and local volunteers. Recently the library migrated its online public access catalog to Koha. We describe the process of selecting an open-source integrated library system and implementing Koha. Cataloging in this remote location presents challenges related to technology, staff expertise, language, local practices, and obtaining supplies. We define the strategies to address these issues, including long-term goals of copy cataloging with Z39.50 and remote cataloging by volunteer librarians.
  17. Piscitelli, F.A.: When does the forename end and the surname begin? : saints' names as compound forenames in Spanish (2019) 0.00
    0.0041714176 = product of:
      0.01668567 = sum of:
        0.01668567 = product of:
          0.05005701 = sum of:
            0.05005701 = weight(_text_:language in 5276) [ClassicSimilarity], result of:
              0.05005701 = score(doc=5276,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.30342668 = fieldWeight in 5276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5276)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    While cataloging colonial-era Spanish-language materials, the investigator encountered personal names in which the forename, given in honor of a saint, includes a phrase-like qualifier such as a place name or attribute. In these situations, catalogers occasionally mistake the qualifier as part of the surname. Cataloging rules provide guidance in establishing compound surnames but not so much with forenames. For this article, 28 such forenames were searched in the Library of Congress Name Authority File to identify problematic authorized access points. Familiarity with naming customs in Spanish-speaking societies and with saints' names is needed when creating or revising these access points.
  18. Beall, J.: Abbreviations, full spellings, and searchers' preferences (2011) 0.00
    0.003832557 = product of:
      0.015330228 = sum of:
        0.015330228 = product of:
          0.045990683 = sum of:
            0.045990683 = weight(_text_:29 in 4166) [ClassicSimilarity], result of:
              0.045990683 = score(doc=4166,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.31092256 = fieldWeight in 4166, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4166)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    25. 5.2015 18:29:49
  19. Richert, N.: Authors in the Mathematical Reviews/MathSciNet database (2011) 0.00
    0.003832557 = product of:
      0.015330228 = sum of:
        0.015330228 = product of:
          0.045990683 = sum of:
            0.045990683 = weight(_text_:29 in 1895) [ClassicSimilarity], result of:
              0.045990683 = score(doc=1895,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.31092256 = fieldWeight in 1895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1895)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    25. 5.2015 18:29:24
  20. Niu, J.: Evolving landscape in name authority control (2013) 0.00
    0.003832557 = product of:
      0.015330228 = sum of:
        0.015330228 = product of:
          0.045990683 = sum of:
            0.045990683 = weight(_text_:29 in 1901) [ClassicSimilarity], result of:
              0.045990683 = score(doc=1901,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.31092256 = fieldWeight in 1901, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1901)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    29. 5.2015 13:20:17

Types

  • a 79
  • el 6
  • b 4
  • m 3
  • n 1
  • r 1