Search (106 results, page 1 of 6)

  • × language_ss:"e"
  • × type_ss:"a"
  • × type_ss:"el"
  • × year_i:[2010 TO 2020}
  1. Lee, W.-C.: Conflicts of semantic warrants in cataloging practices (2017) 0.11
    0.10819927 = product of:
      0.1442657 = sum of:
        0.008582841 = weight(_text_:information in 3871) [ClassicSimilarity], result of:
          0.008582841 = score(doc=3871,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 3871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3871)
        0.1106488 = weight(_text_:standards in 3871) [ClassicSimilarity], result of:
          0.1106488 = score(doc=3871,freq=8.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.49242854 = fieldWeight in 3871, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3871)
        0.025034059 = product of:
          0.050068118 = sum of:
            0.050068118 = weight(_text_:organization in 3871) [ClassicSimilarity], result of:
              0.050068118 = score(doc=3871,freq=4.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.27854347 = fieldWeight in 3871, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3871)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
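     The breakdown above is the kind of per-term "explain" output produced by Lucene's ClassicSimilarity (TF-IDF) scoring: each matching term contributes queryWeight x fieldWeight, with queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, and coord() scales the sum by the fraction of query clauses that matched. A minimal sketch in Python, using only the values printed above for the term "standards" in document 3871, reproduces the figures:

```python
# Minimal sketch of Lucene's ClassicSimilarity arithmetic, using only the
# values printed in the explanation above (doc 3871, term "standards").
from math import sqrt, log

idf = 4.4569545          # idf = 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 1394)
query_norm = 0.050415643
tf = sqrt(8.0)           # tf(freq=8.0) = sqrt(freq) = 2.828427
field_norm = 0.0390625

query_weight = idf * query_norm           # ~0.22470023
field_weight = tf * idf * field_norm      # ~0.49242854
term_score = query_weight * field_weight  # ~0.1106488

# The document score sums the matching term contributions and applies the
# coordination factor coord(matching clauses / total clauses), here 3/4.
contributions = [0.008582841, 0.1106488, 0.025034059]
doc_score = sum(contributions) * 0.75     # ~0.10819927

assert abs((1 + log(44218 / 1394)) - idf) < 1e-5
print(term_score, doc_score)
```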
    
    Abstract
     This study presents preliminary themes that surfaced from an ongoing ethnographic study. The research question is: how and where do cultures influence the cataloging practices of using U.S. standards to catalog Chinese materials? The author applies warrant as a lens for evaluating knowledge representation systems, and extends the application from examining classificatory decisions to cataloging decisions. Semantic warrant as a conceptual tool allows us to recognize and name the various rationales behind cataloging decisions, and grants us explanatory power and the language to "visualize" and reflect on the conflicting priorities in cataloging practices. Through participatory observation, the author recorded the cataloging practices of two Chinese catalogers working on the same cataloging project. One of the catalogers is U.S.-trained; the other is a professor of Library and Information Science from China who is also a subject expert and a cataloger of Chinese special collections. The study shows how the catalogers describe Chinese special collections using many U.S. cataloging and classification standards but from different approaches. The author presents particular cases derived from the fieldwork, with an emphasis on the many layers presented by cultures, principles, standards, and practices of different scope, each of which may represent conflicting warrants. From this, it becomes clear that conflicts of warrants influence cataloging practice. We may view the conflicting warrants as an interpretation of the tension between different semantic warrants and the globalization and localization of cataloging standards.
    Content
     Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
  2. Escolano Rodríguez, E.: RDA e ISBD : history of a relationship (2016) 0.08
    0.07701033 = product of:
      0.15402067 = sum of:
        0.13277857 = weight(_text_:standards in 2951) [ClassicSimilarity], result of:
          0.13277857 = score(doc=2951,freq=8.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.59091425 = fieldWeight in 2951, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=2951)
        0.021242103 = product of:
          0.042484205 = sum of:
            0.042484205 = weight(_text_:organization in 2951) [ClassicSimilarity], result of:
              0.042484205 = score(doc=2951,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23635197 = fieldWeight in 2951, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2951)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     This article attempts to clarify the nature of the relationship between the RDA and ISBD standards in order to understand their differences and connections, and to remove some misinterpretations about this relationship. With this objective, the article analyzes aspects that can account for their differences, such as the types of standards, points of view, scope, origin, and the policies of the group or organization in charge of their creation and development, which logically justify these differences. These differences have not been an obstacle to a sound relationship, aided by Linked Data technology. The article also gives an account of the mapping and alignment work carried out between the standards in order to contribute properly to the Semantic Web. This knowledge is fundamental for current catalogers to use the standards judiciously, knowledgeably, and responsibly.
  3. Dobreski, B.: Authority and universalism : conventional values in descriptive catalog codes (2017) 0.08
    0.07589395 = product of:
      0.1517879 = sum of:
        0.117099695 = weight(_text_:standards in 3876) [ClassicSimilarity], result of:
          0.117099695 = score(doc=3876,freq=14.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.5211374 = fieldWeight in 3876, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=3876)
        0.03468821 = product of:
          0.06937642 = sum of:
            0.06937642 = weight(_text_:organization in 3876) [ClassicSimilarity], result of:
              0.06937642 = score(doc=3876,freq=12.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.38596115 = fieldWeight in 3876, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3876)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Every standard embodies a particular set of values. Some aspects are privileged while others are masked. Values embedded within knowledge organization standards have special import in that they are further perpetuated by the data they are used to generate. Within libraries, descriptive catalog codes serve as prominent knowledge organization standards, guiding the creation of resource representations. Though the historical and functional aspects of these standards have received significant attention, less focus has been placed on the values associated with such codes. In this study, a critical, historical analysis of ten Anglo-American descriptive catalog codes and surrounding discourse was conducted as an initial step towards uncovering key values associated with this lineage of standards. Two values in particular were found to be highly significant: authority and universalism. Authority is closely tied to notions of power and control, particularly over practice or belief. Efforts to increase control over resources, identities, and viewpoints are all manifestations of the value of authority within descriptive codes. Universalism has guided the widening coverage of descriptive codes with regard to settings and materials, such as the extension of bibliographic standards to non-book resources. Together, authority and universalism represent conventional values focused on facilitating orderly social exchanges. A comparative lack of emphasis on values concerning human welfare and empowerment may be unsurprising, but raises questions concerning the role of human values in knowledge organization standards. Further attention to the values associated with descriptive codes and other knowledge organization standards is important as libraries and other institutions seek to share their resource representation data more widely.
    Content
     Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
  4. Putkey, T.: Using SKOS to express faceted classification on the Semantic Web (2011) 0.05
    0.051098473 = product of:
      0.0681313 = sum of:
        0.009710376 = weight(_text_:information in 311) [ClassicSimilarity], result of:
          0.009710376 = score(doc=311,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.10971737 = fieldWeight in 311, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
        0.044259522 = weight(_text_:standards in 311) [ClassicSimilarity], result of:
          0.044259522 = score(doc=311,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.19697142 = fieldWeight in 311, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
        0.014161401 = product of:
          0.028322803 = sum of:
            0.028322803 = weight(_text_:organization in 311) [ClassicSimilarity], result of:
              0.028322803 = score(doc=311,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.15756798 = fieldWeight in 311, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.03125 = fieldNorm(doc=311)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     This paper looks at the Simple Knowledge Organization System (SKOS) to investigate how a faceted classification can be expressed in RDF and shared on the Semantic Web. Statement of the problem: Faceted classification outlines facets as well as subfacets and facet values. Hierarchical relationships and associative relationships are established in a faceted classification. RDF is used to describe how a specific URI has a relationship to a facet value. Not only does RDF decompose "information into pieces," but by incorporating facet values RDF also gives the URI the hierarchical and associative relationships expressed in the faceted classification. Combining faceted classification and RDF creates more knowledge than if the two stood alone. An application understands the subject-predicate-object relationship in RDF and can display hierarchical and associative relationships based on the object (facet) value. This paper continues to investigate whether the above idea is indeed useful, used, and applicable. If so, how can a faceted classification be expressed in RDF? What would this expression look like? Literature review: This paper used the same articles as A Survey of Faceted Classification: History, Uses, Drawbacks and the Semantic Web (Putkey, 2010). In that paper, appropriate resources were discovered by searching various databases for "faceted classification" and "faceted search," either in the descriptor or title fields. Citations were also followed to find more articles, and the Internet was searched for the same terms. To retrieve the documents about RDF, searches combined "faceted classification" and "RDF," looking for these words in either the descriptor or title.
     Methodology: Based on information from research papers, further research was done on SKOS, on examples of SKOS and shared faceted classifications on the Semantic Web, and on how to express SKOS in RDF/XML. Once confident with these ideas, the author used a faceted taxonomy created in a Vocabulary Design class and encoded it using SKOS. Instead of writing RDF in a program such as Notepad, a thesaurus tool was used to create the taxonomy according to SKOS standards and then export the thesaurus in RDF/XML format. These processes and tools are then analyzed. Results: The initial statement of the problem was simply an extension of the survey paper done earlier in this class. To continue the research, further work was done on SKOS, a standard for expressing thesauri, taxonomies, and faceted classifications so that they can be shared on the Semantic Web.
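     A rough illustration of the encoding step described in the methodology: a facet and one of its values expressed as SKOS resources and serialized to RDF/XML. The sketch below uses rdflib with a hypothetical "Audience" facet and example URIs, none of which are taken from the paper's taxonomy.

```python
# Minimal sketch: one facet and one facet value expressed in SKOS with rdflib,
# then serialized to RDF/XML. The "Audience" facet and all URIs are invented
# for illustration; they are not taken from Putkey's taxonomy.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, SKOS, DCTERMS

EX = Namespace("http://example.org/facets/")
g = Graph()
g.bind("skos", SKOS)

facet = EX["Audience"]             # the facet, modeled as a concept scheme
value = EX["Audience/children"]    # one facet value

g.add((facet, RDF.type, SKOS.ConceptScheme))
g.add((facet, SKOS.prefLabel, Literal("Audience", lang="en")))

g.add((value, RDF.type, SKOS.Concept))
g.add((value, SKOS.prefLabel, Literal("Children", lang="en")))
g.add((value, SKOS.inScheme, facet))
g.add((value, SKOS.topConceptOf, facet))

# A described resource is then linked to the facet value, e.g. via dcterms:subject.
resource = URIRef("http://example.org/resources/42")
g.add((resource, DCTERMS.subject, value))

print(g.serialize(format="xml"))   # RDF/XML, as exported by a thesaurus tool
```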
  5. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.05
    0.04765854 = product of:
      0.09531708 = sum of:
        0.07824052 = weight(_text_:standards in 4553) [ClassicSimilarity], result of:
          0.07824052 = score(doc=4553,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.34819958 = fieldWeight in 4553, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.01707656 = product of:
          0.03415312 = sum of:
            0.03415312 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.03415312 = score(doc=4553,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete, and terminating, i.e., correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  6. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.03
    0.033731185 = product of:
      0.06746237 = sum of:
        0.01213797 = weight(_text_:information in 3373) [ClassicSimilarity], result of:
          0.01213797 = score(doc=3373,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.13714671 = fieldWeight in 3373, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
        0.0553244 = weight(_text_:standards in 3373) [ClassicSimilarity], result of:
          0.0553244 = score(doc=3373,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.24621427 = fieldWeight in 3373, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
      0.5 = coord(2/4)
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, and address the technical challenges, analysis, solutions, and decisions and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archive, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
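     The corpus survey described here is the kind of task usually scripted. A minimal sketch, assuming namespaced EAD 2002 XML files in a local directory (paths and checks are illustrative, not the Harvard project's actual tooling), that flags components lacking identifiers:

```python
# Minimal sketch: survey a directory of EAD 2002 XML finding aids before
# migration and report components (<c>, <c01>..<c12>) lacking an id attribute
# or a <unitid>. Paths and checks are illustrative, not the Harvard tooling.
from pathlib import Path
from lxml import etree

NS = {"ead": "urn:isbn:1-931666-22-9"}  # EAD 2002 namespace
COMPONENTS = ["c"] + [f"c{i:02d}" for i in range(1, 13)]
COMPONENT_XPATH = " | ".join(f"//ead:{name}" for name in COMPONENTS)

def survey(corpus_dir: str) -> None:
    for path in sorted(Path(corpus_dir).glob("*.xml")):
        try:
            tree = etree.parse(str(path))
        except etree.XMLSyntaxError as err:
            print(f"{path.name}: not well-formed ({err})")
            continue
        components = tree.xpath(COMPONENT_XPATH, namespaces=NS)
        missing = [
            c for c in components
            if c.get("id") is None and not c.findall("ead:did/ead:unitid", NS)
        ]
        print(f"{path.name}: {len(components)} components, "
              f"{len(missing)} without id or unitid")

survey("finding_aids/")   # hypothetical directory of EAD files
```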
  7. Voß, J.: Classification of knowledge organization systems with Wikidata (2016) 0.03
    0.03148804 = product of:
      0.12595215 = sum of:
        0.12595215 = sum of:
          0.08496841 = weight(_text_:organization in 3082) [ClassicSimilarity], result of:
            0.08496841 = score(doc=3082,freq=8.0), product of:
              0.17974974 = queryWeight, product of:
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.050415643 = queryNorm
              0.47270393 = fieldWeight in 3082, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.5653565 = idf(docFreq=3399, maxDocs=44218)
                0.046875 = fieldNorm(doc=3082)
          0.04098374 = weight(_text_:22 in 3082) [ClassicSimilarity], result of:
            0.04098374 = score(doc=3082,freq=2.0), product of:
              0.17654699 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050415643 = queryNorm
              0.23214069 = fieldWeight in 3082, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3082)
      0.25 = coord(1/4)
    
    Abstract
     This paper presents a crowd-sourced classification of knowledge organization systems based on the open knowledge base Wikidata. The focus is less on the current result in its rather preliminary form than on the environment and process of categorization in Wikidata and the extraction of KOS from the collaborative database. Benefits and disadvantages are summarized and discussed for application to knowledge organization of other subject areas with Wikidata.
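     A rough sketch of the kind of extraction discussed: querying Wikidata's public SPARQL endpoint for items that are instances or subclasses of a given class. The Q-identifier below is a placeholder, not one used in the paper.

```python
# Minimal sketch: pull items that are instances or subclasses of a given class
# from Wikidata's public SPARQL endpoint. KOS_CLASS is a placeholder Q-id,
# not an identifier taken from the paper.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
KOS_CLASS = "Q0000000"  # placeholder: substitute the class you are studying

query = f"""
SELECT ?item ?itemLabel WHERE {{
  ?item wdt:P31/wdt:P279* wd:{KOS_CLASS} .   # instance of / subclass of (transitive)
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
LIMIT 50
"""

response = requests.get(
    ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "kos-survey-sketch/0.1 (example@example.org)"},
    timeout=60,
)
response.raise_for_status()
for row in response.json()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```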
    Pages
    S.15-22
    Source
     Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  8. Takhirov, N.; Aalberg, T.; Duchateau, F.; Zumer, M.: FRBR-ML: a FRBR-based framework for semantic interoperability (2012) 0.03
    0.028076127 = product of:
      0.056152254 = sum of:
        0.011892734 = weight(_text_:information in 134) [ClassicSimilarity], result of:
          0.011892734 = score(doc=134,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1343758 = fieldWeight in 134, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=134)
        0.044259522 = weight(_text_:standards in 134) [ClassicSimilarity], result of:
          0.044259522 = score(doc=134,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.19697142 = fieldWeight in 134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=134)
      0.5 = coord(2/4)
    
    Abstract
     Metadata related to cultural items such as literature, music and movies is a valuable resource that is currently exploited in many applications and services based on semantic web technologies. A vast amount of such information has been created by memory institutions in the last decades using different standard or ad hoc schemas, and a main challenge is to make this legacy data accessible as reusable semantic data. On one hand, this is a syntactic problem that can be solved by transforming to formats that are compatible with the tools and services used for semantic-aware services. On the other hand, this is a semantic problem. Simply transforming from one format to another does not automatically enable semantic interoperability and legacy data often needs to be reinterpreted as well as transformed. The conceptual model in the Functional Requirements for Bibliographic Records, initially developed as a conceptual framework for library standards and systems, is a major step towards a shared semantic model of the products of artistic and intellectual endeavor of mankind. The model is generally accepted as sufficiently generic to serve as a conceptual framework for a broad range of cultural heritage metadata. Unfortunately, the existing large body of legacy data makes a transition to this model difficult. For instance, most bibliographic data is still only available in various MARC-based formats, which are hard to render into reusable and meaningful semantic data. Making legacy bibliographic data accessible as semantic data is a complex problem that includes interpreting and transforming the information. In this article, we present our work on transforming and enhancing legacy bibliographic information into a representation where the structure and semantics of the FRBR model is explicit.
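     As a rough illustration of the transformation problem described above, a MARC record can be re-expressed as explicit triples. The sketch below uses pymarc and rdflib with a simplified placeholder vocabulary; it is not the FRBR-ML framework itself, and the file name is hypothetical.

```python
# Minimal sketch: re-express MARC bibliographic records as explicit triples
# with a simplified, FRBR-flavored placeholder vocabulary. This illustrates
# the transformation problem only; it is not the FRBR-ML framework.
from pymarc import MARCReader
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF

FRBR = Namespace("http://example.org/frbr/")   # placeholder vocabulary
g = Graph()

with open("records.mrc", "rb") as fh:          # hypothetical MARC file
    for i, record in enumerate(MARCReader(fh)):
        if record is None:                     # skip unreadable records
            continue
        work = URIRef(f"http://example.org/work/{i}")
        manifestation = URIRef(f"http://example.org/manifestation/{i}")

        g.add((work, RDF.type, FRBR.Work))
        g.add((manifestation, RDF.type, FRBR.Manifestation))
        g.add((manifestation, FRBR.embodimentOf, work))

        for field in record.get_fields("245"):                # title statement
            for title in field.get_subfields("a"):
                g.add((work, FRBR.title, Literal(title)))
        for field in record.get_fields("100", "110", "111"):  # creators
            for name in field.get_subfields("a"):
                g.add((work, FRBR.creator, Literal(name)))

print(g.serialize(format="turtle"))
```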
  9. Guerrini, M.: Cataloguing based on bibliographic axiology (2010) 0.03
    0.027315754 = product of:
      0.05463151 = sum of:
        0.017839102 = weight(_text_:information in 2624) [ClassicSimilarity], result of:
          0.017839102 = score(doc=2624,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.20156369 = fieldWeight in 2624, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2624)
        0.036792405 = product of:
          0.07358481 = sum of:
            0.07358481 = weight(_text_:organization in 2624) [ClassicSimilarity], result of:
              0.07358481 = score(doc=2624,freq=6.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.40937364 = fieldWeight in 2624, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2624)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     The article presents the work of Elaine Svenonius, The Intellectual Foundation of Information Organization, translated into Italian and published by Le Lettere of Florence, within the series Pinakes, with the title Il fondamento intellettuale dell'organizzazione dell'informazione. The Intellectual Foundation of Information Organization defines the theoretical aspects of library science, its philosophical basics and principles, and the purposes that must be kept in mind, abstracting from the technology used in a library. The book deals first with information organization and the bibliographic universe, in particular using the bibliographic entities defined in FRBR. It then analyzes all the specific languages by which works and subjects are treated. This work, already acknowledged as a classic, organizes, synthesizes, and makes easily understood the whole complex of knowledge, practices and procedures developed in the last 150 years.
  10. Smiraglia, R.P.: Facets as discourse in knowledge organization : a case study in LISTA (2017) 0.03
    0.026284594 = product of:
      0.05256919 = sum of:
        0.017165681 = weight(_text_:information in 3855) [ClassicSimilarity], result of:
          0.017165681 = score(doc=3855,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 3855, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3855)
        0.035403505 = product of:
          0.07080701 = sum of:
            0.07080701 = weight(_text_:organization in 3855) [ClassicSimilarity], result of:
              0.07080701 = score(doc=3855,freq=8.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.39391994 = fieldWeight in 3855, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3855)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Knowledge Organization Systems (KOSs) use arrays of related concepts to capture the ontological content of a domain; hierarchical structures are typical of such systems. Some KOSs also employ sets of cross-conceptual descriptors that express different dimensions within a domain: facets. The recent increase in the prominence of facets and faceted systems has had a major impact on the intension of the KO domain, and this is visible in the domain's literature. An interesting question is how the discourse surrounding facets in KO and in related domains such as information science might be described. The present paper reports one case study in an ongoing research project to investigate the discourse of facets in KO. In this particular case, the formal current research literature represented by inclusion in the "Library, Information Science & Technology Abstracts, Full Text" (LISTA) database is analyzed to discover aspects of the research front and its ongoing discourse concerning facets. A dataset of 1682 citations was analyzed. Results show that thinking concerning information retrieval and the Semantic Web resides alongside implementation of faceted searching and the growth of faceted thesauri. Faceted classification remains important to the discourse, but the use of facet analysis is linked directly to applied aspects of information science.
    Content
     Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
  11. Zhao, Y.; Ma, F.; Xia, X.: Evaluating the coverage of entities in knowledge graphs behind general web search engines : Poster (2017) 0.02
    0.021993173 = product of:
      0.043986347 = sum of:
        0.008582841 = weight(_text_:information in 3854) [ClassicSimilarity], result of:
          0.008582841 = score(doc=3854,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 3854, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3854)
        0.035403505 = product of:
          0.07080701 = sum of:
            0.07080701 = weight(_text_:organization in 3854) [ClassicSimilarity], result of:
              0.07080701 = score(doc=3854,freq=8.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.39391994 = fieldWeight in 3854, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3854)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Web search engines, such as Google and Bing, are constantly employing results from knowledge organization and various visualization features to improve their search services. A knowledge graph, a large repository of structured knowledge represented by formal languages such as RDF (Resource Description Framework), is used to support the entity search feature of Google and Bing (Demartini, 2016). When a user searches for an entity, such as a person, an organization, or a place, in Google or Bing, it is likely that a knowledge card will be presented in the right sidebar of the search engine result pages (SERPs). For example, when a user searches the entity Benedict Cumberbatch on Google, the knowledge card will show the basic structured information about this person, including his date of birth, height, spouse, parents, and his movies, etc. The knowledge card, which is used to present the result of entity search, is generated from knowledge graphs. Therefore, the quality of knowledge graphs is essential to the performance of entity search. However, studies on the quality of knowledge graphs from the angle of entity coverage are scant in the literature. This study aims to investigate the entity coverage of the knowledge graphs behind Google and Bing.
    Content
     Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
  12. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.02
    0.018423056 = product of:
      0.036846112 = sum of:
        0.016818866 = weight(_text_:information in 3109) [ClassicSimilarity], result of:
          0.016818866 = score(doc=3109,freq=12.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19003606 = fieldWeight in 3109, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3109)
        0.020027246 = product of:
          0.040054493 = sum of:
            0.040054493 = weight(_text_:organization in 3109) [ClassicSimilarity], result of:
              0.040054493 = score(doc=3109,freq=4.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.22283478 = fieldWeight in 3109, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3109)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for the global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of keywords or descriptors to represent and organize information. Implementing imaged navigation in OPACs offers multiple advantages derived from rethinking the OPAC anew, since the aim is to share concepts within the subject authority data. Images, carrying linguistic objects, permeate inter-social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. The iOPAC embodies efforts focused on conceptual levels, as expected from librarians. Imaged interfaces are more intuitive since users do not need specific training for information retrieval; they offer easier comprehension of indexing codes, greater conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, which positively affects social and cultural interoperability. The imagetic methodology opens R&D fields for more suitable interfaces that take into consideration users with specific needs such as deafness and illiteracy. This methodology raises questions about the paradigm of the primacy of orality in information systems and paves the way to a legitimacy of multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinarity across neurosciences, linguistics and information sciences would provide desirable competencies for further investigations into the nature of cognitive processes in information organization and classification while developing assistive KOS for individuals with communication problems, such as autism and deafness.
    Source
     Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  13. Shen, M.; Liu, D.-R.; Huang, Y.-S.: Extracting semantic relations to enrich domain ontologies (2012) 0.02
    0.018399216 = product of:
      0.036798432 = sum of:
        0.012015978 = weight(_text_:information in 267) [ClassicSimilarity], result of:
          0.012015978 = score(doc=267,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.13576832 = fieldWeight in 267, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=267)
        0.024782453 = product of:
          0.049564905 = sum of:
            0.049564905 = weight(_text_:organization in 267) [ClassicSimilarity], result of:
              0.049564905 = score(doc=267,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.27574396 = fieldWeight in 267, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=267)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Domain ontologies facilitate the organization, sharing and reuse of domain knowledge, and enable various vertical domain applications to operate successfully. Most methods for automatically constructing ontologies focus on taxonomic relations, such as is-kind-of and is-part-of relations. However, much of the domain-specific semantics is ignored. This work proposes a semi-unsupervised approach for extracting semantic relations from domain-specific text documents. The approach effectively utilizes text mining and existing taxonomic relations in domain ontologies to discover candidate keywords that can represent semantic relations. A preliminary experiment on the natural science domain (Taiwan K9 education) indicates that the proposed method yields valuable recommendations. This work enriches domain ontologies by adding distilled semantics.
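     A much-simplified sketch of the general idea (co-occurrence-based candidate extraction between known ontology terms), not the authors' semi-unsupervised method; the terms and sentences are invented for illustration.

```python
# Much-simplified sketch of co-occurrence-based candidate extraction: for each
# pair of known ontology terms found in the same sentence, collect the words
# between them as candidate relation keywords. Terms and sentences are invented;
# this is not the semi-unsupervised method proposed in the paper.
import re
from collections import Counter
from itertools import combinations

ontology_terms = ["erosion", "sediment", "river", "glacier"]
sentences = [
    "River erosion transports sediment downstream.",
    "A glacier causes erosion as it moves over rock.",
]

candidates: dict = {}
for sentence in sentences:
    tokens = re.findall(r"[a-z]+", sentence.lower())
    present = [t for t in ontology_terms if t in tokens]
    for a, b in combinations(present, 2):
        i, j = sorted((tokens.index(a), tokens.index(b)))
        between = [w for w in tokens[i + 1:j] if w not in ontology_terms]
        candidates.setdefault((a, b), Counter()).update(between)

for pair, words in candidates.items():
    print(pair, words.most_common(3))
```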
    Source
    Journal of Intelligent Information Systems
  14. Chaves Guimarães, J.A.; Pinho, F.A.; Martínez-Ávila, D.; Campbell, D.G.; Nascimento, F.A.: Knowledge organization and the power to name : LGBTQ terminology and the polyhedron of empowerment (2017) 0.02
    0.017594539 = product of:
      0.035189077 = sum of:
        0.006866273 = weight(_text_:information in 3873) [ClassicSimilarity], result of:
          0.006866273 = score(doc=3873,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0775819 = fieldWeight in 3873, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3873)
        0.028322803 = product of:
          0.056645606 = sum of:
            0.056645606 = weight(_text_:organization in 3873) [ClassicSimilarity], result of:
              0.056645606 = score(doc=3873,freq=8.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.31513596 = fieldWeight in 3873, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3873)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper uses Hope Olson's concept of "the power to name" to explore the terminological practices of the LGBTQ community in the Cariri region of Brazil in the years between 2006 and 2013. LGBTQ communities can seize back the "power to name," traditionally exerted by a heteronormative society upon marginalized groups, by organizing their cultural and practical knowledge from within, and by exercising the power to name themselves and their specific domains and cultural practices. The study showed that knowledge organization - the act of defining entities and categories and assigning specific names to them - is a gesture of self-empowerment on many different levels. The "power of self-naming" in this LGBTQ community is a polyhedron in which some facets are frequent, such as the power to empower or affirm an identity. On the one hand, the names and categories break through gender, geographical and temporal specificity to embrace terms, names, and idioms drawn from a range of different countries, traditions, languages, and time periods. On the other hand, these names and categories work to reinforce and affirm the geographical and cultural specificity of the Cariri region itself, embedding its pride and self-affirmation within the varied languages and heteronormative history of Portuguese colonization in that region. In selecting terms and categories to name, organize and celebrate their identities, the LGBTQ people of Cariri have taken the power to name: not as information intermediaries striving for objectivity and neutrality, but as committed members of a marginalized but vital community.
    Content
     Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA. Also available at: http://www.iskocus.org/NASKO2017papers/NASKO2017_paper_32.pdf.
  15. Frické, M.: Logical division (2016) 0.02
    0.01680845 = product of:
      0.0336169 = sum of:
        0.008582841 = weight(_text_:information in 3183) [ClassicSimilarity], result of:
          0.008582841 = score(doc=3183,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 3183, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3183)
        0.025034059 = product of:
          0.050068118 = sum of:
            0.050068118 = weight(_text_:organization in 3183) [ClassicSimilarity], result of:
              0.050068118 = score(doc=3183,freq=4.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.27854347 = fieldWeight in 3183, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3183)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Division is obviously important to Knowledge Organization. Typically, an organizational infrastructure might acknowledge three types of connecting relationships: class hierarchies, where some classes are subclasses of others, partitive hierarchies, where some items are parts of others, and instantiation, where some items are members of some classes (see Z39.19 ANSI/NISO 2005 as an example). The first two of these involve division (the third, instantiation, does not involve division). Logical division would usually be a part of hierarchical classification systems, which, in turn, are central to shelving in libraries, to subject classification schemes, to controlled vocabularies, and to thesauri. Partitive hierarchies, and partitive division, are often essential to controlled vocabularies, thesauri, and subject tagging systems. Partitive hierarchies also relate to the bearers of information; for example, a journal would typically have its component articles as parts and, in turn, they might have sections as their parts, and, of course, components might be arrived at by partitive division (see Tillett 2009 as an illustration). Finally, verbal division, disambiguating homographs, is basic to controlled vocabularies. Thus Division is a broad and relevant topic. This article, though, is going to focus on Logical Division.
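     The three connecting relationships named above can be illustrated with a tiny data structure; a minimal sketch with invented example entities:

```python
# Minimal sketch of the three connecting relationships distinguished above:
# class hierarchy (subclass_of), partitive hierarchy (part_of), and
# instantiation (instance_of). The example entities are invented.
subclass_of = {"periodical": "serial", "serial": "document"}
part_of = {"article": "journal issue", "journal issue": "journal"}
instance_of = {"Nature vol. 545": "journal issue"}

def ancestors(item: str, relation: dict) -> list:
    """Follow a hierarchical relation upward to its root. Only the first two
    relations arise from division; instantiation does not."""
    chain = []
    while item in relation:
        item = relation[item]
        chain.append(item)
    return chain

print(ancestors("periodical", subclass_of))   # ['serial', 'document']
print(ancestors("article", part_of))          # ['journal issue', 'journal']
print(instance_of["Nature vol. 545"])         # 'journal issue'
```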
    Source
    ISKO Encyclopedia of Knowledge Organization, ed. by B. Hjoerland. [http://www.isko.org/cyclo/logical_division]
  16. Choi, I.: Visualizations of cross-cultural bibliographic classification : comparative studies of the Korean Decimal Classification and the Dewey Decimal Classification (2017) 0.02
    0.01680845 = product of:
      0.0336169 = sum of:
        0.008582841 = weight(_text_:information in 3869) [ClassicSimilarity], result of:
          0.008582841 = score(doc=3869,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 3869, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3869)
        0.025034059 = product of:
          0.050068118 = sum of:
            0.050068118 = weight(_text_:organization in 3869) [ClassicSimilarity], result of:
              0.050068118 = score(doc=3869,freq=4.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.27854347 = fieldWeight in 3869, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3869)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     The changes in KO systems induced by sociocultural influences may include changes in both classificatory principles and cultural features. The proposed study will examine the Korean Decimal Classification (KDC)'s adaptation of the Dewey Decimal Classification (DDC) by comparing the two systems. This case manifests the sociocultural influences on KOSs in a cross-cultural context. The study therefore aims at an in-depth investigation of sociocultural influences by situating a KOS in a cross-cultural environment and examining the dynamics between two classification systems designed to organize information resources in two distinct sociocultural contexts. As a preceding stage of the comparison, a descriptive analysis was conducted of the changes that result from the meeting of different sociocultural features. The analysis aims to identify variations between the two schemes by comparing the knowledge structures of the two classifications, in terms of the quantity of class numbers that represent concepts and their relationships in each of the individual main classes. The most effective analytic strategy for showing the patterns of the comparison was visualization of the similarities and differences between the two systems. Increasing or decreasing tendencies in each class through the various editions were analyzed. Comparing the compositions of the main classes and the distributions of concepts in the KDC and DDC discloses the differences in their knowledge structures empirically. This phase of quantitative analysis and visualization techniques generates empirical evidence leading to interpretation.
    Content
     Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
  17. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.02
    0.01680845 = product of:
      0.0336169 = sum of:
        0.008582841 = weight(_text_:information in 5997) [ClassicSimilarity], result of:
          0.008582841 = score(doc=5997,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 5997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.025034059 = product of:
          0.050068118 = sum of:
            0.050068118 = weight(_text_:organization in 5997) [ClassicSimilarity], result of:
              0.050068118 = score(doc=5997,freq=4.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.27854347 = fieldWeight in 5997, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5997)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is taken into account. Three thesauri are chosen for this aim: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that would help to improve their integration with the Semantic Web are discussed.
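     The comparison rests on well-known correspondences between traditional thesaurus relationships and SKOS properties; a minimal sketch of that core mapping (a simplification of the full ISO 25964-to-SKOS correspondence, not the paper's own analysis):

```python
# Minimal sketch: the core correspondences between traditional thesaurus
# relationships (standardized in ISO 25964) and SKOS properties, a
# simplification of the full published ISO 25964-to-SKOS mapping.
THESAURUS_TO_SKOS = {
    "BT": "skos:broader",    # broader term (concept-to-concept)
    "NT": "skos:narrower",   # narrower term (concept-to-concept)
    "RT": "skos:related",    # related term (concept-to-concept)
    "UF": "skos:altLabel",   # non-preferred term (takes a literal value)
    "SN": "skos:scopeNote",  # scope note (takes a literal value)
}

def to_skos(source: str, relation: str, target: str) -> str:
    """Render one concept-to-concept thesaurus relationship as a Turtle-like
    triple; UF and SN would take literal values instead of concept URIs."""
    return f"ex:{source} {THESAURUS_TO_SKOS[relation]} ex:{target} ."

print(to_skos("Thesauri", "BT", "KnowledgeOrganizationSystems"))
print(to_skos("Thesauri", "RT", "ControlledVocabularies"))
```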
  18. Clark, J.A.; Young, S.W.H.: Building a better book in the browser : using Semantic Web technologies and HTML5 (2015) 0.02
    0.016597321 = product of:
      0.066389285 = sum of:
        0.066389285 = weight(_text_:standards in 2116) [ClassicSimilarity], result of:
          0.066389285 = score(doc=2116,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.29545712 = fieldWeight in 2116, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=2116)
      0.25 = coord(1/4)
    
    Abstract
    The library as place and service continues to be shaped by the legacy of the book. The book itself has evolved in recent years, with various technologies vying to become the next dominant book form. In this article, we discuss the design and development of our prototype software from Montana State University (MSU) Library for presenting books inside of web browsers. The article outlines the contextual background and technological potential for publishing traditional book content through the web using open standards. Our prototype demonstrates the application of HTML5, structured data with RDFa and Schema.org markup, linked data components using JSON-LD, and an API-driven data model. We examine how this open web model impacts discovery, reading analytics, eBook production, and machine-readability for libraries considering how to unite software development and publishing.
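     As a rough sketch of the kind of structured data the article describes, a Schema.org Book description can be emitted as JSON-LD for embedding in an HTML5 page; the bibliographic values below are placeholders, not taken from the MSU prototype.

```python
# Minimal sketch: emit Schema.org Book metadata as JSON-LD, the kind of
# structured-data block an HTML5 book page can embed. Title, author, and URL
# values are placeholders, not taken from the MSU prototype.
import json

book = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "An Example Monograph",
    "author": {"@type": "Person", "name": "A. N. Author"},
    "datePublished": "2015",
    "inLanguage": "en",
    "url": "https://example.org/books/example-monograph",
}

# Wrapped in a <script type="application/ld+json"> element for the browser.
script_block = (
    '<script type="application/ld+json">\n'
    + json.dumps(book, indent=2)
    + "\n</script>"
)
print(script_block)
```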
  19. Dunsire, G.: Towards an internationalization of RDA management and development (2016) 0.02
    0.016597321 = product of:
      0.066389285 = sum of:
        0.066389285 = weight(_text_:standards in 2956) [ClassicSimilarity], result of:
          0.066389285 = score(doc=2956,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.29545712 = fieldWeight in 2956, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=2956)
      0.25 = coord(1/4)
    
    Abstract
     This paper discusses the progress that has been made to internationalize the management and development of RDA: Resource Description and Access. RDA has been designed for an international environment, and is used in a number of countries worldwide. The paper describes the impact that international adoption of RDA had on the arrangements for its governance, including a new structure for ensuring international participation. It discusses the progress that has been made to broaden input into the processes for its development, including working groups, liaisons with related standards organizations, and cataloguing hackathons. The paper is based on desk research of published resources, including websites, blogs, and conference presentations. The paper concludes that the intention to internationalize RDA is serious and has made good use of its opportunities, although threats to its success remain.
  20. Delsey, T.: ¬The Making of RDA (2016) 0.02
    0.015395639 = product of:
      0.030791279 = sum of:
        0.01029941 = weight(_text_:information in 2946) [ClassicSimilarity], result of:
          0.01029941 = score(doc=2946,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 2946, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2946)
        0.02049187 = product of:
          0.04098374 = sum of:
            0.04098374 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
              0.04098374 = score(doc=2946,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23214069 = fieldWeight in 2946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2946)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The author revisits the development of RDA from its inception in 2005 through to its initial release in 2010. The development effort is set in the context of an evolving digital environment that was transforming both the production and dissemination of information resources and the technologies used to create, store, and access data describing those resources. The author examines the interplay between strategic commitments to align RDA with new conceptual models, emerging database structures, and metadata developments in allied communities, on the one hand, and compatibility with AACR2 legacy databases on the other. Aspects of the development effort examined include the structuring of RDA as a resource description language, organizing the new standard as a working tool, and refining guidelines and instructions for recording RDA data.
    Date
    17. 5.2016 19:22:40