Search (14 results, page 1 of 1)

  • × theme_ss:"Semantische Interoperabilität"
  • × type_ss:"a"
  • × year_i:[2020 TO 2030}
  1. Smith, A.: Simple Knowledge Organization System (SKOS) (2022) 0.02
    0.020749543 = product of:
      0.0968312 = sum of:
        0.054539118 = weight(_text_:wide in 1094) [ClassicSimilarity], result of:
          0.054539118 = score(doc=1094,freq=4.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.4153836 = fieldWeight in 1094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.036238287 = weight(_text_:web in 1094) [ClassicSimilarity], result of:
          0.036238287 = score(doc=1094,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.37471575 = fieldWeight in 1094, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.0060537956 = weight(_text_:information in 1094) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=1094,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 1094, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
      0.21428572 = coord(3/14)
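    The explain tree above can be recomputed by hand. In Lucene's ClassicSimilarity, each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = √termFreq; the term weights are summed and multiplied by the coordination factor (here 3 of 14 query clauses matched). A minimal sketch using only the constants printed in the tree:

```python
import math

# Recompute the ClassicSimilarity score of result 1 (doc 1094).
# All constants are taken verbatim from the explain tree above.
QUERY_NORM = 0.029633347
FIELD_NORM = 0.046875            # fieldNorm(doc=1094)

def term_weight(freq, idf):
    query_weight = idf * QUERY_NORM                      # idf * queryNorm
    field_weight = math.sqrt(freq) * idf * FIELD_NORM    # tf * idf * fieldNorm
    return query_weight * field_weight

w_wide = term_weight(4.0, 4.4307585)   # _text_:wide
w_web  = term_weight(6.0, 3.2635105)   # _text_:web
w_info = term_weight(2.0, 1.7554779)   # _text_:information

score = (w_wide + w_web + w_info) * (3 / 14)   # coord(3/14)
print(round(score, 9))   # ≈ 0.020749543
```

    The same recipe reproduces every other explain tree on this page; only the idf, freq, fieldNorm and coord values change per document.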
    
    Abstract
    SKOS (Simple Knowledge Organization System) is a recommendation from the World Wide Web Consortium (W3C) for representing controlled vocabularies, taxonomies, thesauri, classifications, and similar systems for organizing and indexing information as linked data elements in the Semantic Web, using the Resource Description Framework (RDF). The SKOS data model is centered on "concepts", which can have preferred and alternate labels in any language as well as other metadata, and which are identified by addresses on the World Wide Web (URIs). Concepts are grouped into hierarchies through "broader" and "narrower" relations, with "top concepts" at the broadest conceptual level. Concepts are also organized into "concept schemes", also identified by URIs. Other relations, mappings, and groupings are also supported. This article discusses the history of the development of SKOS and provides notes on adoption, uses, and limitations.
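    The data model the abstract describes (concepts with per-language preferred and alternate labels, "broader"/"narrower" hierarchies, concept schemes, everything identified by URIs) can be sketched with plain Python triples. The concepts, URIs and labels below are hypothetical examples; only the property names (prefLabel, altLabel, broader, inScheme, topConceptOf) and the namespace URI are those defined by the W3C SKOS recommendation:

```python
# Toy SKOS graph as a set of (subject, predicate, object) triples.
SKOS = "http://www.w3.org/2004/02/skos/core#"

scheme = "http://example.org/scheme/zoology"    # a concept scheme (URI)
animal = "http://example.org/concept/animal"    # top concept
cat    = "http://example.org/concept/cat"       # narrower concept

triples = {
    (scheme, "rdf:type", SKOS + "ConceptScheme"),
    (animal, "rdf:type", SKOS + "Concept"),
    (cat,    "rdf:type", SKOS + "Concept"),
    (animal, SKOS + "topConceptOf", scheme),
    (cat,    SKOS + "inScheme", scheme),
    (cat,    SKOS + "prefLabel", ("cat", "en")),       # one preferred label
    (cat,    SKOS + "prefLabel", ("Katze", "de")),     #   per language
    (cat,    SKOS + "altLabel",  ("domestic cat", "en")),
    (cat,    SKOS + "broader",   animal),              # hierarchical relation
}

def narrower(concept):
    """skos:narrower is simply the inverse of skos:broader."""
    return {s for (s, p, o) in triples if p == SKOS + "broader" and o == concept}

print(narrower(animal))
```

    A real application would use an RDF library and serialize such a graph as Turtle or RDF/XML, but the triple structure is the whole of the model.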
  2. Gabler, S.: Thesauri - a Toolbox for Information Retrieval (2023) 0.02
    0.01632566 = product of:
      0.07618641 = sum of:
        0.044148326 = weight(_text_:bibliothek in 114) [ClassicSimilarity], result of:
          0.044148326 = score(doc=114,freq=2.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.36288103 = fieldWeight in 114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0625 = fieldNorm(doc=114)
        0.008071727 = weight(_text_:information in 114) [ClassicSimilarity], result of:
          0.008071727 = score(doc=114,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1551638 = fieldWeight in 114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=114)
        0.023966359 = weight(_text_:retrieval in 114) [ClassicSimilarity], result of:
          0.023966359 = score(doc=114,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.26736724 = fieldWeight in 114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=114)
      0.21428572 = coord(3/14)
    
    Source
    Bibliothek: Forschung und Praxis. 47(2023) H.2, S.189-199
  3. Rocha Souza, R.; Lemos, D.: Knowledge organization systems for the representation of multimedia resources on the Web : a comparative analysis (2020) 0.01
    0.0050917473 = product of:
      0.03564223 = sum of:
        0.029588435 = weight(_text_:web in 5993) [ClassicSimilarity], result of:
          0.029588435 = score(doc=5993,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.3059541 = fieldWeight in 5993, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5993)
        0.0060537956 = weight(_text_:information in 5993) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=5993,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 5993, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5993)
      0.14285715 = coord(2/14)
    
    Abstract
    The lack of standardization in the production, organization and dissemination of information in documentation centers and similar institutions, resulting from the digitization of collections and their availability on the internet, has called for integration efforts. The sheer availability of multimedia content has fostered the development of many distinct and, most of the time, independent metadata standards for its description. This study presents and compares the existing metadata standards, vocabularies and ontologies for multimedia annotation, and offers a synthetic overview of their main strengths and weaknesses, aiding efforts toward semantic integration and enhancing the findability of available multimedia resources on the web. We also aim to unveil the characteristics that could, should, and are perhaps not being highlighted in the characterization of multimedia resources.
  4. Marcondes, C.H.: Towards a vocabulary to implement culturally relevant relationships between digital collections in heritage institutions (2020) 0.00
    0.003446667 = product of:
      0.024126668 = sum of:
        0.017435152 = weight(_text_:web in 5757) [ClassicSimilarity], result of:
          0.017435152 = score(doc=5757,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 5757, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5757)
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 5757) [ClassicSimilarity], result of:
              0.020074548 = score(doc=5757,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 5757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5757)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    Cultural heritage institutions are publishing their digital collections over the web as LOD. This is a new step in the patrimonialization and curatorial processes developed by such institutions. Many of these collections are thematically overlapping and complementary. Frequently, objects in these collections present culturally relevant relationships, such as a book about a painting, or a draft or sketch of a famous painting. LOD technology enables such heritage records to be interlinked, achieving interoperability and adding value to digital collections, thus empowering heritage institutions. One aim of this research is to characterize such culturally relevant relationships and organize them in a vocabulary. Use cases and examples of relationships between objects, suggested by curators or mentioned in the literature and in conceptual models such as FRBR/LRM, CIDOC CRM and RiC-CM, were collected and used as examples of, or inspiration for, culturally relevant relationships. The relationships identified are collated and compared to find those with the same or similar meaning, then synthesized and normalized. A set of thirty-three culturally relevant relationships is identified and formalized as a LOD property vocabulary to be used by digital curators to interlink digital collections. The results presented are provisional and a starting point to be discussed, tested, and enhanced.
    Date
    4. 3.2020 14:22:41
  5. Sartini, B.; Erp, M. van; Gangemi, A.: Marriage is a peach and a chalice : modelling cultural symbolism on the Semantic Web (2021) 0.00
    0.002588449 = product of:
      0.036238287 = sum of:
        0.036238287 = weight(_text_:web in 557) [ClassicSimilarity], result of:
          0.036238287 = score(doc=557,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.37471575 = fieldWeight in 557, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=557)
      0.071428575 = coord(1/14)
    
    Abstract
    In this work, we fill the gap in the Semantic Web in the context of Cultural Symbolism. Building upon earlier work (Sartini et al. 2021), we introduce the Simulation Ontology, an ontology that models the background knowledge of symbolic meanings, developed by combining concepts taken from the authoritative theory of Simulacra and Simulations by Jean Baudrillard with symbolic structures and content taken from "Symbolism: a Comprehensive Dictionary" by Steven Olderr. We re-engineered the symbolic knowledge already present in heterogeneous resources by converting it into our ontology schema to create HyperReal, the first knowledge graph completely dedicated to cultural symbolism. A first experiment run on the knowledge graph is presented to show the potential of quantitative research on symbolism.
    Theme
    Semantic Web
  6. Candela, G.: ¬An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.00
    0.0023472693 = product of:
      0.016430885 = sum of:
        0.0070627616 = weight(_text_:information in 997) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=997,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=997)
        0.009368123 = product of:
          0.028104367 = sum of:
            0.028104367 = weight(_text_:22 in 997) [ClassicSimilarity], result of:
              0.028104367 = score(doc=997,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.2708308 = fieldWeight in 997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=997)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Date
    22. 6.2023 18:23:31
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.866-878
  7. Rodrigues Barbosa, E.; Godoy Viera, A.F.: Relações semânticas e interoperabilidade em tesauros representados em SKOS : uma revisao sistematica da literatura (2022) 0.00
    0.0021134596 = product of:
      0.029588435 = sum of:
        0.029588435 = weight(_text_:web in 254) [ClassicSimilarity], result of:
          0.029588435 = score(doc=254,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.3059541 = fieldWeight in 254, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=254)
      0.071428575 = coord(1/14)
    
    Abstract
    Objective: This study aims to understand how the Simple Knowledge Organization System data model and its extension models have been used to promote interoperability with other vocabularies and to refine the semantic relations in thesauri on the web. Methodology: Documentary research is carried out on the reference guides of the data models used to represent thesauri on the web. Results: The data models have been used to represent terms and their linguistic variations, the relationships between groups and subgroups of concepts from an intra-vocabulary perspective, and the relationships between concepts of distinct vocabularies from an inter-vocabulary perspective. Conclusions: The use of the Simple Knowledge Organization System and its extension models contributes to a better structuring of concepts in thesauri. The extension models are appropriate for representing compound equivalence relationships and for structuring groups and subgroups of concepts in thesauri.
  8. Steeg, F.; Pohl, A.: ¬Ein Protokoll für den Datenabgleich im Web am Beispiel von OpenRefine und der Gemeinsamen Normdatei (GND) (2021) 0.00
    0.0017612164 = product of:
      0.02465703 = sum of:
        0.02465703 = weight(_text_:web in 367) [ClassicSimilarity], result of:
          0.02465703 = score(doc=367,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 367, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=367)
      0.071428575 = coord(1/14)
    
    Abstract
    Authority data play an important role for the quality of subject indexing of bibliographic and archival resources. One concrete goal of subject indexing is, for example, that all works about Hermann Hesse can be found uniformly. Authority data offer a solution here: during indexing, the GND number 11855042X is used consistently for Hermann Hesse. The result is higher-quality indexing, above all in the sense of consistency and unambiguity, and, as a consequence, better findability. When such entities are linked with one another, e.g. Hermann Hesse with one of his works, a knowledge graph emerges, such as the one Google uses for indexing the content of the web (Singhal 2012). The development of the Google Knowledge Graph and the protocol presented here are historically connected: OpenRefine was originally developed as Google Refine, and its functionality for matching against external data sources (reconciliation) was originally developed for integrating Freebase, one of the data sources of the Google Knowledge Graph. Freebase was later integrated into Wikidata. Google Refine was already being used for reconciliation against authority data, such as the Library of Congress Subject Headings (Hooland et al. 2013).
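    At its core, the reconciliation exchange described above is a JSON round trip: the client batches queries against a service endpoint and receives ranked candidate matches. A hedged sketch of that shape (the response values are mocked for illustration, except the GND number 11855042X for Hermann Hesse, which is quoted in the abstract; the exact field names follow our reading of the Reconciliation Service API):

```python
import json

# Batched reconciliation queries, as an OpenRefine client would send them.
queries = {"q0": {"query": "Hermann Hesse", "limit": 3}}
request_body = json.dumps(queries)   # transmitted as the "queries" parameter

# Mocked service response with ranked candidates (illustrative values).
response = {
    "q0": {
        "result": [
            {"id": "11855042X", "name": "Hesse, Hermann", "score": 100.0, "match": True},
            {"id": "000000000", "name": "Hesse, H.",      "score": 41.5,  "match": False},
        ]
    }
}

def best_match(resp, qid):
    """Pick the highest-scoring candidate for one query, if any."""
    candidates = resp[qid]["result"]
    return max(candidates, key=lambda c: c["score"]) if candidates else None

print(best_match(response, "q0")["id"])
```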
  9. Binding, C.; Gnoli, C.; Tudhope, D.: Migrating a complex classification scheme to the semantic web : expressing the Integrative Levels Classification using SKOS RDF (2021) 0.00
    0.0017612164 = product of:
      0.02465703 = sum of:
        0.02465703 = weight(_text_:web in 600) [ClassicSimilarity], result of:
          0.02465703 = score(doc=600,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 600, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=600)
      0.071428575 = coord(1/14)
    
    Theme
    Semantic Web
  10. Schreur, P.E.: ¬The use of Linked Data and artificial intelligence as key elements in the transformation of technical services (2020) 0.00
    0.0017435154 = product of:
      0.024409214 = sum of:
        0.024409214 = weight(_text_:web in 125) [ClassicSimilarity], result of:
          0.024409214 = score(doc=125,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=125)
      0.071428575 = coord(1/14)
    
    Abstract
    Library Technical Services have benefited from numerous stimuli. Although initially looked at with suspicion, transitions such as the move from catalog cards to the MARC formats have proven enormously helpful to libraries and their patrons. Linked data and Artificial Intelligence (AI) hold the same promise. Through the conversion of metadata surrogates (cataloging) to linked open data, libraries can represent their resources on the Semantic Web. But in order to provide some form of controlled access to unstructured data, libraries must reach beyond traditional cataloging to new tools such as AI to provide consistent access to a growing world of full-text resources.
  11. Rölke, H.; Weichselbraun, A.: Ontologien und Linked Open Data (2023) 0.00
    0.001245368 = product of:
      0.017435152 = sum of:
        0.017435152 = weight(_text_:web in 788) [ClassicSimilarity], result of:
          0.017435152 = score(doc=788,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 788, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=788)
      0.071428575 = coord(1/14)
    
    Abstract
    The term ontology originally comes from metaphysics, the branch of philosophy concerned with understanding the fundamental structure and principles of reality. Ontologies there deal with the question of which things exist at the most fundamental level, how they can be structured, and in which relationships they stand to one another. In information science, by contrast, ontologies are used to formalize the vocabulary for describing domains of knowledge. The goal is that all actors working in these domains use the same concepts and terms, to enable frictionless collaboration without misunderstandings. The Dublin Core Metadata Initiative, for example, defined 15 core elements that can be used to describe electronic resources and media. Each element is described by a unique label (for example, identifier) and an associated conception that fixes the meaning of this label as exactly as possible. According to the Dublin Core ontology, for instance, an identifier must uniquely identify a document with respect to an associated catalog. Depending on the catalog, an ISBN (catalog of books), ISSN (catalog of journals), URL (web), DOI (publication database), etc. would therefore qualify as an identifier.
  12. Cheng, Y.-Y.; Xia, Y.: ¬A systematic review of methods for aligning, mapping, merging taxonomies in information sciences (2023) 0.00
    5.0960475E-4 = product of:
      0.0071344664 = sum of:
        0.0071344664 = weight(_text_:information in 1029) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=1029,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 1029, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1029)
      0.071428575 = coord(1/14)
    
    Abstract
    Purpose: The purpose of this study is to provide a systematic literature review of taxonomy alignment methods in information science, exploring the common research pipeline and its characteristics.
    Design/methodology/approach: The authors implement a five-step systematic literature review process relating to taxonomy alignment. They take a knowledge organization system (KOS) perspective, specifically examining the KOS level of "taxonomies".
    Findings: They synthesize the matching dimensions of 28 taxonomy alignment studies in terms of taxonomy input, approach and output. For the input dimension, they develop three characteristics: tree shapes, variable names and symmetry; for approach: methodology, unit of matching, comparison type and relation type; for output: the number of merged solutions and whether the original taxonomies are preserved in the solutions.
    Research limitations/implications: The main research implications of this study are threefold: (1) to enhance understanding of the characteristics of a taxonomy alignment work; (2) to provide a novel categorization of taxonomy alignment approaches into natural language processing, logic-based and heuristic-based approaches; (3) to provide a methodological guideline on the must-include characteristics for future taxonomy alignment research.
    Originality/value: There is no existing comprehensive review of the alignment of "taxonomies", and no other mapping survey has discussed the comparison from a KOS perspective. Using a KOS lens is critical to understanding the broader picture of what other similar systems of organization exist, and enables taxonomies to be defined more precisely.
  13. Lee, S.: Pidgin metadata framework as a mediator for metadata interoperability (2021) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 654) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=654,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=654)
      0.071428575 = coord(1/14)
    
    Abstract
    A pidgin metadata framework based on the concept of pidgin metadata is proposed to complement the limitations of existing approaches to metadata interoperability and to achieve more reliable metadata interoperability. The framework consists of three layers, with a hierarchical structure, and reflects the semantic and structural characteristics of various metadata. Layer 1 performs both an external function, serving as an anchor for semantic association between metadata elements, and an internal function, providing semantic categories that can encompass detailed elements. Layer 2 is an arbitrary layer composed of substantial elements from existing metadata and performs a function in which different metadata elements describing the same or similar aspects of information resources are associated with the semantic categories of Layer 1. Layer 3 implements the semantic relationships between Layer 1 and Layer 2 through the Resource Description Framework syntax. With this structure, the pidgin metadata framework can establish the criteria for semantic connection between different elements and fully reflect the complexity and heterogeneity among various metadata. Additionally, it is expected to provide a bibliographic environment that can achieve more reliable metadata interoperability than existing approaches by securing the communication between metadata.
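    The three-layer mediation idea can be sketched as a toy mapping: Layer 1 provides broad semantic categories, Layer 2 anchors elements from existing schemes to those categories, and Layer 3 would express the links in RDF. The category and element names below are our own illustrative assumptions, not the vocabulary proposed in the article:

```python
# Layer 1: broad semantic categories serving as anchors.
LAYER1 = {"Title", "Agent", "Date"}

# Layer 2: elements from different metadata schemes mapped to a Layer 1 anchor
# (hypothetical selection; dc: = Dublin Core, marc: = MARC field/subfield).
LAYER2 = {
    "dc:title":   "Title",
    "marc:245$a": "Title",
    "dc:creator": "Agent",
    "marc:100$a": "Agent",
}

def interoperable(elem_a, elem_b):
    """Two elements from different schemes can be associated if they
    share the same Layer 1 semantic category."""
    cat_a, cat_b = LAYER2.get(elem_a), LAYER2.get(elem_b)
    return cat_a is not None and cat_a == cat_b

print(interoperable("dc:title", "marc:245$a"))   # a Dublin Core / MARC match
print(interoperable("dc:title", "dc:creator"))   # different categories
```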
  14. Ahmed, M.; Mukhopadhyay, M.; Mukhopadhyay, P.: Automated knowledge organization : AI ML based subject indexing system for libraries (2023) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 977) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=977,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 977, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=977)
      0.071428575 = coord(1/14)
    
    Source
    DESIDOC journal of library and information technology. 43(2023) no.1, S.45-54