Search (198 results, page 1 of 10)

  • Active filter: theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.18
    Score breakdown (Lucene ClassicSimilarity): 0.1807 = coord(3/5) × (0.0716 for "3a" + 0.2147 for "2f" + 0.0149 for "data"); the tf-idf computation is sketched below.
    
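    The breakdown above follows Lucene's ClassicSimilarity (tf-idf) scoring: each matching term contributes tf × idf² × queryNorm × fieldNorm, the per-term contributions are summed, and the sum is scaled by a coordination factor for the fraction of query clauses that matched. The terms "3a" and "2f" are simply the URL-escaped characters %3A and %2F that were indexed along with the record's links. A minimal Python sketch that reproduces the number above from the values in the breakdown (an illustration only, not the search engine's actual code):

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm):
    """One term's contribution in Lucene ClassicSimilarity:
    queryWeight (idf * queryNorm) times fieldWeight (sqrt(freq) * idf * fieldNorm)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.04505818   # value reported in the breakdown above

# Per-term weights for doc 400; "3a"/"2f" are the URL-escaped %3A and %2F from indexed links.
w_3a   = classic_similarity(freq=2.0, idf=8.478011,  query_norm=QUERY_NORM, field_norm=0.046875)
w_2f   = classic_similarity(freq=2.0, idf=8.478011,  query_norm=QUERY_NORM, field_norm=0.046875)
w_data = classic_similarity(freq=2.0, idf=3.1620505, query_norm=QUERY_NORM, field_norm=0.046875)

# Inner coord(1/3) and coord(1/2) factors, then the overall coord(3/5): 3 of 5 clauses matched.
score = (w_3a * (1 / 3) + w_2f + w_data * (1 / 2)) * (3 / 5)
print(round(score, 4))    # ~0.1807, matching the explain output up to rounding
```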
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop (ancestor-to-descendant) link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy (see the sketch after this entry).
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
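    The hierarchy growth step described in the abstract above can be pictured as adding candidate parent-child links one at a time and rejecting any link that would close a cycle. A hedged sketch under that reading (the function names, the relation format, and the toy data are illustrative, not taken from the paper):

```python
def reachable(children, start, target):
    """Depth-first check: can `target` be reached from `start` via parent->child links?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(children.get(node, ()))
    return False

def grow_hierarchy(candidate_links):
    """Add parent->child links in order, skipping any link that would create a cycle."""
    children = {}
    for parent, child in candidate_links:
        if reachable(children, child, parent):   # the child is already an ancestor of the parent
            continue                             # adding this link would close a cycle
        children.setdefault(parent, []).append(child)
    return children

# Toy input: the conflicting third link is rejected, keeping the hierarchy acyclic.
links = [("classification", "svm"), ("classification", "knn"), ("svm", "classification")]
print(grow_hierarchy(links))   # {'classification': ['svm', 'knn']}
```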
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.10
    Score breakdown (Lucene ClassicSimilarity): 0.1000 = coord(2/5) × (0.0477 for "3a" + 0.2024 for "2f")
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.08
    Score breakdown (Lucene ClassicSimilarity): 0.0763 = coord(2/5) × (0.0477 for "3a" + 0.1431 for "2f")
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  4. OWL Web Ontology Language Guide (2004) 0.06
    Score breakdown (Lucene ClassicSimilarity): 0.0581 = coord(2/5) × (0.1329 for "readable" + 0.0124 for "data")
    
    Abstract
    The World Wide Web as it is currently constituted resembles a poorly mapped geography. Our insight into the documents and capabilities available is based on keyword searches, abetted by clever use of document connectivity and usage patterns. The sheer mass of this data is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web-accessible resources. These descriptions must be in addition to the human-readable versions of that information. The OWL Web Ontology Language is intended to provide a language that can be used to describe the classes, and the relations between them, that are inherent in Web documents and applications. This document demonstrates the use of the OWL language to: formalize a domain by defining classes and properties of those classes; define individuals and assert properties about them; and reason about these classes and individuals to the degree permitted by the formal semantics of the OWL language. The sections are organized to present an incremental definition of a set of classes, properties and individuals, beginning with the fundamentals and proceeding to more complex language components.
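    As a concrete illustration of the three uses listed in the abstract (declaring classes and properties, asserting facts about individuals, and producing input for a reasoner), a tiny OWL graph can be assembled with the Python rdflib library; the wine-themed namespace and terms here are invented for the example, not taken from the Guide:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/wine#")   # illustrative namespace, not from the Guide
g = Graph()
g.bind("ex", EX)

# 1. Formalize a domain: classes and a property with a domain and range.
g.add((EX.Wine, RDF.type, OWL.Class))
g.add((EX.Winery, RDF.type, OWL.Class))
g.add((EX.hasMaker, RDF.type, OWL.ObjectProperty))
g.add((EX.hasMaker, RDFS.domain, EX.Wine))
g.add((EX.hasMaker, RDFS.range, EX.Winery))

# 2. Define individuals and assert properties about them.
g.add((EX.ChateauMargaux, RDF.type, EX.Wine))
g.add((EX.MargauxEstate, RDF.type, EX.Winery))
g.add((EX.ChateauMargaux, EX.hasMaker, EX.MargauxEstate))
g.add((EX.ChateauMargaux, RDFS.label, Literal("Chateau Margaux")))

# 3. The serialized graph can be handed to an OWL reasoner to draw the permitted inferences.
print(g.serialize(format="turtle"))
```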
  5. Cui, H.: Competency evaluation of plant character ontologies against domain literature (2010) 0.04
    Score breakdown (Lucene ClassicSimilarity): 0.0437 = coord(2/5) × (0.0940 for "readable" + 0.0153 for "22")
    
    Abstract
    Specimen identification keys are still the most commonly created tools used by systematic biologists to access biodiversity information. Creating identification keys requires analyzing and synthesizing large amounts of information from specimens and their descriptions and is a very labor-intensive and time-consuming activity. Automating the generation of identification keys from text descriptions becomes a highly attractive text-mining application in the biodiversity domain. Fine-grained semantic annotation of morphological descriptions of organisms is a necessary first step in generating keys from text. Machine-readable ontologies are needed in this process because most biological characters are only implied (i.e., not stated) in descriptions. The immediate question to ask is: how well do existing ontologies support semantic annotation and automated key generation? With the intention to either select an existing ontology or develop a unified ontology based on existing ones, this paper evaluates the coverage, semantic consistency, and inter-ontology agreement of a biodiversity character ontology and three plant glossaries that may be turned into ontologies. The coverage and semantic consistency of the ontology/glossaries are checked against the authoritative domain literature, namely, Flora of North America and Flora of China. The evaluation results suggest that more work is needed to improve the coverage and interoperability of the ontology/glossaries: more concepts need to be added, and careful work is needed to improve the semantic consistency. The method used in this paper to evaluate the ontology/glossaries can also be used to propose new candidate concepts from the domain literature and suggest appropriate definitions (a rough coverage-check sketch follows this entry).
    Date
    1. 6.2010 9:55:22
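    The competency evaluation described in the Cui abstract above ultimately reduces to asking what fraction of the character terms used in the domain literature an ontology or glossary covers, and which terms are missing. A rough sketch of such a coverage check (the toy glossary, the sample description, and the simple tokenizer are placeholders, not the paper's actual method):

```python
import re

def term_coverage(vocabulary, literature_text):
    """Share of distinct word tokens in the text that the vocabulary covers,
    plus the uncovered tokens as candidate new concepts."""
    tokens = set(re.findall(r"[a-z]+", literature_text.lower()))
    vocab = {term.lower() for term in vocabulary}
    covered = tokens & vocab
    return len(covered) / len(tokens), sorted(tokens - vocab)

glossary = {"leaf", "ovate", "margin", "serrate", "petiole"}             # toy glossary
description = "Leaves ovate, margin serrate or entire, petiole short."  # toy Flora-style sentence
ratio, missing = term_coverage(glossary, description)
print(round(ratio, 2), missing)   # 0.5 ['entire', 'leaves', 'or', 'short']
```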
  6. Gómez-Pérez, A.; Corcho, O.: Ontology languages for the Semantic Web (2015) 0.04
    Score breakdown (Lucene ClassicSimilarity): 0.0426 = coord(2/5) × (0.0940 for "readable" + 0.0124 for "data")
    
    Abstract
    Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms. They can also generate natural language, integrate intelligent information, provide semantic-based access to the Internet, and extract information from texts, in addition to being used in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way" [1]. This definition does not differ too much from the one used for defining an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization" [2]. In fact, new ontology-based applications and knowledge architectures are developing for this new Web. A common claim for all of these approaches is the need for languages to represent the semantic information that this Web requires, solving the heterogeneous data exchange in this heterogeneous environment. Here, we do not decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs. The authors analyze the most representative ontology languages created for the Web and compare them using a common framework.
  7. Stuart, D.: Practical ontologies for information professionals (2016) 0.04
    Score breakdown (Lucene ClassicSimilarity): 0.0355 = coord(2/5) × (0.0658 for "readable" + 0.0230 for "data")
    
    Abstract
    Practical Ontologies for Information Professionals provides an accessible introduction and exploration of ontologies and demonstrates their value to information professionals. More data and information are being created than ever before. Ontologies, formal representations of knowledge with rich semantic relationships, have become increasingly important in the context of today's information overload and data deluge. The publishing and sharing of explicit explanations for a wide variety of conceptualizations, in a machine-readable format, has the power both to improve information retrieval and to discover new knowledge. Information professionals are key contributors to the development of new, and increasingly useful, ontologies. Practical Ontologies for Information Professionals provides an accessible introduction to the following:
    • defining the concept of ontologies and why they are increasingly important to information professionals
    • ontologies and the semantic web
    • existing ontologies, such as RDF, RDFS, SKOS, and OWL2
    • adopting and building ontologies, showing how to avoid repetition of work and how to build a simple ontology
    • interrogating ontologies for reuse
    • the future of ontologies and the role of the information professional in their development and use.
    This book will be useful reading for information professionals in libraries and other cultural heritage institutions who work with digitalization projects, cataloguing and classification and information retrieval. It will also be useful to LIS students who are new to the field.
    Content
    Chapter 1. What is an ontology? - Introduction; The data deluge and information overload; Defining terms; Knowledge organization systems and ontologies; Ontologies, metadata and linked data; What can an ontology do?; Ontologies and information professionals; Alternatives to ontologies; The aims of this book; The structure of this book.
    Chapter 2. Ontologies and the semantic web - Introduction; The semantic web and linked data; Resource Description Framework (RDF); Classes, subclasses and properties; The semantic web stack; Embedded RDF; Alternative semantic visions; Libraries and the semantic web; Other cultural heritage institutions and the semantic web; Other organizations and the semantic web; Conclusion.
    Chapter 3. Existing ontologies - Introduction; Ontology documentation; Ontologies for representing ontologies; Ontologies for libraries; Upper ontologies; Cultural heritage data models; Ontologies for the web; Conclusion.
    Chapter 4. Adopting ontologies - Introduction; Reusing ontologies: application profiles and data models; Identifying ontologies; The ideal ontology discovery tool; Selection criteria; Conclusion.
    Chapter 5. Building ontologies - Introduction; Approaches to building an ontology; The twelve steps; Ontology development example: Bibliometric Metrics Ontology element set; Conclusion.
    Chapter 6. Interrogating ontologies - Introduction; Interrogating ontologies for reuse; Interrogating a knowledge base; Understanding ontology use; Conclusion.
    Chapter 7. The future of ontologies and the information professional - Introduction; The future of ontologies for knowledge discovery; The future role of library and information professionals; The practical development of ontologies.
  8. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.03
    Score breakdown (Lucene ClassicSimilarity): 0.0331 = coord(2/5) × (0.0302 for "bibliographic" + 0.0527 for "data")
    
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as the Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
    Key recommendations of the report are:
    • that library leaders identify sets of data as possible candidates for early exposure as Linked Data and foster a discussion about Open Data and rights;
    • that library standards bodies increase library participation in Semantic Web standardization, develop library data standards that are compatible with Linked Data, and disseminate best-practice design patterns tailored to library Linked Data;
    • that data and systems designers design enhanced user services based on Linked Data capabilities, create URIs for the items in library datasets, develop policies for managing RDF vocabularies and their URIs, and express library data by re-using or mapping to existing Linked Data vocabularies (a sketch of such re-use follows this entry);
    • that librarians and archivists preserve Linked Data element sets and value vocabularies and apply library experience in curation and long-term preservation to Linked Data datasets.
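    One way to act on the recommendation to data and systems designers (mint URIs for catalogue items and re-use existing vocabularies) is sketched below with Python's rdflib and the Dublin Core terms vocabulary; the record URI and subject URI are fictional, and the bibliographic details are only an example:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# Fictional URI minted for one catalogue record, plus an existing vocabulary being re-used.
ITEM = URIRef("http://example-library.org/id/record/12345")
BIBO = Namespace("http://purl.org/ontology/bibo/")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("bibo", BIBO)

g.add((ITEM, RDF.type, BIBO.Book))
g.add((ITEM, DCTERMS["title"], Literal("Practical Ontologies for Information Professionals")))
g.add((ITEM, DCTERMS["creator"], Literal("Stuart, D.")))
g.add((ITEM, DCTERMS["issued"], Literal("2016")))
# A subject would normally point at a published authority URI; this one is made up.
g.add((ITEM, DCTERMS["subject"], URIRef("http://example.org/authorities/ontologies")))

print(g.serialize(format="turtle"))
```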
  9. Sebastian, Y.: Literature-based discovery by learning heterogeneous bibliographic information networks (2017) 0.03
    Score breakdown (Lucene ClassicSimilarity): 0.0310 = coord(2/5) × (0.0675 for "bibliographic" + 0.0100 for "data")
    
    Abstract
    Literature-based discovery (LBD) research aims at finding effective computational methods for predicting previously unknown connections between clusters of research papers from disparate research areas. Existing methods encompass two general approaches. The first approach searches for these unknown connections by examining the textual contents of research papers. In addition to the existing textual features, the second approach incorporates structural features of the scientific literature, such as citation structures. These approaches, however, have not considered research papers' latent bibliographic metadata structures as important features that can be used for predicting previously unknown relationships between them. This thesis investigates a new graph-based LBD method that exploits the latent bibliographic metadata connections between pairs of research papers. The heterogeneous bibliographic information network is proposed as an efficient graph-based data structure for modeling the complex relationships between these metadata. In contrast to previous approaches, this method seamlessly combines textual and citation information in the form of path-based metadata features for predicting future co-citation links between research papers from disparate research fields. The results reported in this thesis provide evidence that the method is effective for reconstructing the historical literature-based discovery hypotheses. This thesis also investigates the effects of semantic modeling and topic modeling on the performance of the proposed method. For semantic modeling, a general-purpose word sense disambiguation technique is proposed to reduce the lexical ambiguity in the title and abstract of research papers. The experimental results suggest that the reduced lexical ambiguity did not necessarily lead to a better performance of the method. This thesis discusses some of the possible contributing factors to these results. Finally, topic modeling is used for learning the latent topical relations between research papers. The learned topic model is incorporated into the heterogeneous bibliographic information network graph and allows new predictive features to be learned. The results in this thesis suggest that topic modeling improves the performance of the proposed method by increasing the overall accuracy for predicting the future co-citation links between disparate research papers. (A toy sketch of such a heterogeneous network follows this entry.)
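    The heterogeneous bibliographic information network described above can be modelled as a typed graph over papers, authors, and keywords, with meta-path counts (e.g. paper-author-paper) serving as link-prediction features for future co-citations. A small networkx sketch under those assumptions (node names, types, and the two meta-paths are illustrative, not taken from the thesis):

```python
import networkx as nx

G = nx.Graph()
# Typed nodes: papers, authors, and keywords (venues etc. could be added the same way).
G.add_nodes_from(["p1", "p2", "p3"], kind="paper")
G.add_nodes_from(["alice", "bob"], kind="author")
G.add_nodes_from(["ontology", "retrieval"], kind="keyword")
G.add_edges_from([("p1", "alice"), ("p2", "alice"), ("p2", "bob"), ("p3", "bob"),
                  ("p1", "ontology"), ("p2", "ontology"), ("p3", "retrieval")])

def metapath_count(graph, source, target, middle_kind):
    """Number of length-2 paths source - x - target whose middle node has the given type."""
    return sum(1 for x in graph.neighbors(source)
               if graph.nodes[x]["kind"] == middle_kind and graph.has_edge(x, target))

# Meta-path features for a candidate future co-citation link between two papers.
features = {
    "paper-author-paper":  metapath_count(G, "p1", "p2", "author"),
    "paper-keyword-paper": metapath_count(G, "p1", "p2", "keyword"),
}
print(features)   # {'paper-author-paper': 1, 'paper-keyword-paper': 1}
```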
  10. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.03
    Score breakdown (Lucene ClassicSimilarity): 0.0298 = coord(2/5) × (0.0658 for "readable" + 0.0087 for "data")
    
    Content
    # Community action on individual ontologies
    We would like to call on all ontology maintainers and consumers to help us increase the average star rating of the web of ontologies by fixing and improving its ontologies. You can easily check an ontology at https://archivo.dbpedia.org/info. If you are an ontology maintainer, just release a patched version - Archivo will automatically pick it up 8 hours later. If you are a user of an ontology and want your consumed data to become FAIRer, please inform the ontology maintainer about the issues found with Archivo. The star rating is very basic and only requires fixing small things; however, the impact on technical and legal usability can be immense.
    # How does Archivo work?
    Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates, and archives the latest snapshot persistently on the DBpedia Databus.
    # Archivo's mission
    Archivo's mission is to improve FAIRness (findability, accessibility, interoperability, and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline; it is fully automated, machine-readable, and enforces interoperability with its star rating.
    • Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology.
    • Ontology users can better find, access and re-use ontologies. Snapshots are persisted in case the original is not reachable anymore, adding a layer of reliability to the decentral web of ontologies.
  11. Zhang, L.: Linking information through function (2014) 0.02
    Score breakdown (Lucene ClassicSimilarity): 0.0241 = coord(2/5) × (0.0453 for "bibliographic" + 0.0149 for "data")
    
    Abstract
    How information resources can be meaningfully related has been addressed in contexts from bibliographic entries to hyperlinks and, more recently, linked data. The genre structure and relationships among genre structure constituents shed new light on organizing information by purpose or function. This study examines the relationships among a set of functional units previously constructed in a taxonomy, each of which is a chunk of information embedded in a document and is distinct in terms of its communicative function. Through a card-sort study, relationships among functional units were identified with regard to their occurrence and function. The findings suggest that a group of functional units can be identified, collocated, and navigated by particular relationships. Understanding how functional units are related to each other is significant in linking information pieces in documents to support finding, aggregating, and navigating information in a distributed information environment.
  12. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.02
    Score breakdown (Lucene ClassicSimilarity): 0.0241 = coord(1/5) × (0.0897 for "data" + 0.0305 for "22")
    
    Abstract
    Purpose: The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions.
    Design/methodology/approach: This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions.
    Findings: Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data.
    Originality/value: This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
  13. Koutsomitropoulos, D.A.; Solomou, G.D.; Alexopoulos, A.D.; Papatheodorou, T.S.: Semantic metadata interoperability and inference-based querying in digital repositories (2009) 0.02
    Score breakdown (Lucene ClassicSimilarity): 0.0226 = coord(1/5) × 0.1128 for "readable"
    
    Abstract
    Metadata applications have evolved over time into highly structured "islands of information" about digital resources, often bearing a strong semantic interpretation. Scarcely, however, are these semantics being communicated in machine-readable and understandable ways. At the same time, the process of transforming the implied metadata knowledge into explicit Semantic Web descriptions can be problematic and is not always evident. In this article we take up the well-established Dublin Core metadata standard as well as other metadata schemata that often appear in digital repository set-ups, and suggest a proper Semantic Web OWL ontology. In this process the authors cope with discrepancies and incompatibilities, indicative of such attempts, in novel ways. Moreover, we show the potential and necessity of this approach by demonstrating inferences on the resulting ontology, instantiated with actual metadata records. The authors conclude by presenting a working prototype that provides for inference-based querying on top of digital repositories.
  14. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.02
    Score breakdown (Lucene ClassicSimilarity): 0.0222 = coord(1/5) × (0.0498 for "data" + 0.0610 for "22")
    
    Abstract
    This publication describes the connections between knowledge-rich, concept-oriented terminologies, ontologies, big data, and artificial intelligence.
    Date
    13.12.2017 14:17:22
  15. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.02
    Score breakdown (Lucene ClassicSimilarity): 0.0220 = coord(1/5) × (0.0732 for "data" + 0.0366 for "22")
    
    Abstract
    Defined in 1999 and paired with XML, the Resource Description Framework (RDF) has been cast as an RDF Schema, producing data that is well-structured but not validated, permitting certain illogical relationships. When stakeholders convened in 2014 to consider solutions to the data validation challenge, a W3C working group proposed Resource Shapes and Shape Expressions to describe the properties expected for an RDF node. Resistance rose from concerns about data and schema reuse, key principles in RDF. Ideally, data types and properties are designed for broad use, but they are increasingly adopted with local restrictions for specific purposes. Resource Shapes are commonly treated as record classes, standing in for data structures but losing flexibility for later reuse. Of the various solutions to the resulting tensions, the concept of record classes may be the most reasonable basis for agreement, satisfying stakeholders' objectives while allowing for variations with constraints. (A hand-rolled validation sketch follows this entry.)
    Footnote
    Contribution to a special section "Linked data and the charm of weak semantics".
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
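    The "record class" reading of Resource Shapes amounts to declaring which properties a node of a given RDF type must carry and then checking instance data against that declaration. A minimal, hand-rolled sketch of such a check with rdflib (the shape dictionary stands in for a Resource Shape or ShEx schema; all names and data are invented):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")

data = Graph()
data.parse(format="turtle", data="""
    @prefix ex: <http://example.org/> .
    ex:alice a ex:Person ; ex:name "Alice" .
    ex:bob   a ex:Person .
""")

# A toy "record class": every ex:Person must carry exactly one ex:name.
shapes = {EX.Person: {EX.name: (1, 1)}}   # property -> (min, max) cardinality

def validate(graph, shapes):
    """Return (node, property, actual count) for every cardinality violation."""
    problems = []
    for cls, props in shapes.items():
        for node in graph.subjects(RDF.type, cls):
            for prop, (lo, hi) in props.items():
                count = len(list(graph.objects(node, prop)))
                if not lo <= count <= hi:
                    problems.append((node, prop, count))
    return problems

print(validate(data, shapes))   # reports that ex:bob has 0 values for ex:name
```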
  16. Broughton, V.: Language related problems in the construction of faceted terminologies and their automatic management (2008) 0.02
    Score breakdown (Lucene ClassicSimilarity): 0.0201 = coord(2/5) × (0.0377 for "bibliographic" + 0.0124 for "data")
    
    Content
    The paper describes current work on the generation of a thesaurus format from the schedules of the Bliss Bibliographic Classification 2nd edition (BC2). The practical problems that occur in moving from a concept-based approach to a terminological approach cluster around issues of vocabulary control that are not fully addressed in a systematic structure. These difficulties can be exacerbated within domains in the humanities because large numbers of culture-specific terms may need to be accommodated in any thesaurus. The ways in which these problems can be resolved within the context of a semi-automated approach to thesaurus generation have consequences for the management of classification data in the source vocabulary. The way in which the vocabulary is marked up for the purpose of machine manipulation is described, and some of the implications for editorial policy are discussed and examples given. The value of the classification notation as a language-independent representation and mapping tool should not be sacrificed in such an exercise.
  17. Broughton, V.: Facet analysis as a tool for modelling subject domains and terminologies (2011) 0.02
    Score breakdown (Lucene ClassicSimilarity): 0.0201 = coord(2/5) × (0.0377 for "bibliographic" + 0.0124 for "data")
    
    Abstract
    Facet analysis is proposed as a general theory of knowledge organization, with an associated methodology that may be applied to the development of terminology tools in a variety of contexts and formats. Faceted classifications originated as a means of representing complexity in semantic content that facilitates logical organization and effective retrieval in a physical environment. This is achieved through meticulous analysis of concepts, their structural and functional status (based on fundamental categories), and their inter-relationships. These features provide an excellent basis for the general conceptual modelling of domains, and for the generation of KOS other than systematic classifications. This is demonstrated by the adoption of a faceted approach to many web search and visualization tools, and by the emergence of a facet based methodology for the construction of thesauri. Current work on the Bliss Bibliographic Classification (Second Edition) is investigating the ways in which the full complexity of faceted structures may be represented through encoded data, capable of generating intellectually and mechanically compatible forms of indexing tools from a single source. It is suggested that a number of research questions relating to the Semantic Web could be tackled through the medium of facet analysis.
  18. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017) 0.02
    Score breakdown (Lucene ClassicSimilarity): 0.0201 = coord(2/5) × (0.0377 for "bibliographic" + 0.0124 for "data")
    
    Abstract
    The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination (triangulation) of these different science maps. In this paper we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method applied individually. The outcomes from the three different approaches can be associated with each other and systematically interpreted to provide insights into the complex multidisciplinary structure of the field of water research.
  19. Branch, F.; Arias, T.; Kennah, J.; Phillips, R.; Windleharth, T.; Lee, J.H.: Representing transmedia fictional worlds through ontology (2017) 0.02
    Score breakdown (Lucene ClassicSimilarity): 0.0201 = coord(2/5) × (0.0377 for "bibliographic" + 0.0124 for "data")
    
    Abstract
    Currently, there is no structured data standard for representing elements commonly found in transmedia fictional worlds. Although there are websites dedicated to individual universes, the information found on these sites separates out the various formats, concentrates only on the bibliographic aspects of the material, and is only searchable with full text. We have created an ontological model that will allow various user groups interested in transmedia to search for and retrieve the information contained in these worlds based upon their structure. We conducted a domain analysis and user studies based on the contents of Harry Potter, Lord of the Rings, the Marvel Universe, and Star Wars in order to build a new model using the Web Ontology Language (OWL) and an artificial intelligence reasoning engine. This model can infer connections between transmedia properties such as characters, elements of power, items, places, events, and so on. It will facilitate better search and retrieval of the information contained within these vast story universes for all users interested in them. The result of this project is an OWL ontology reflecting real user needs based upon user research, which is intuitive for users and can be used by artificial intelligence systems.
  20. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.02
    Score breakdown (Lucene ClassicSimilarity): 0.0188 = coord(1/5) × (0.0422 for "data" + 0.0518 for "22")
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    LCSH
    Communication in science / Data processing
    Subject
    Communication in science / Data processing

Languages

  • e 169
  • d 24
  • pt 3
  • f 1

Types

  • a 143
  • el 60
  • m 12
  • x 10
  • n 6
  • s 6
  • p 2
  • r 2
