Search (62 results, page 1 of 4)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Putkey, T.: Using SKOS to express faceted classification on the Semantic Web (2011) 0.03
    0.031837597 = product of:
      0.09551279 = sum of:
        0.09551279 = weight(_text_:title in 311) [ClassicSimilarity], result of:
          0.09551279 = score(doc=311,freq=4.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3481261 = fieldWeight in 311, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
      0.33333334 = coord(1/3)
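
    For reference, the tree above is a standard Lucene "explain" trace using ClassicSimilarity (TF-IDF). The short Python sketch below recomputes the reported score for this result from the factors listed in the tree; the constants are copied from the output above, and only the arithmetic is reconstructed (a sketch, not the search engine's code).

      # Recompute the ClassicSimilarity score shown in the explain tree above.
      # All numeric inputs are copied from the tree; only the arithmetic is illustrated.
      import math

      freq = 4.0                # termFreq of the "title" query term in doc 311
      idf = 5.570018            # idf(docFreq=457, maxDocs=44218), consistent with 1 + ln(44218 / (457 + 1))
      query_norm = 0.049257044  # queryNorm
      field_norm = 0.03125      # fieldNorm(doc=311), encodes field length
      coord = 1.0 / 3.0         # coord(1/3): one of three query clauses matched

      tf = math.sqrt(freq)                  # 2.0 = tf(freq=4.0)
      query_weight = idf * query_norm       # 0.27436262 = queryWeight
      field_weight = tf * idf * field_norm  # 0.3481261  = fieldWeight
      score = query_weight * field_weight * coord
      print(score)                          # ~0.031837597, the value reported for this result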
    
    Abstract
    This paper looks at the Simple Knowledge Organization System (SKOS) to investigate how a faceted classification can be expressed in RDF and shared on the Semantic Web.
    Statement of the Problem: Faceted classification outlines facets as well as subfacets and facet values. Hierarchical relationships and associative relationships are established in a faceted classification. RDF is used to describe how a specific URI has a relationship to a facet value. Not only does RDF decompose "information into pieces," but by incorporating facet values RDF also gives the URI the hierarchical and associative relationships expressed in the faceted classification. Combining faceted classification and RDF creates more knowledge than if the two stood alone. An application understands the subject-predicate-object relationship in RDF and can display hierarchical and associative relationships based on the object (facet) value. This paper continues to investigate whether the above idea is indeed useful, used, and applicable. If so, how can a faceted classification be expressed in RDF? What would this expression look like?
    Literature Review: This paper used the same articles as the paper A Survey of Faceted Classification: History, Uses, Drawbacks and the Semantic Web (Putkey, 2010). In that paper, appropriate resources were discovered by searching various databases for "faceted classification" and "faceted search," either in the descriptor or title fields. Citations were also followed to find further articles, and the Internet was searched for the same terms. To retrieve the documents about RDF, searches combined "faceted classification" and "RDF," looking for these words in either the descriptor or title.
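
    The questions raised above (how a faceted classification might be expressed in RDF, and what that expression would look like) can be made concrete with a minimal Python/rdflib sketch. The facet "Material", its value "Wood", and the example namespace below are invented for illustration; they are not taken from Putkey's paper.

      # Minimal sketch: a facet and one of its values as SKOS concepts, plus a resource
      # assigned that facet value. Namespace and names are illustrative only.
      from rdflib import Graph, Namespace, URIRef, Literal
      from rdflib.namespace import RDF, SKOS, DCTERMS

      EX = Namespace("http://example.org/facets/")
      g = Graph()
      g.bind("skos", SKOS)
      g.bind("ex", EX)

      # The facet itself
      g.add((EX.Material, RDF.type, SKOS.Concept))
      g.add((EX.Material, SKOS.prefLabel, Literal("Material", lang="en")))

      # A facet value with a hierarchical (broader) relationship to its facet
      g.add((EX.Wood, RDF.type, SKOS.Concept))
      g.add((EX.Wood, SKOS.prefLabel, Literal("Wood", lang="en")))
      g.add((EX.Wood, SKOS.broader, EX.Material))

      # An associative relationship between facet values
      g.add((EX.Wood, SKOS.related, EX.Carpentry))

      # A described resource (URI) linked to the facet value
      doc = URIRef("http://example.org/items/chair-42")
      g.add((doc, DCTERMS.subject, EX.Wood))

      print(g.serialize(format="turtle"))
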
  2. Giunchiglia, F.; Dutta, B.; Maltese, V.: From knowledge organization to knowledge representation (2014) 0.03
    0.028140724 = product of:
      0.08442217 = sum of:
        0.08442217 = weight(_text_:title in 1369) [ClassicSimilarity], result of:
          0.08442217 = score(doc=1369,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3077029 = fieldWeight in 1369, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1369)
      0.33333334 = coord(1/3)
    
    Abstract
    So far, within the library and information science (LIS) community, knowledge organization (KO) has developed its own very successful solutions to document search, allowing for the classification, indexing and search of millions of books. However, current KO solutions are limited in expressivity, as they only support queries by document properties, e.g., by title, author and subject. In parallel, within the artificial intelligence and Semantic Web communities, knowledge representation (KR) has developed very powerful and expressive techniques, which, via the use of ontologies, support queries by any entity property (e.g., the properties of the entities described in a document). However, KR has not yet scaled to the level of KO, mainly because of the lack of a precise and scalable entity specification methodology. In this paper we present DERA, a new methodology inspired by the faceted approach as introduced in KO, that retains all the advantages of KR and compensates for the limitations of KO. DERA guarantees at the same time quality, extensibility, scalability and effectiveness in search.
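
    The contrast drawn above, document-property queries in KO versus entity-property queries in KR, can be illustrated with a small rdflib/SPARQL sketch. The data, property names and namespace are invented for illustration; this is not the DERA methodology itself.

      # Sketch: the same document, retrievable by a property of the entity it describes
      # (a KR-style query), not only by document fields. All names are illustrative.
      from rdflib import Graph, Namespace, Literal
      from rdflib.namespace import RDF, DCTERMS

      EX = Namespace("http://example.org/kr/")
      g = Graph()

      # Document-level metadata (what classic KO indexing typically records)
      g.add((EX.doc1, RDF.type, EX.Document))
      g.add((EX.doc1, DCTERMS.title, Literal("Lake Garda water quality report")))
      g.add((EX.doc1, DCTERMS.subject, EX.LakeGarda))

      # Entity-level knowledge about the thing the document describes
      g.add((EX.LakeGarda, RDF.type, EX.Lake))
      g.add((EX.LakeGarda, EX.locatedIn, EX.Italy))

      # Entity-property query: documents whose subject is a lake located in Italy
      q = """
      SELECT ?doc WHERE {
        ?doc <http://purl.org/dc/terms/subject> ?entity .
        ?entity a <http://example.org/kr/Lake> ;
                <http://example.org/kr/locatedIn> <http://example.org/kr/Italy> .
      }
      """
      for row in g.query(q):
          print(row.doc)   # -> http://example.org/kr/doc1
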
  3. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017) 0.03
    0.028140724 = product of:
      0.08442217 = sum of:
        0.08442217 = weight(_text_:title in 3437) [ClassicSimilarity], result of:
          0.08442217 = score(doc=3437,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3077029 = fieldWeight in 3437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3437)
      0.33333334 = coord(1/3)
    
    Abstract
    The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination (a triangulation) of these different science maps. In this paper we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method applied individually. The outcomes from the three different approaches can be associated with each other and systematically interpreted to provide insights into the complex multidisciplinary structure of the field of water research.
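
    As a toy illustration of one of the three ingredients named above, the sketch below counts shared-author-keyword (SAK) co-occurrences between journals. The records and field names are invented; this is not the authors' actual pipeline.

      # Toy SAK sketch: count how many author keywords each pair of journals shares.
      # Records are invented for illustration only.
      from collections import defaultdict
      from itertools import combinations

      records = [
          {"journal": "Water Research",    "keywords": {"groundwater", "nitrate", "modeling"}},
          {"journal": "Hydrology Journal", "keywords": {"groundwater", "runoff"}},
          {"journal": "Water Policy",      "keywords": {"governance", "runoff"}},
      ]

      # Map each keyword to the set of journals whose papers use it
      journals_by_keyword = defaultdict(set)
      for rec in records:
          for kw in rec["keywords"]:
              journals_by_keyword[kw].add(rec["journal"])

      # Count shared keywords per journal pair
      shared = defaultdict(int)
      for journals in journals_by_keyword.values():
          for j1, j2 in combinations(sorted(journals), 2):
              shared[(j1, j2)] += 1

      for pair, n in sorted(shared.items()):
          print(pair, n)   # e.g. ('Hydrology Journal', 'Water Research') 1
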
  4. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.03
    0.026077747 = product of:
      0.07823324 = sum of:
        0.07823324 = product of:
          0.23469973 = sum of:
            0.23469973 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.23469973 = score(doc=400,freq=2.0), product of:
                0.41760176 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049257044 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  5. Sebastian, Y.: Literature-based discovery by learning heterogeneous bibliographic information networks (2017) 0.02
    0.022512581 = product of:
      0.06753774 = sum of:
        0.06753774 = weight(_text_:title in 535) [ClassicSimilarity], result of:
          0.06753774 = score(doc=535,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.24616233 = fieldWeight in 535, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.03125 = fieldNorm(doc=535)
      0.33333334 = coord(1/3)
    
    Abstract
    Literature-based discovery (LBD) research aims at finding effective computational methods for predicting previously unknown connections between clusters of research papers from disparate research areas. Existing methods encompass two general approaches. The first approach searches for these unknown connections by examining the textual contents of research papers. In addition to the existing textual features, the second approach incorporates structural features of the scientific literature, such as citation structures. These approaches, however, have not considered research papers' latent bibliographic metadata structures as important features that can be used for predicting previously unknown relationships between them. This thesis investigates a new graph-based LBD method that exploits the latent bibliographic metadata connections between pairs of research papers. The heterogeneous bibliographic information network is proposed as an efficient graph-based data structure for modeling the complex relationships between these metadata. In contrast to previous approaches, this method seamlessly combines textual and citation information in the form of path-based metadata features for predicting future co-citation links between research papers from disparate research fields. The results reported in this thesis provide evidence that the method is effective for reconstructing historical literature-based discovery hypotheses. This thesis also investigates the effects of semantic modeling and topic modeling on the performance of the proposed method. For semantic modeling, a general-purpose word sense disambiguation technique is proposed to reduce the lexical ambiguity in the titles and abstracts of research papers. The experimental results suggest that the reduced lexical ambiguity did not necessarily lead to better performance of the method. This thesis discusses some of the possible contributing factors to these results. Finally, topic modeling is used for learning the latent topical relations between research papers. The learned topic model is incorporated into the heterogeneous bibliographic information network graph and allows new predictive features to be learned. The results in this thesis suggest that topic modeling improves the performance of the proposed method by increasing the overall accuracy for predicting future co-citation links between disparate research papers.
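
    A minimal sketch of the kind of heterogeneous bibliographic information network described above: papers, authors and keywords as typed nodes, with toy meta-path counts (paper-author-paper, paper-keyword-paper) as candidate features for co-citation link prediction. The data, the networkx representation and the chosen meta-paths are illustrative assumptions, not the thesis's implementation.

      # Sketch: a heterogeneous bibliographic graph with typed nodes, plus toy
      # meta-path counts as candidate link-prediction features. Illustrative only.
      import networkx as nx

      G = nx.Graph()
      for p in ("P1", "P2", "P3"):
          G.add_node(p, kind="paper")
      G.add_node("A1", kind="author")
      G.add_node("K1", kind="keyword")

      G.add_edge("P1", "A1", rel="written_by")
      G.add_edge("P2", "A1", rel="written_by")
      G.add_edge("P1", "K1", rel="has_keyword")
      G.add_edge("P3", "K1", rel="has_keyword")

      def metapath_count(g, src, dst, kinds):
          """Count simple paths from src to dst whose intermediate nodes match `kinds`."""
          total = 0
          for path in nx.all_simple_paths(g, src, dst, cutoff=len(kinds) + 1):
              inner = path[1:-1]
              if len(inner) == len(kinds) and all(g.nodes[n]["kind"] == k for n, k in zip(inner, kinds)):
                  total += 1
          return total

      print(metapath_count(G, "P1", "P2", ["author"]))   # 1: shared author A1
      print(metapath_count(G, "P1", "P3", ["keyword"]))  # 1: shared keyword K1
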
  6. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.02
    0.019698508 = product of:
      0.059095524 = sum of:
        0.059095524 = weight(_text_:title in 2362) [ClassicSimilarity], result of:
          0.059095524 = score(doc=2362,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.21539204 = fieldWeight in 2362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2362)
      0.33333334 = coord(1/3)
    
    Abstract
    According to my findings, several relations available in the thesaurus query interface as "roles" are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of the concepts which represent thesaurus main entries. In other words: the authors claim to already have a "description logic based nomenclature", where there is not yet one which deserves that title by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: what decisions were made, and are they exemplary; can they be recommended as "the best way"? I raise strong doubts with respect to that, and I miss a more profound discussion of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style, more or less addressing students, but, being myself a learner especially in the field of medical knowledge representation, I do not speak "ex cathedra".
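
    The distinction at stake here, a thesaurus-style broader-term link versus genuine description-logic subsumption, can be made concrete with a small rdflib sketch. The class names and namespace are invented; the point is only that the two modelling choices carry different semantics.

      # Contrast sketch: the same broader-term pair modelled two ways.
      # Terms and namespace are invented for illustration only.
      from rdflib import Graph, Namespace
      from rdflib.namespace import RDF, RDFS, OWL, SKOS

      EX = Namespace("http://example.org/thesaurus-demo/")
      g = Graph()

      # (1) Thesaurus reading: a navigational broader/narrower link; no logical subsumption is implied.
      g.add((EX.GeneProduct, RDF.type, SKOS.Concept))
      g.add((EX.Enzyme, RDF.type, SKOS.Concept))
      g.add((EX.Enzyme, SKOS.broader, EX.GeneProduct))

      # (2) Ontology reading: a class hierarchy; every Enzyme instance is then also a GeneProduct.
      g.add((EX.GeneProduct, RDF.type, OWL.Class))
      g.add((EX.Enzyme, RDF.type, OWL.Class))
      g.add((EX.Enzyme, RDFS.subClassOf, EX.GeneProduct))

      print(g.serialize(format="turtle"))
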
  7. Weller, K.: Knowledge representation in the Social Semantic Web (2010) 0.02
    0.019698508 = product of:
      0.059095524 = sum of:
        0.059095524 = weight(_text_:title in 4515) [ClassicSimilarity], result of:
          0.059095524 = score(doc=4515,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.21539204 = fieldWeight in 4515, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4515)
      0.33333334 = coord(1/3)
    
    Abstract
    The main purpose of this book is to sum up the vital and highly topical research issue of knowledge representation on the Web and to discuss novel solutions that combine the benefits of folksonomies and Web 2.0 approaches with ontologies and semantic technologies. The book contains an overview of knowledge representation approaches in past, present and future, an introduction to ontologies and Web indexing, and novel approaches to developing ontologies. This title combines aspects of knowledge representation for both the Semantic Web (ontologies) and the Web 2.0 (folksonomies); currently there is no monographic book which provides a combined overview of these topics. A particular focus is the use of knowledge representation methods for document indexing purposes. For this purpose, considerations from classical librarian interests in knowledge representation (thesauri, classification schemes etc.) are included, which are not part of most other books, which have a stronger background in computer science.
  8. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.017385166 = product of:
      0.052155495 = sum of:
        0.052155495 = product of:
          0.15646648 = sum of:
            0.15646648 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.15646648 = score(doc=701,freq=2.0), product of:
                0.41760176 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049257044 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  9. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.02
    0.017385166 = product of:
      0.052155495 = sum of:
        0.052155495 = product of:
          0.15646648 = sum of:
            0.15646648 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.15646648 = score(doc=5820,freq=2.0), product of:
                0.41760176 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049257044 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  10. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.01
    0.01112275 = product of:
      0.03336825 = sum of:
        0.03336825 = product of:
          0.0667365 = sum of:
            0.0667365 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
              0.0667365 = score(doc=6089,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.38690117 = fieldWeight in 6089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6089)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Pages
    S.11-22
  11. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.01
    0.01112275 = product of:
      0.03336825 = sum of:
        0.03336825 = product of:
          0.0667365 = sum of:
            0.0667365 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
              0.0667365 = score(doc=5576,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.38690117 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13.12.2017 14:17:22
  12. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.01
    0.01112275 = product of:
      0.03336825 = sum of:
        0.03336825 = product of:
          0.0667365 = sum of:
            0.0667365 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
              0.0667365 = score(doc=539,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.38690117 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=539)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    26.12.2011 13:22:07
  13. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    0.01112275 = product of:
      0.03336825 = sum of:
        0.03336825 = product of:
          0.0667365 = sum of:
            0.0667365 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.0667365 = score(doc=3406,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    30. 5.2010 16:22:35
  14. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.01
    0.01112275 = product of:
      0.03336825 = sum of:
        0.03336825 = product of:
          0.0667365 = sum of:
            0.0667365 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
              0.0667365 = score(doc=4523,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.38690117 = fieldWeight in 4523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4523)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
  15. Pieterse, V.; Kourie, D.G.: Lists, taxonomies, lattices, thesauri and ontologies : paving a pathway through a terminological jungle (2014) 0.01
    0.010593532 = product of:
      0.031780593 = sum of:
        0.031780593 = product of:
          0.063561186 = sum of:
            0.063561186 = weight(_text_:catalogue in 1386) [ClassicSimilarity], result of:
              0.063561186 = score(doc=1386,freq=2.0), product of:
                0.23806341 = queryWeight, product of:
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.049257044 = queryNorm
                0.2669927 = fieldWeight in 1386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1386)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This article seeks to resolve ambiguities and create a shared vocabulary with reference to classification-related terms. Due to the need to organize information in all disciplines, knowledge organization systems (KOSs) with varying attributes, content and structures have been developed independently in different domains. These scattered developments have given rise to a conglomeration of classification-related terms which are often used inconsistently both within and across different research fields. This terminological conundrum has impeded communication among researchers. To build the ideal Semantic Web, this problem will have to be surmounted. A common nomenclature is needed to incorporate the vast body of semantic information embedded in existing classifications when developing new systems and to facilitate interoperability among diverse systems. To bridge the terminological gap between the researchers and practitioners of disparate disciplines, we have identified five broad classes of KOSs: lists, taxonomies, lattices, thesauri and ontologies. We provide definitions of the terms catalogue, index, lexicon, knowledge base and topic map. After explaining the meaning and usage of these terms, we delineate how they relate to one another as well as to the different types of KOSs. Our definitions are not intended to replace established definitions but rather to clarify their respective meanings and to advocate their proper usage. In particular we caution against the indiscriminate use of the term ontology in contexts where, in our view, the term thesaurus would be more appropriate.
  16. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    0.009437967 = product of:
      0.0283139 = sum of:
        0.0283139 = product of:
          0.0566278 = sum of:
            0.0566278 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.0566278 = score(doc=3355,freq=4.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  17. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    0.0088982 = product of:
      0.026694598 = sum of:
        0.026694598 = product of:
          0.053389195 = sum of:
            0.053389195 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
              0.053389195 = score(doc=3376,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.30952093 = fieldWeight in 3376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3376)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    31. 7.2010 16:58:22
  18. OWL Web Ontology Language Test Cases (2004) 0.01
    0.0088982 = product of:
      0.026694598 = sum of:
        0.026694598 = product of:
          0.053389195 = sum of:
            0.053389195 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
              0.053389195 = score(doc=4685,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.30952093 = fieldWeight in 4685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4685)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    14. 8.2011 13:33:22
  19. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997) 0.01
    0.0088982 = product of:
      0.026694598 = sum of:
        0.026694598 = product of:
          0.053389195 = sum of:
            0.053389195 = weight(_text_:22 in 4476) [ClassicSimilarity], result of:
              0.053389195 = score(doc=4476,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.30952093 = fieldWeight in 4476, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4476)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1.10.2018 14:13:22
  20. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.01
    0.0088982 = product of:
      0.026694598 = sum of:
        0.026694598 = product of:
          0.053389195 = sum of:
            0.053389195 = weight(_text_:22 in 318) [ClassicSimilarity], result of:
              0.053389195 = score(doc=318,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.30952093 = fieldWeight in 318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=318)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 5.2021 12:43:05

Languages

  • e 51
  • d 11

Types

  • a 44
  • el 15
  • x 6
  • m 4
  • n 1
  • r 1