Search (423 results, page 1 of 22)

  • Filter: type_ss:"el"
  1. Griffiths, T.L.; Steyvers, M.: A probabilistic approach to semantic representation (2002) 0.08
    0.080422625 = product of:
      0.16084525 = sum of:
        0.1387685 = weight(_text_:representation in 3671) [ClassicSimilarity], result of:
          0.1387685 = score(doc=3671,freq=6.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.7043805 = fieldWeight in 3671, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=3671)
        0.022076748 = product of:
          0.066230245 = sum of:
            0.066230245 = weight(_text_:29 in 3671) [ClassicSimilarity], result of:
              0.066230245 = score(doc=3671,freq=4.0), product of:
                0.15062225 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042818543 = queryNorm
                0.43971092 = fieldWeight in 3671, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3671)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Semantic networks produced from human data have statistical properties that cannot be easily captured by spatial representations. We explore a probabilistic approach to semantic representation that explicitly models the probability with which words occur in different contexts, and hence captures the probabilistic relationships between words. We show that this representation has statistical properties consistent with the large-scale structure of semantic networks constructed by humans, and trace the origins of these properties.
    Date
    29. 6.2015 14:55:01
    29. 6.2015 16:09:05
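The indented tree above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. As a sanity check, the top entry's arithmetic can be reproduced directly; the sketch below plugs in the numbers read off the tree (queryNorm is taken as given, since it depends on the full query):

```python
import math

# Values read off the explain tree for doc 3671, term "representation".
max_docs = 44218
doc_freq = 1206
freq = 6.0
field_norm = 0.0625
query_norm = 0.042818543   # 1/sqrt(sum of squared idfs over all query terms)

idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 4.600994
tf = math.sqrt(freq)                              # 2.4494898 for freq=6
query_weight = idf * query_norm                   # 0.19700786
field_weight = tf * idf * field_norm              # 0.7043805
term_score = query_weight * field_weight          # 0.1387685

# The second clause (term "29", freq=4) scores 0.066230245 and is scaled
# by coord(1/3); both clauses are then summed and scaled by coord(2/4).
total = (term_score + 0.066230245 / 3.0) * 0.5
print(round(total, 9))                            # ~0.080422625, matching the tree
```

The same recipe reproduces every score tree in this result list; only freq, docFreq, fieldNorm, and the coord factors change from entry to entry.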
  2. Priss, U.: Faceted knowledge representation (1999) 0.08
    0.076871485 = product of:
      0.15374297 = sum of:
        0.14020655 = weight(_text_:representation in 2654) [ClassicSimilarity], result of:
          0.14020655 = score(doc=2654,freq=8.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.71167994 = fieldWeight in 2654, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
        0.013536418 = product of:
          0.04060925 = sum of:
            0.04060925 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
              0.04060925 = score(doc=2654,freq=2.0), product of:
                0.14994325 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042818543 = queryNorm
                0.2708308 = fieldWeight in 2654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2654)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
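As a reading aid for the abstract's basic notions, here is a minimal sketch of units, a relation as a binary matrix, a facet combining them, and an interpretation mapping the matrix form to explicit pairs. The example units and the is-a relation are invented for illustration, not taken from the paper:

```python
import numpy as np

# Units: atomic elements of the knowledge system.
units = ["cat", "mammal", "animal"]

# A relation as a binary matrix: is_a[i][j] == 1 means units[i] is-a units[j].
is_a = np.array([
    [0, 1, 0],   # cat -> mammal
    [0, 0, 1],   # mammal -> animal
    [0, 0, 0],
])

# A facet: a relational structure combining units and relations,
# representing one viewpoint of the knowledge system.
facet = {"units": units, "relations": {"is_a": is_a}}

# An interpretation: a mapping used to translate between representations,
# here from the matrix form to explicit pairs.
def interpret(facet):
    names, rel = facet["units"], facet["relations"]["is_a"]
    return [(names[i], names[j]) for i, j in zip(*np.nonzero(rel))]

print(interpret(facet))   # [('cat', 'mammal'), ('mammal', 'animal')]
```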
  3. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.06
    0.0643871 = product of:
      0.1287742 = sum of:
        0.113304004 = weight(_text_:representation in 318) [ClassicSimilarity], result of:
          0.113304004 = score(doc=318,freq=4.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.57512426 = fieldWeight in 318, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=318)
        0.015470191 = product of:
          0.04641057 = sum of:
            0.04641057 = weight(_text_:22 in 318) [ClassicSimilarity], result of:
              0.04641057 = score(doc=318,freq=2.0), product of:
                0.14994325 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042818543 = queryNorm
                0.30952093 = fieldWeight in 318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=318)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    In the "Knowledge Representation" session at ISI 2021, moderated by Jürgen Reischer (University of Regensburg), three projects were presented in which knowledge representation is implemented with RDF. The domains are pleasingly diverse; the common thread, however, is the aim of improving access to research data: - Japanese Visual Media Graph - Taxonomy of Digital Research Activities in the Humanities - Research data in the conceptual model of FRBR
    Date
    22. 5.2021 12:43:05
  4. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.06
    0.05789217 = product of:
      0.11578434 = sum of:
        0.10407638 = weight(_text_:representation in 4641) [ClassicSimilarity], result of:
          0.10407638 = score(doc=4641,freq=6.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.5282854 = fieldWeight in 4641, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=4641)
        0.011707964 = product of:
          0.035123892 = sum of:
            0.035123892 = weight(_text_:29 in 4641) [ClassicSimilarity], result of:
              0.035123892 = score(doc=4641,freq=2.0), product of:
                0.15062225 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042818543 = queryNorm
                0.23319192 = fieldWeight in 4641, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4641)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful to provide application developers a high-quality resource and to promote interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics and the chosen format of the URIs. Additional topics include a strategy to incorporate OWL and RDFS semantics in one schema such that both RDF(S) infrastructure and OWL infrastructure can interpret the information correctly, problems encountered in understanding the Prolog source files and the description of the two versions that are provided (Basic and Full) to accommodate different usages of WordNet.
    Date
    29. 7.2011 14:44:56
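To make the flavor of such a conversion concrete, here is a toy sketch that emits a few WordNet synsets as RDF, assuming NLTK (with its WordNet data downloaded) and rdflib are installed. The namespace and property names are invented; the actual W3C conversion defines its own URIs, class hierarchy, and OWL semantics:

```python
from nltk.corpus import wordnet as wn
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

WN = Namespace("http://example.org/wordnet/")   # hypothetical namespace

g = Graph()
for synset in wn.synsets("representation")[:3]:
    s = WN[synset.name().replace(".", "_")]     # e.g. representation_n_01
    g.add((s, RDF.type, WN.Synset))
    g.add((s, RDFS.comment, Literal(synset.definition())))
    for hyper in synset.hypernyms():
        g.add((s, WN.hyponymOf, WN[hyper.name().replace(".", "_")]))

print(g.serialize(format="turtle"))
```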
  5. Priss, U.: Description logic and faceted knowledge representation (1999) 0.06
    0.05783951 = product of:
      0.11567902 = sum of:
        0.10407638 = weight(_text_:representation in 2655) [ClassicSimilarity], result of:
          0.10407638 = score(doc=2655,freq=6.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.5282854 = fieldWeight in 2655, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=2655)
        0.011602643 = product of:
          0.034807928 = sum of:
            0.034807928 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
              0.034807928 = score(doc=2655,freq=2.0), product of:
                0.14994325 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042818543 = queryNorm
                0.23214069 = fieldWeight in 2655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2655)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  6. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.06
    0.05633871 = product of:
      0.11267742 = sum of:
        0.099141 = weight(_text_:representation in 540) [ClassicSimilarity], result of:
          0.099141 = score(doc=540,freq=4.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.50323373 = fieldWeight in 540, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=540)
        0.013536418 = product of:
          0.04060925 = sum of:
            0.04060925 = weight(_text_:22 in 540) [ClassicSimilarity], result of:
              0.04060925 = score(doc=540,freq=2.0), product of:
                0.14994325 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042818543 = queryNorm
                0.2708308 = fieldWeight in 540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=540)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    With the increasing requirement of establishing semantic mappings between different vocabularies, further development of encoding formats is becoming more and more important. For this reason, four types of knowledge representation formats were assessed: MARC21 for Classification Data in XML, Zthes XML Schema, XTM (XML Topic Map), and SKOS (Simple Knowledge Organisation System). This paper explores the potential of adapting these representation formats to support different semantic mapping methods, and discusses the implications of extending them to represent more complex KOS.
    Date
    26.12.2011 13:22:27
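Of the four formats assessed, SKOS ships dedicated mapping properties (e.g. skos:exactMatch). A minimal rdflib sketch of the kind of inter-vocabulary mapping the paper is concerned with; the concept URIs are invented for illustration:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import SKOS

g = Graph()
a = URIRef("http://example.org/vocabA/economics")
b = URIRef("http://example.org/vocabB/330")

g.add((a, SKOS.prefLabel, Literal("Economics", lang="en")))
g.add((b, SKOS.prefLabel, Literal("Wirtschaft", lang="de")))
# A semantic mapping between concepts from two different vocabularies.
g.add((a, SKOS.exactMatch, b))

print(g.serialize(format="turtle"))
```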
  7. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.05
    0.04551278 = product of:
      0.09102556 = sum of:
        0.08011803 = weight(_text_:representation in 1004) [ClassicSimilarity], result of:
          0.08011803 = score(doc=1004,freq=8.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.40667427 = fieldWeight in 1004, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.03125 = fieldNorm(doc=1004)
        0.0109075345 = product of:
          0.032722604 = sum of:
            0.032722604 = weight(_text_:theory in 1004) [ClassicSimilarity], result of:
              0.032722604 = score(doc=1004,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.18377672 = fieldWeight in 1004, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1004)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
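A compact sketch of the entry structure items (i)-(vi) listed above, as one might model them in code; the concept names, subtype labels, and field types are illustrative, not taken from HypoLexicon:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class HyponymyEntry:
    hypernym: str                                   # (i) parent concept
    definition: str                                 # (iii) terminological definition
    category: str                                   # (iv) conceptual category
    hyponyms: Dict[str, str] = field(default_factory=dict)   # (ii)+(v) child -> subtype
    contexts: List[str] = field(default_factory=list)        # (vi) hyponymic contexts

entry = HyponymyEntry(
    hypernym="WATER BODY",
    definition="Body of water covering an area of land.",
    category="LANDFORM",
    hyponyms={"LAKE": "type_of", "RESERVOIR": "type_of"},
    contexts=["lakes and other water bodies"],
)
print(entry.hyponyms)
```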
  8. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.04
    0.04396167 = product of:
      0.08792334 = sum of:
        0.08011803 = weight(_text_:representation in 4639) [ClassicSimilarity], result of:
          0.08011803 = score(doc=4639,freq=8.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.40667427 = fieldWeight in 4639, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.0078053097 = product of:
          0.023415929 = sum of:
            0.023415929 = weight(_text_:29 in 4639) [ClassicSimilarity], result of:
              0.023415929 = score(doc=4639,freq=2.0), product of:
                0.15062225 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042818543 = queryNorm
                0.15546128 = fieldWeight in 4639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4639)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    This thesis focuses on the conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of converting a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Date
    29. 7.2011 14:44:56
  9. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: A method for converting thesauri to RDF/OWL (2004) 0.04
    0.041881282 = product of:
      0.083762564 = sum of:
        0.07010327 = weight(_text_:representation in 4644) [ClassicSimilarity], result of:
          0.07010327 = score(doc=4644,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.35583997 = fieldWeight in 4644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
        0.013659291 = product of:
          0.040977873 = sum of:
            0.040977873 = weight(_text_:29 in 4644) [ClassicSimilarity], result of:
              0.040977873 = score(doc=4644,freq=2.0), product of:
                0.15062225 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042818543 = queryNorm
                0.27205724 = fieldWeight in 4644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4644)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
    Date
    29. 7.2011 14:44:56
  10. Stoykova, V.; Petkova, E.: Automatic extraction of mathematical terms for precalculus (2012) 0.04
    0.041881282 = product of:
      0.083762564 = sum of:
        0.07010327 = weight(_text_:representation in 156) [ClassicSimilarity], result of:
          0.07010327 = score(doc=156,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.35583997 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.013659291 = product of:
          0.040977873 = sum of:
            0.040977873 = weight(_text_:29 in 156) [ClassicSimilarity], result of:
              0.040977873 = score(doc=156,freq=2.0), product of:
                0.15062225 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042818543 = queryNorm
                0.27205724 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    In this work, we present the results of research evaluating a methodology for extracting mathematical terms for precalculus using techniques for semantically-oriented statistical search. We use a corpus-based approach and a combination of different statistically-based techniques for extracting keywords, collocations and co-occurrences, as incorporated in the Sketch Engine software. We evaluate the candidate collocation terms for the basic concept of function(s) and validate the related methodology against precalculus domain conceptual term definitions. Finally, we offer a hierarchical representation of the conceptual terms and discuss the results with respect to their possible applications.
    Date
    29. 5.2012 10:17:08
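For a sense of the statistically-based extraction techniques mentioned (keywords, collocations, co-occurrences), here is a minimal collocation-extraction sketch using NLTK rather than Sketch Engine, assuming NLTK and its punkt tokenizer data are installed; the toy sentence stands in for a precalculus corpus:

```python
import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

text = ("A linear function is a function whose graph is a line. "
        "The inverse function of a linear function is linear.")
tokens = [t.lower() for t in nltk.word_tokenize(text)]

finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(2)   # keep bigrams seen at least twice

# Rank candidate collocations by pointwise mutual information.
print(finder.nbest(BigramAssocMeasures().pmi, 5))
```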
  11. Nielsen, R.D.; Ward, W.; Martin, J.H.; Palmer, M.: Extracting a representation from text for semantic analysis (2008) 0.04
    0.040059015 = product of:
      0.16023606 = sum of:
        0.16023606 = weight(_text_:representation in 3365) [ClassicSimilarity], result of:
          0.16023606 = score(doc=3365,freq=8.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.81334853 = fieldWeight in 3365, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=3365)
      0.25 = coord(1/4)
    
    Abstract
    We present a novel fine-grained semantic representation of text and an approach to constructing it. This representation is largely extractable by today's technologies and facilitates more detailed semantic analysis. We discuss the requirements driving the representation, suggest how it might be of value in the automated tutoring domain, and provide evidence of its validity.
  12. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.04
    0.038224913 = product of:
      0.07644983 = sum of:
        0.060088523 = weight(_text_:representation in 761) [ClassicSimilarity], result of:
          0.060088523 = score(doc=761,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.3050057 = fieldWeight in 761, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=761)
        0.016361302 = product of:
          0.049083903 = sum of:
            0.049083903 = weight(_text_:theory in 761) [ClassicSimilarity], result of:
              0.049083903 = score(doc=761,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.27566507 = fieldWeight in 761, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.046875 = fieldNorm(doc=761)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins as it were with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
  13. Facet analytical theory for managing knowledge structure in the humanities : FATKS (2003) 0.04
    0.03742569 = product of:
      0.14970276 = sum of:
        0.14970276 = product of:
          0.22455412 = sum of:
            0.13089041 = weight(_text_:theory in 2526) [ClassicSimilarity], result of:
              0.13089041 = score(doc=2526,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.7351069 = fieldWeight in 2526, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.125 = fieldNorm(doc=2526)
            0.093663715 = weight(_text_:29 in 2526) [ClassicSimilarity], result of:
              0.093663715 = score(doc=2526,freq=2.0), product of:
                0.15062225 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042818543 = queryNorm
                0.6218451 = fieldWeight in 2526, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=2526)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Date
    29. 8.2004 9:17:18
  14. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.04
    0.035898242 = product of:
      0.071796484 = sum of:
        0.060088523 = weight(_text_:representation in 4640) [ClassicSimilarity], result of:
          0.060088523 = score(doc=4640,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.3050057 = fieldWeight in 4640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=4640)
        0.011707964 = product of:
          0.035123892 = sum of:
            0.035123892 = weight(_text_:29 in 4640) [ClassicSimilarity], result of:
              0.035123892 = score(doc=4640,freq=2.0), product of:
                0.15062225 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042818543 = queryNorm
                0.23319192 = fieldWeight in 4640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4640)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting Wordnet, AAT and MeSH to RDF(S) and OWL.
    Date
    29. 7.2011 14:44:56
  15. Panzer, M.: Designing identifiers for the DDC (2007) 0.03
    0.033848263 = product of:
      0.06769653 = sum of:
        0.030044261 = weight(_text_:representation in 1752) [ClassicSimilarity], result of:
          0.030044261 = score(doc=1752,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.15250285 = fieldWeight in 1752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1752)
        0.03765227 = product of:
          0.0564784 = sum of:
            0.017561946 = weight(_text_:29 in 1752) [ClassicSimilarity], result of:
              0.017561946 = score(doc=1752,freq=2.0), product of:
                0.15062225 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042818543 = queryNorm
                0.11659596 = fieldWeight in 1752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1752)
            0.038916454 = weight(_text_:22 in 1752) [ClassicSimilarity], result of:
              0.038916454 = score(doc=1752,freq=10.0), product of:
                0.14994325 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042818543 = queryNorm
                0.2595412 = fieldWeight in 1752, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1752)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Content
    Some examples of identifiers for concepts follow:
    <http://dewey.info/concept/338.4/en/edn/22/> This identifier is used to retrieve or identify the 338.4 concept in the English-language version of Edition 22.
    <http://dewey.info/concept/338.4/de/edn/22/> This identifier is used to retrieve or identify the 338.4 concept in the German-language version of Edition 22.
    <http://dewey.info/concept/333.7-333.9/> This identifier is used to retrieve or identify the 333.7-333.9 concept across all editions and language versions.
    <http://dewey.info/concept/333.7-333.9/about.skos> This identifier is used to retrieve a SKOS representation of the 333.7-333.9 concept (using the "resource" element).
    There are several open issues at this preliminary stage of development:
    - Use cases: URIs need to represent the range of statements or questions that could be submitted to a Dewey web service. Therefore, it seems that some general questions have to be answered first: What information does an agent have when coming to a Dewey web service? What kind of questions will such an agent ask?
    - Placement of the {locale} component: It is still an open question whether the {locale} component should be placed after the {version} component instead (<http://dewey.info/concept/338.4/edn/22/en>) to emphasize that the most important instantiation of a Dewey class is its edition, not its language version. From a services point of view, however, it could make more sense to keep the current arrangement, because users are more likely to come to the service with a present understanding of the language version they are seeking without knowing the specifics of a certain edition in which they are trying to find topics.
    - Identification of other Dewey entities: The goal is to create a locator that does not answer all, but a lot of questions that could be asked about the DDC. Which entities are missing but should be surfaced for services or user agents? How will those services or agents interact with them? Should some entities be rendered in a different way than presented? For example, (how) should the DDC Summaries be retrievable? Would it be necessary to make the DDC Manual accessible through this identifier structure?
    Date
    21. 3.2008 19:29:28
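The identifier pattern in the Content section above (class, optional locale, optional edition) is regular enough to generate programmatically; a small sketch, treating the pattern <http://dewey.info/concept/{class}/{locale}/edn/{edition}/> as given:

```python
def dewey_concept_uri(ddc_class, locale=None, edition=None):
    """Build a dewey.info concept URI following the examples above."""
    parts = ["http://dewey.info/concept", ddc_class]
    if locale:
        parts.append(locale)
    if edition:
        parts += ["edn", edition]
    return "/".join(parts) + "/"

assert dewey_concept_uri("338.4", "en", "22") == \
    "http://dewey.info/concept/338.4/en/edn/22/"
assert dewey_concept_uri("333.7-333.9") == \
    "http://dewey.info/concept/333.7-333.9/"
```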
  16. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.03321465 = product of:
      0.0664293 = sum of:
        0.056672662 = product of:
          0.17001799 = sum of:
            0.17001799 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.17001799 = score(doc=5669,freq=2.0), product of:
                0.36301607 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042818543 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
        0.009756638 = product of:
          0.029269911 = sum of:
            0.029269911 = weight(_text_:29 in 5669) [ClassicSimilarity], result of:
              0.029269911 = score(doc=5669,freq=2.0), product of:
                0.15062225 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042818543 = queryNorm
                0.19432661 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  17. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.03
    0.03219355 = product of:
      0.0643871 = sum of:
        0.056652002 = weight(_text_:representation in 1163) [ClassicSimilarity], result of:
          0.056652002 = score(doc=1163,freq=4.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.28756213 = fieldWeight in 1163, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.03125 = fieldNorm(doc=1163)
        0.0077350955 = product of:
          0.023205286 = sum of:
            0.023205286 = weight(_text_:22 in 1163) [ClassicSimilarity], result of:
              0.023205286 = score(doc=1163,freq=2.0), product of:
                0.14994325 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042818543 = queryNorm
                0.15476047 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    This paper addresses the problem of information discovery in large collections of text. For users, one of the key problems in working with such collections is determining where to focus their attention. In selecting documents for examination, users must be able to formulate reasonably precise queries. Queries that are too broad will greatly reduce the efficiency of information discovery efforts by overwhelming the users with peripheral information. In order to formulate efficient queries, a mechanism is needed to automatically alert users regarding potentially interesting information contained within the collection. This paper presents the results of an experiment designed to test one approach to generation of such alerts. The technique of latent semantic indexing (LSI) is used to identify relationships among entities of interest. Entity extraction software is used to pre-process the text of the collection so that the LSI space contains representation vectors for named entities in addition to those for individual terms. In the LSI space, the cosine of the angle between the representation vectors for two entities captures important information regarding the degree of association of those two entities. For appropriate choices of entities, determining the entity pairs with the highest mutual cosine values yields valuable information regarding the contents of the text collection. The test database used for the experiment consists of 150,000 news articles. The proposed approach for alert generation is tested using a counterterrorism analysis example. The approach is shown to have significant potential for aiding users in rapidly focusing on information of potential importance in large text collections. The approach also has value in identifying possible use of aliases.
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
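The core LSI operations the paper relies on (SVD of a term-document matrix, then cosines between representation vectors) fit in a few lines of scikit-learn; in this sketch, document vectors stand in for the entity vectors that the paper's entity-extraction preprocessing would add to the space:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [                                  # toy stand-ins for news articles
    "entity alpha met entity beta in the city",
    "entity beta transferred funds to entity alpha",
    "weather report for the city region",
]

X = TfidfVectorizer().fit_transform(docs)              # term-document matrix
lsi = TruncatedSVD(n_components=2).fit_transform(X)    # LSI space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Higher cosine = stronger association in the LSI space.
print(cosine(lsi[0], lsi[1]), cosine(lsi[0], lsi[2]))
```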
  18. Bozzato, L.; Braghin, S.; Trombetta, A.: A method and guidelines for the cooperation of ontologies and relational databases in Semantic Web applications (2012) 0.03
    0.031854097 = product of:
      0.06370819 = sum of:
        0.050073773 = weight(_text_:representation in 475) [ClassicSimilarity], result of:
          0.050073773 = score(doc=475,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.25417143 = fieldWeight in 475, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=475)
        0.013634419 = product of:
          0.040903255 = sum of:
            0.040903255 = weight(_text_:theory in 475) [ClassicSimilarity], result of:
              0.040903255 = score(doc=475,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.2297209 = fieldWeight in 475, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=475)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Ontologies are a well-affirmed way of representing complex structured information and they provide a sound conceptual foundation to Semantic Web technologies. On the other hand, a huge amount of information available on the web is stored in legacy relational databases. The issues raised by the collaboration between such worlds are well known and addressed by consolidated mapping languages. Nevertheless, to the best of our knowledge, a best practice for such cooperation is missing: in this work we thus present a method to guide the definition of cooperations between ontology-based and relational database systems. Our method, mainly based on ideas from knowledge reuse and re-engineering, is aimed at the separation of data between database and ontology instances and at the definition of suitable mappings in both directions, taking advantage of the representation possibilities offered by both models. We present the steps of our method along with guidelines for their application. Finally, we propose an example of its deployment in the context of a large repository of bio-medical images we developed.
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschik et al
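One direction of the cooperation the paper describes (relational rows surfaced as ontology instances, with bulk data staying in the database) can be sketched with sqlite3 and rdflib; the table, ontology namespace, and property names are invented for illustration:

```python
import sqlite3
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/onto#")   # hypothetical ontology namespace

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE image (id INTEGER, modality TEXT)")
db.execute("INSERT INTO image VALUES (1, 'MRI'), (2, 'CT')")

# Map each relational row to an ontology instance.
g = Graph()
for row_id, modality in db.execute("SELECT id, modality FROM image"):
    s = EX[f"image{row_id}"]
    g.add((s, RDF.type, EX.BioMedicalImage))
    g.add((s, EX.modality, Literal(modality)))

print(g.serialize(format="turtle"))
```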
  19. Pepper, S.; Groenmo, G.O.: Towards a general theory of scope (2002) 0.03
    0.031854097 = product of:
      0.06370819 = sum of:
        0.050073773 = weight(_text_:representation in 539) [ClassicSimilarity], result of:
          0.050073773 = score(doc=539,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.25417143 = fieldWeight in 539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=539)
        0.013634419 = product of:
          0.040903255 = sum of:
            0.040903255 = weight(_text_:theory in 539) [ClassicSimilarity], result of:
              0.040903255 = score(doc=539,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.2297209 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=539)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    This paper is concerned with the issue of scope in topic maps. Topic maps are a form of knowledge representation suitable for solving a number of complex problems in the area of information management, ranging from findability (navigation and querying) to knowledge management and enterprise application integration (EAI). The topic map paradigm has its roots in efforts to understand the essential semantics of back-of-book indexes in order that they might be captured in a form suitable for computer processing. Once understood, the model of a back-of-book index was generalised in order to cover the needs of digital information, and extended to encompass glossaries and thesauri, as well as indexes. The resulting core model, of typed topics, associations, and occurrences, has many similarities with the semantic networks developed by the artificial intelligence community for representing knowledge structures. One key requirement of topic maps from the earliest days was to be able to merge indexes from disparate origins. This requirement accounts for two further concepts that greatly enhance the power of topic maps: subject identity and scope. This paper concentrates on scope, but also includes a brief discussion of the feature known as the topic naming constraint, with which it is closely related. It is based on the authors' experience in creating topic maps (in particular, the Italian Opera Topic Map), and in implementing processing systems for topic maps (in particular, the Ontopia Topic Map Engine and Navigator).
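A minimal data-structure sketch of the core model named in the abstract (typed topics, associations, occurrences), with scope as a set of topics qualifying the validity of a statement; the example names are illustrative, loosely echoing the Italian Opera Topic Map:

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Topic:
    name: str

@dataclass(frozen=True)
class Occurrence:
    topic: Topic
    value: str
    scope: FrozenSet[Topic] = frozenset()   # empty scope = unconstrained

italian, english = Topic("italian"), Topic("english")
tosca = Topic("Tosca")

occurrences = [
    Occurrence(tosca, "Tosca", scope=frozenset({italian})),
    Occurrence(tosca, "Tosca (opera)", scope=frozenset({english})),
]

# Keep only statements valid in the requested context.
def in_scope(occs, context):
    return [o for o in occs if not o.scope or o.scope <= context]

print(in_scope(occurrences, frozenset({english})))
```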
  20. Petras, V.: The identity of information science (2023) 0.03
    0.031854097 = product of:
      0.06370819 = sum of:
        0.050073773 = weight(_text_:representation in 1077) [ClassicSimilarity], result of:
          0.050073773 = score(doc=1077,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.25417143 = fieldWeight in 1077, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1077)
        0.013634419 = product of:
          0.040903255 = sum of:
            0.040903255 = weight(_text_:theory in 1077) [ClassicSimilarity], result of:
              0.040903255 = score(doc=1077,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.2297209 = fieldWeight in 1077, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1077)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: This paper offers a definition of the core of information science, which encompasses most research in the field. The definition provides a unique identity for information science and positions it in the disciplinary universe.
    Design/methodology/approach: After motivating the objective, a definition of the core and an explanation of its key aspects are provided. The definition is related to other definitions of information science before controversial discourse aspects are briefly addressed: discipline vs. field, science vs. humanities, library vs. information science and application vs. theory. Interdisciplinarity as an often-assumed foundation of information science is challenged.
    Findings: Information science is concerned with how information is manifested across space and time. Information is manifested to facilitate and support the representation, access, documentation and preservation of ideas, activities, or practices, and to enable different types of interactions. Research and professional practice encompass the infrastructures - institutions and technology - and the phenomena and practices around manifested information across space and time as its core contribution to the scholarly landscape. Information science collaborates with other disciplines to work on complex information problems that need multi- and interdisciplinary approaches to address them.
    Originality/value: The paper argues that new information problems may change the core of the field, but throughout its existence, the discipline has remained quite stable in its central focus, yet proved to be highly adaptive to the tremendous changes in the forms, practices, institutions and technologies around and for manifested information.

Languages

  • e 255
  • d 158
  • el 2
  • i 2
  • a 1
  • nl 1

Types

  • a 208
  • i 20
  • s 13
  • m 6
  • r 5
  • p 4
  • b 3
  • n 2
  • x 2