Search (377 results, page 1 of 19)

  • Filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.10
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.08
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.05
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls
  4. Gödert, W.; Lepsky, K.: Reception of externalized knowledge : a constructivistic model based on Popper's Three Worlds and Searle's Collective Intentionality (2019) 0.02
    Abstract
    We provide a model for the reception of knowledge from externalized information sources. The model is based on a cognitive understanding of information processing and draws up ideas of an exchange of information in communication processes. Karl Popper's three-world theory with its orientation on falsifiable scientific knowledge is extended by John Searle's concept of collective intentionality. This allows a consistent description of externalization and reception of knowledge including scientific knowledge as well as everyday knowledge.
  5. Griffiths, T.L.; Steyvers, M.: ¬A probabilistic approach to semantic representation (2002) 0.01
    Abstract
    Semantic networks produced from human data have statistical properties that cannot be easily captured by spatial representations. We explore a probabilistic approach to semantic representation that explicitly models the probability with which words occur in different contexts, and hence captures the probabilistic relationships between words. We show that this representation has statistical properties consistent with the large-scale structure of semantic networks constructed by humans, and trace the origins of these properties.
    Date
    29. 6.2015 14:55:01
    29. 6.2015 16:09:05
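The probabilistic approach the abstract describes, modeling the probability with which words occur in different contexts, can be illustrated with a toy sketch. This is not the authors' code; the corpus and the maximum-likelihood estimator are illustrative assumptions only.

```python
from collections import Counter, defaultdict

# Toy corpus: each "context" is a bag of words (hypothetical data).
contexts = [
    ["bank", "money", "loan"],
    ["bank", "river", "water"],
    ["money", "loan", "interest"],
]

# Count how often each word occurs in each context.
counts = defaultdict(Counter)
for i, ctx in enumerate(contexts):
    for w in ctx:
        counts[w][i] += 1

def p_word_given_context(word, i):
    """Maximum-likelihood estimate of P(word | context i)."""
    total = sum(c[i] for c in counts.values())
    return counts[word][i] / total if total else 0.0

print(p_word_given_context("bank", 0))   # 1/3 in this toy corpus
print(p_word_given_context("river", 2))  # "river" never occurs in context 2
```

A full model along the lines of the paper would estimate such distributions over latent topics rather than raw contexts, but the conditional-probability view is the same.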
  6. Priss, U.: Faceted knowledge representation (1999) 0.01
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0 and 1's (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
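The basic notions in Priss's abstract (units as atomic elements, relations as binary matrices, facets combining the two) can be sketched in a few lines. This is a loose illustration under stated assumptions, not Priss's formalism verbatim; the unit names and the "hierarchy" facet are hypothetical.

```python
# Units are atomic labels (hypothetical example data).
units = ["thesaurus", "term", "retrieval"]

# A relation is a binary (0/1) matrix over the units:
# relation[i][j] == 1 means units[i] stands in the relation to units[j].
relation = [
    [0, 1, 0],
    [0, 0, 0],
    [0, 0, 0],
]

# A facet combines units and a relation, representing one viewpoint.
facet = {"name": "hierarchy", "units": units, "relation": relation}

def related(facet, a, b):
    """True if unit a stands in the facet's relation to unit b."""
    i, j = facet["units"].index(a), facet["units"].index(b)
    return facet["relation"][i][j] == 1

print(related(facet, "thesaurus", "term"))  # True in this toy facet
```

An "interpretation" in the paper's sense would then be a mapping between two such structures, e.g. a function from one facet's units to another's.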
  7. Furner, J.: User tagging of library resources : toward a framework for system evaluation (2007) 0.01
    Abstract
    Although user tagging of library resources shows substantial promise as a means of improving the quality of users' access to those resources, several important questions about the level and nature of the warrant for basing retrieval tools on user tagging are yet to receive full consideration by library practitioners and researchers. Among these is the simple evaluative question: What, specifically, are the factors that determine whether or not user-tagging services will be successful? If success is to be defined in terms of the effectiveness with which systems perform the particular functions expected of them (rather than simply in terms of popularity), an understanding is needed both of the multifunctional nature of tagging tools, and of the complex nature of users' mental models of that multifunctionality. In this paper, a conceptual framework is developed for the evaluation of systems that integrate user tagging with more traditional methods of library resource description.
    Date
    26.12.2011 13:29:31
  8. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen (2021) 0.01
    Abstract
    In the "Knowledge Representation" session at ISI 2021, moderated by Jürgen Reischer (University of Regensburg), three projects were presented in which knowledge representation is implemented with RDF. The domains are pleasingly varied; the common thread, however, is the aim of improving access to research data: - Japanese Visual Media Graph - Taxonomy of Digital Research Activities in the Humanities - Research data in the conceptual model of FRBR
    Date
    22. 5.2021 12:43:05
  9. Waard, A. de; Fluit, C.; Harmelen, F. van: Drug Ontology Project for Elsevier (DOPE) (2007) 0.01
    Abstract
    Innovative research institutes rely on the availability of complete and accurate information about new research and development, and it is the business of information providers such as Elsevier to provide the required information in a cost-effective way. It is very likely that the semantic web will make an important contribution to this effort, since it facilitates access to an unprecedented quantity of data. However, with the unremitting growth of scientific information, integrating access to all this information remains a significant problem, not least because of the heterogeneity of the information sources involved - sources which may use different syntactic standards (syntactic heterogeneity), organize information in very different ways (structural heterogeneity) and even use different terminologies to refer to the same information (semantic heterogeneity). The ability to address these different kinds of heterogeneity is the key to integrated access. Thesauri have already proven to be a core technology to effective information access as they provide controlled vocabularies for indexing information, and thereby help to overcome some of the problems of free-text search by relating and grouping relevant terms in a specific domain. However, currently there is no open architecture which supports the use of these thesauri for querying other data sources. For example, when we move from the centralized and controlled use of EMTREE within EMBASE.com to a distributed setting, it becomes crucial to improve access to the thesaurus by means of a standardized representation using open data standards that allow for semantic qualifications. In general, mental models and keywords for accessing data diverge between subject areas and communities, and so many different ontologies have been developed. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. 
    The aim of the DOPE project (Drug Ontology Project for Elsevier) is to investigate the possibility of providing access to multiple information sources in the area of life science through a single interface.
  10. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.01
    Abstract
    This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful to provide application developers a high-quality resource and to promote interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics and the chosen format of the URIs. Additional topics include a strategy to incorporate OWL and RDFS semantics in one schema such that both RDF(S) infrastructure and OWL infrastructure can interpret the information correctly, problems encountered in understanding the Prolog source files and the description of the two versions that are provided (Basic and Full) to accommodate different usages of WordNet.
    Date
    29. 7.2011 14:44:56
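The core idea of such a conversion, turning each WordNet synset into an RDF resource with typed properties, can be sketched as plain triples. The namespace URI, property names, and synset record below are hypothetical illustrations, not the actual W3C mapping.

```python
# Assumed namespace for illustration only (not the real WordNet namespace).
WN = "http://example.org/wordnet/"

# A toy synset record (hypothetical data).
synset = {
    "id": "synset-dog-noun-1",
    "lemmas": ["dog", "domestic_dog"],
    "gloss": "a member of the genus Canis",
}

def to_triples(s):
    """Convert one synset record into (subject, predicate, object) triples."""
    uri = WN + s["id"]
    triples = [(uri, "rdf:type", WN + "NounSynset")]
    triples += [(uri, WN + "containsWordSense", lemma) for lemma in s["lemmas"]]
    triples.append((uri, WN + "gloss", s["gloss"]))
    return triples

for t in to_triples(synset):
    print(t)
```

A real conversion would additionally fix a URI scheme for word senses and lemmas and attach the OWL/RDFS semantics the paper discusses; the triple-per-fact shape stays the same.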
  11. Priss, U.: Description logic and faceted knowledge representation (1999) 0.01
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  12. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.01
    Abstract
    With the increasing requirement of establishing semantic mappings between different vocabularies, further development of these encoding formats is becoming more and more important. For this reason, four types of knowledge representation formats were assessed: MARC21 for Classification Data in XML, Zthes XML Schema, XTM (XML Topic Map), and SKOS (Simple Knowledge Organisation System). This paper explores the potential of adapting these representation formats to support different semantic mapping methods, and discusses the implication of extending them to represent more complex KOS.
    Date
    26.12.2011 13:22:27
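Of the formats the abstract lists, SKOS expresses inter-vocabulary mappings directly via properties such as skos:exactMatch and skos:broadMatch. A minimal sketch, serializing hypothetical example mappings as Turtle by hand (the concept URIs are invented):

```python
# Real SKOS namespace; the vocabA/vocabB concept URIs are hypothetical.
SKOS = "http://www.w3.org/2004/02/skos/core#"

mappings = [
    ("http://example.org/vocabA/Cats", "exactMatch",
     "http://example.org/vocabB/Felines"),
    ("http://example.org/vocabA/Cats", "broadMatch",
     "http://example.org/vocabB/Animals"),
]

def to_turtle(mappings):
    """Serialize (source, skos-property, target) mappings as Turtle."""
    lines = ["@prefix skos: <%s> ." % SKOS, ""]
    for src, rel, dst in mappings:
        lines.append("<%s> skos:%s <%s> ." % (src, rel, dst))
    return "\n".join(lines)

print(to_turtle(mappings))
```

The same mappings could equally be encoded in Zthes or XTM; SKOS simply makes the mapping relation itself a first-class, machine-readable statement.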
  13. Klic, L.; Miller, M.; Nelson, J.K.; Pattuelli, C.; Provo, A.: ¬The drawings of the Florentine painters : from print catalog to linked open data (2017) 0.01
    Abstract
    The Drawings of The Florentine Painters project created the first online database of Florentine Renaissance drawings by applying Linked Open Data (LOD) techniques to a foundational text of the same name, first published by Bernard Berenson in 1903 (revised and expanded editions, 1938 and 1961). The goal was to make Berenson's catalog information, still an essential information resource today, available in a machine-readable format, allowing researchers to access the source content through open data services. This paper provides a technical overview of the methods and processes applied in the conversion of Berenson's catalog to LOD using the CIDOC-CRM ontology; it also discusses the different phases of the project, focusing on the challenges and issues of data transformation and publishing. The project was funded by the Samuel H. Kress Foundation and organized by Villa I Tatti, The Harvard University Center for Italian Renaissance Studies. Catalog: http://florentinedrawings.itatti.harvard.edu. Data Endpoint: http://data.itatti.harvard.edu.
  14. Cossham, A.F.: Models of the bibliographic universe (2017) 0.01
    Abstract
    What kinds of mental models do library catalogue users have of the bibliographic universe in an age of online and electronic information? Using phenomenography and grounded analysis, this study identifies participants' understanding, experience, and conceptualisation of the bibliographic universe, and identifies their expectations when using library catalogues. It contrasts participants' mental models with existing LIS models, and explores the nature of the bibliographic universe. The bibliographic universe can be considered a social object that exists because it is inscribed in catalogue records, cataloguing codes, bibliographies, and other bibliographic tools. It is a socially constituted phenomenon.
  15. Metzinger, T.: Why Is Virtual Reality interesting for philosophers? (2018) 0.01
    Abstract
    This article explores promising points of contact between philosophy and the expanding field of virtual reality research. Aiming at an interdisciplinary audience, it proposes a series of new research targets by presenting a range of concrete examples characterized by high theoretical relevance and heuristic fecundity. Among these examples are conscious experience itself, "Bayesian" and social VR, amnestic re-embodiment, merging human-controlled avatars and virtual agents, virtual ego-dissolution, controlling the reality/virtuality continuum, the confluence of VR and artificial intelligence (AI) as well as of VR and functional magnetic resonance imaging (fMRI), VR-based social hallucinations and the emergence of a virtual Lebenswelt, religious faith and practical phenomenology. Hopefully, these examples can serve as first proposals for intensified future interaction and mark out some potential new directions for research.
  16. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.01
    Abstract
    This thesis focuses on the conversion of vocabularies for the representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of converting a vocabulary from its original format to a Semantic Web representation. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Date
    29. 7.2011 14:44:56
  17. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: ¬A method for converting thesauri to RDF/OWL (2004) 0.01
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
    Date
    29. 7.2011 14:44:56
  18. Stoykova, V.; Petkova, E.: Automatic extraction of mathematical terms for precalculus (2012) 0.01
    Abstract
    In this work, we present the results of research evaluating a methodology for extracting mathematical terms for precalculus using techniques for semantically oriented statistical search. We use a corpus-based approach and a combination of statistical techniques for extracting keywords, collocations and co-occurrences, as incorporated in the Sketch Engine software. We evaluate the candidate collocation terms for the basic concept function(s) and validate the methodology against precalculus domain concept definitions. Finally, we offer a hierarchical representation of the conceptual terms and discuss the results with respect to their possible applications.
    Date
    29. 5.2012 10:17:08
  19. Prokop, M.: Hans Jonas and the phenomenological continuity of life and mind (2022) 0.01
    Source
    Phenomenology and the cognitive sciences [https://doi.org/10.1007/s11097-022-09863-1]
  20. Nielsen, R.D.; Ward, W.; Martin, J.H.; Palmer, M.: Extracting a representation from text for semantic analysis (2008) 0.01
    Abstract
    We present a novel fine-grained semantic representation of text and an approach to constructing it. This representation is largely extractable by today's technologies and facilitates more detailed semantic analysis. We discuss the requirements driving the representation, suggest how it might be of value in the automated tutoring domain, and provide evidence of its validity.

Languages

  • e 213
  • d 157
  • el 2
  • a 1
  • nl 1

Types

  • a 181
  • i 20
  • m 6
  • s 6
  • r 4
  • b 3
  • p 3
  • n 2
  • x 1