Search (4331 results, page 1 of 217)

  • Active filter: type_ss:"a"
  • Active filter: year_i:[2010 TO 2020}
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.20
    0.20049646 = product of:
      0.45111704 = sum of:
        0.062223002 = product of:
          0.186669 = sum of:
            0.186669 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.186669 = score(doc=400,freq=2.0), product of:
                0.3321406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03917671 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.186669 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.186669 = score(doc=400,freq=2.0), product of:
            0.3321406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03917671 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.015556021 = weight(_text_:of in 400) [ClassicSimilarity], result of:
          0.015556021 = score(doc=400,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.25392252 = fieldWeight in 400, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.186669 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.186669 = score(doc=400,freq=2.0), product of:
            0.3321406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03917671 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.44444445 = coord(4/9)
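    The relevance figures above are Lucene "explain" output for the classic TF-IDF similarity. As a reading aid, the 0.186669 leaf for the "3a" term in document 400 can be reconstructed from the listed factors (a sketch of the standard ClassicSimilarity formula, not additional output of this system):
    $$ \mathrm{queryWeight} = \mathrm{idf}\cdot\mathrm{queryNorm} = 8.478011 \times 0.03917671 \approx 0.3321406 $$
    $$ \mathrm{fieldWeight} = \sqrt{\mathrm{tf}}\cdot\mathrm{idf}\cdot\mathrm{fieldNorm} = 1.4142135 \times 8.478011 \times 0.046875 \approx 0.5620180 $$
    $$ \mathrm{weight} = \mathrm{queryWeight}\cdot\mathrm{fieldWeight} \approx 0.3321406 \times 0.5620180 \approx 0.186669 $$
    $$ \mathrm{score} = \Big(\textstyle\sum_i \mathrm{weight}_i\Big)\cdot\mathrm{coord} = 0.45111704 \times \tfrac{4}{9} \approx 0.20049646 $$
    The coord(4/9) factor down-weights the document because only four of the nine query clauses matched.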
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are parent-to-child links, whereas the hypernym relation is multi-hop, i.e., an ancestor-to-descendant link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm that infers parent-child links from these three types of relationships. The algorithm resolves conflicts by maintaining the acyclic structure of the hierarchy.
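    As a reading aid, here is a minimal sketch of the kind of conflict-avoiding hierarchy growth the abstract describes (this is not the authors' algorithm; the relation triples, confidence values, and function names are invented for illustration): candidate parent-child links are accepted in order of confidence and rejected whenever they would close a cycle.
```python
from collections import defaultdict

def grow_hierarchy(candidate_links):
    """candidate_links: iterable of (parent, child, confidence) triples."""
    children = defaultdict(set)          # parent -> direct children
    hierarchy = []

    def reaches(start, target):
        # Depth-first search: is `target` already a descendant of `start`?
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(children[node])
        return False

    # Greedily accept the most confident links that keep the graph acyclic.
    for parent, child, _ in sorted(candidate_links, key=lambda t: -t[2]):
        if parent == child or reaches(child, parent):
            continue                     # would introduce a cycle -> conflict
        children[parent].add(child)
        hierarchy.append((parent, child))
    return hierarchy

links = [("classification", "svm", 0.9),
         ("model", "svm", 0.8),
         ("svm", "classification", 0.3)]   # conflicting back-edge, dropped
print(grow_hierarchy(links))
```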
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  2. Suchenwirth, L.: Sacherschliessung in Zeiten von Corona : neue Herausforderungen und Chancen (2019) 0.20
    0.19673423 = product of:
      0.5902027 = sum of:
        0.062223002 = product of:
          0.186669 = sum of:
            0.186669 = weight(_text_:3a in 484) [ClassicSimilarity], result of:
              0.186669 = score(doc=484,freq=2.0), product of:
                0.3321406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03917671 = queryNorm
                0.56201804 = fieldWeight in 484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=484)
          0.33333334 = coord(1/3)
        0.26398984 = weight(_text_:2f in 484) [ClassicSimilarity], result of:
          0.26398984 = score(doc=484,freq=4.0), product of:
            0.3321406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03917671 = queryNorm
            0.7948135 = fieldWeight in 484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
        0.26398984 = weight(_text_:2f in 484) [ClassicSimilarity], result of:
          0.26398984 = score(doc=484,freq=4.0), product of:
            0.3321406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03917671 = queryNorm
            0.7948135 = fieldWeight in 484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
      0.33333334 = coord(3/9)
    
    Footnote
    https://journals.univie.ac.at/index.php/voebm/article/download/5332/5271/.
  3. Herb, U.; Beucke, D.: Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.11
    0.11061867 = product of:
      0.49778402 = sum of:
        0.24889201 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.24889201 = score(doc=2188,freq=2.0), product of:
            0.3321406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03917671 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.24889201 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.24889201 = score(doc=2188,freq=2.0), product of:
            0.3321406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03917671 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
      0.22222222 = coord(2/9)
    
    Content
    Cf.: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/.
  4. Mayernik, M.S.; Hart, D.L.; Maull, K.E.; Weber, N.M.: Assessing and tracing the outcomes and impact of research infrastructures (2017) 0.06
    0.06488947 = product of:
      0.14600131 = sum of:
        0.041947264 = weight(_text_:applications in 3635) [ClassicSimilarity], result of:
          0.041947264 = score(doc=3635,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2432066 = fieldWeight in 3635, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3635)
        0.018332949 = weight(_text_:of in 3635) [ClassicSimilarity], result of:
          0.018332949 = score(doc=3635,freq=24.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2992506 = fieldWeight in 3635, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3635)
        0.034061253 = weight(_text_:software in 3635) [ClassicSimilarity], result of:
          0.034061253 = score(doc=3635,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.21915624 = fieldWeight in 3635, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3635)
        0.051659852 = product of:
          0.103319705 = sum of:
            0.103319705 = weight(_text_:packages in 3635) [ClassicSimilarity], result of:
              0.103319705 = score(doc=3635,freq=2.0), product of:
                0.2706874 = queryWeight, product of:
                  6.9093957 = idf(docFreq=119, maxDocs=44218)
                  0.03917671 = queryNorm
                0.3816938 = fieldWeight in 3635, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.9093957 = idf(docFreq=119, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3635)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Abstract
    Recent policy shifts on the part of funding agencies and journal publishers are causing changes in the acknowledgment and citation behaviors of scholars. A growing emphasis on open science and reproducibility is changing how authors cite and acknowledge "research infrastructures": entities that are used as inputs to or as underlying foundations for scholarly research, including data sets, software packages, computational models, observational platforms, and computing facilities. At the same time, stakeholder interest in quantitative understanding of impact is spurring increased collection and analysis of metrics related to use of research infrastructures. This article reviews work spanning several decades on tracing and assessing the outcomes and impacts from these kinds of research infrastructures. We discuss how research infrastructures are identified and referenced by scholars in the research literature and how those references are being collected and analyzed for the purposes of evaluating impact. Synthesizing common features of a wide range of studies, we identify notable challenges that impede the analysis of impact metrics for research infrastructures and outline key open research questions that can guide future research and applications related to such metrics.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.6, S.1341-1359
  5. Jaskolla, L.; Rugel, M.: Smart questions : steps towards an ontology of questions and answers (2014) 0.06
    0.06422604 = product of:
      0.14450859 = sum of:
        0.041947264 = weight(_text_:applications in 3404) [ClassicSimilarity], result of:
          0.041947264 = score(doc=3404,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2432066 = fieldWeight in 3404, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3404)
        0.021169065 = weight(_text_:of in 3404) [ClassicSimilarity], result of:
          0.021169065 = score(doc=3404,freq=32.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34554482 = fieldWeight in 3404, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3404)
        0.068122506 = weight(_text_:software in 3404) [ClassicSimilarity], result of:
          0.068122506 = score(doc=3404,freq=8.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.43831247 = fieldWeight in 3404, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3404)
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 3404) [ClassicSimilarity], result of:
              0.026539518 = score(doc=3404,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 3404, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3404)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Abstract
    The present essay is based on research funded by the German Ministry of Economics and Technology and carried out by the Munich School of Philosophy (Prof. Godehard Brüntrup) in cooperation with the IT company Comelio GmbH. It is concerned with setting up the philosophical framework for a systematic, hierarchical and categorical account of questions and answers, in order to use this framework as an ontology for software engineers who create a tool for intelligent questionnaire design. In recent years, there has been considerable interest in programming software that enables users to create and carry out their own surveys. Considering the vast range of application areas these software tools try to cover, it is surprising that most of the existing tools lack a systematic approach to what questions and answers really are and in what kind of systematic hierarchical relations different types of questions stand to each other. The theoretical background to this essay is inspired by Barry Smith's theory of regional ontologies. The notion of ontology used in this essay can be defined by the following characteristics: (1) The basic notions of the ontology should be defined in a manner that excludes equivocations of any kind. They should also be presented in a way that allows for an easy translation into a semi-formal language, in order to secure easy applicability for software engineers. (2) The hierarchical structure of the ontology should be that of an arbor porphyriana.
    Date
    9. 2.2017 19:22:59
    Series
    History and philosophy of technoscience; 3
  6. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.06
    0.06416068 = product of:
      0.14436153 = sum of:
        0.08878562 = weight(_text_:applications in 4639) [ClassicSimilarity], result of:
          0.08878562 = score(doc=4639,freq=14.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.51477134 = fieldWeight in 4639, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.011975031 = weight(_text_:of in 4639) [ClassicSimilarity], result of:
          0.011975031 = score(doc=4639,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.19546966 = fieldWeight in 4639, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.016351866 = weight(_text_:systems in 4639) [ClassicSimilarity], result of:
          0.016351866 = score(doc=4639,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1358164 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.027249003 = weight(_text_:software in 4639) [ClassicSimilarity], result of:
          0.027249003 = score(doc=4639,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.17532499 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
      0.44444445 = coord(4/9)
    
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that the applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of converting a vocabulary from its original format to a Semantic Web representation. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
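    As an illustration of the kind of conversion the thesis is about, here is a hedged sketch that maps one record of a hypothetical source vocabulary to SKOS with rdflib (the namespace, field names, and sample record are invented; this is not the thesis's own conversion method):
```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/vocab/")   # hypothetical namespace

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

# One record from a (hypothetical) source vocabulary: id, preferred term,
# synonym, and broader-term pointer.
record = {"id": "c123", "term": "Thesaurus", "alt": "Controlled vocabulary",
          "broader": "c042"}

concept = EX[record["id"]]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal(record["term"], lang="en")))
g.add((concept, SKOS.altLabel, Literal(record["alt"], lang="en")))
g.add((concept, SKOS.broader, EX[record["broader"]]))

print(g.serialize(format="turtle"))
```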
  7. Lacasta, J.; Falquet, G.; Nogueras Iso, J.N.; Zarazaga-Soria, J.: A software processing chain for evaluating thesaurus quality (2017) 0.06
    0.06295541 = product of:
      0.14164966 = sum of:
        0.050336715 = weight(_text_:applications in 3485) [ClassicSimilarity], result of:
          0.050336715 = score(doc=3485,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 3485, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=3485)
        0.0089812735 = weight(_text_:of in 3485) [ClassicSimilarity], result of:
          0.0089812735 = score(doc=3485,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.14660224 = fieldWeight in 3485, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3485)
        0.0245278 = weight(_text_:systems in 3485) [ClassicSimilarity], result of:
          0.0245278 = score(doc=3485,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 3485, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=3485)
        0.05780387 = weight(_text_:software in 3485) [ClassicSimilarity], result of:
          0.05780387 = score(doc=3485,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.3719205 = fieldWeight in 3485, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=3485)
      0.44444445 = coord(4/9)
    
    Abstract
    Thesauri are knowledge models commonly used for information classification and retrieval whose structure is defined by standards that describe the main features the concepts and relations must have. However, following these standards requires a deep knowledge of the field the thesaurus is going to cover and experience in their creation. To help in this task, this paper describes a software processing chain that provides different validation components that evaluate the quality of the main thesaurus features.
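    A toy sketch of what one validation component in such a chain might check (invented data structures and checks, not the components described in the paper): concepts without a preferred label, and cycles in the broader-term relation.
```python
def validate_thesaurus(pref_labels, broader):
    """pref_labels: dict concept -> label or None; broader: dict concept -> set of broader concepts."""
    issues = []

    # Feature check: every concept needs a preferred label.
    for concept, label in pref_labels.items():
        if not label:
            issues.append(f"{concept}: missing preferred label")

    # Structure check: the broader-term relation must not contain cycles.
    def has_cycle(start):
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            for parent in broader.get(node, ()):
                if parent == start:
                    return True
                stack.append(parent)
        return False

    issues += [f"{c}: broader-term cycle" for c in broader if has_cycle(c)]
    return issues

print(validate_thesaurus({"c1": "Cats", "c2": None},
                         {"c1": {"c2"}, "c2": {"c1"}}))
```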
    Series
    Information Systems and Applications, incl. Internet/Web, and HCI; 10151
  8. Mainzer, K.: The emergence of self-conscious systems : from symbolic AI to embodied robotics (2014) 0.06
    0.058544915 = product of:
      0.13172606 = sum of:
        0.041947264 = weight(_text_:applications in 3398) [ClassicSimilarity], result of:
          0.041947264 = score(doc=3398,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2432066 = fieldWeight in 3398, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3398)
        0.021169065 = weight(_text_:of in 3398) [ClassicSimilarity], result of:
          0.021169065 = score(doc=3398,freq=32.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34554482 = fieldWeight in 3398, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3398)
        0.020439833 = weight(_text_:systems in 3398) [ClassicSimilarity], result of:
          0.020439833 = score(doc=3398,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 3398, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3398)
        0.048169892 = weight(_text_:software in 3398) [ClassicSimilarity], result of:
          0.048169892 = score(doc=3398,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30993375 = fieldWeight in 3398, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3398)
      0.44444445 = coord(4/9)
    
    Abstract
    Knowledge representation, which is today used in database applications, artificial intelligence (AI), software engineering and many other disciplines of computer science, has deep roots in logic and philosophy. In the beginning, there was Aristotle (384 BC-322 BC), who developed logic as a precise method for reasoning about knowledge. Syllogisms were introduced as formal patterns for representing special figures of logical deductions. According to Aristotle, the subject of ontology is the study of categories of things that exist or may exist in some domain. In modern times, Descartes considered the human brain as a store of knowledge representation. Recognition was made possible by an isomorphic correspondence between internal geometrical representations (ideae) and external situations and events. Leibniz was deeply influenced by these traditions. In his mathesis universalis, he required a universal formal language (lingua universalis) to represent human thinking by calculation procedures and to implement them by means of mechanical calculating machines. An ars iudicandi should allow every problem to be decided by an algorithm after representation in numeric symbols. An ars inveniendi should enable users to seek and enumerate desired data and solutions of problems. In the age of mechanics, knowledge representation was reduced to mechanical calculation procedures. In the twentieth century, computational cognitivism arose in the wake of Turing's theory of computability. In its functionalism, the hardware of a computer is related to the wetware of the human brain. The mind is understood as the software of a computer.
    Series
    History and philosophy of technoscience; 3
  9. Yang, B.; Rousseau, R.; Wang, X.; Huang, S.: How important is scientific software in bioinformatics research? : a comparative study between international and Chinese research communities (2018) 0.06
    0.05745585 = product of:
      0.17236754 = sum of:
        0.015876798 = weight(_text_:of in 4461) [ClassicSimilarity], result of:
          0.015876798 = score(doc=4461,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.25915858 = fieldWeight in 4461, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4461)
        0.08343269 = weight(_text_:software in 4461) [ClassicSimilarity], result of:
          0.08343269 = score(doc=4461,freq=12.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.53682095 = fieldWeight in 4461, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4461)
        0.07305806 = product of:
          0.14611612 = sum of:
            0.14611612 = weight(_text_:packages in 4461) [ClassicSimilarity], result of:
              0.14611612 = score(doc=4461,freq=4.0), product of:
                0.2706874 = queryWeight, product of:
                  6.9093957 = idf(docFreq=119, maxDocs=44218)
                  0.03917671 = queryNorm
                0.53979653 = fieldWeight in 4461, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.9093957 = idf(docFreq=119, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4461)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    Software programs are among the most important tools in data-driven research. The popularity of well-known packages and the correspondingly large numbers of citations they receive bear testimony to the contribution of scientific software to academic research. Yet software is not generally recognized as an academic outcome. In this study, a usage-based model is proposed with varied indicators including citations, mentions, and downloads to measure the importance of scientific software. We performed an investigation on a sample of international bioinformatics research articles, and on a sample from the Chinese community. Our analysis shows that scientists in the field of bioinformatics rely heavily on scientific software: the major differences between the international community and the Chinese example are how scientific packages are mentioned in publications and the time gap between the introduction of a package and its use. Biologists publishing in international journals tend to apply the latest tools earlier; Chinese scientists publishing in Chinese tend to follow later. Further, journals with higher impact factors tend to publish articles applying the latest tools earlier.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.9, S.1122-1133
  10. Kaminski, R.; Schaub, T.; Wanko, P.: A tutorial on hybrid answer set solving with clingo (2017) 0.06
    0.056811333 = product of:
      0.1278255 = sum of:
        0.059322387 = weight(_text_:applications in 3937) [ClassicSimilarity], result of:
          0.059322387 = score(doc=3937,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34394607 = fieldWeight in 3937, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3937)
        0.0140020205 = weight(_text_:of in 3937) [ClassicSimilarity], result of:
          0.0140020205 = score(doc=3937,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.22855641 = fieldWeight in 3937, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3937)
        0.020439833 = weight(_text_:systems in 3937) [ClassicSimilarity], result of:
          0.020439833 = score(doc=3937,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 3937, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3937)
        0.034061253 = weight(_text_:software in 3937) [ClassicSimilarity], result of:
          0.034061253 = score(doc=3937,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.21915624 = fieldWeight in 3937, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3937)
      0.44444445 = coord(4/9)
    
    Abstract
    Answer Set Programming (ASP) has become an established paradigm for Knowledge Representation and Reasoning, in particular when it comes to solving knowledge-intense combinatorial (optimization) problems. ASP's unique pairing of a simple yet rich modeling language with highly performant solving technology has led to an increasing interest in ASP in academia as well as industry. To further boost this development and make ASP fit for real-world applications, it is indispensable to equip it with means for an easy integration into software environments and for adding complementary forms of reasoning. In this tutorial, we describe how both issues are addressed in the ASP system clingo. At first, we outline features of clingo's application programming interface (API) that are essential for multi-shot ASP solving, a technique for dealing with continuously changing logic programs. This is illustrated by realizing two exemplary reasoning modes, namely branch-and-bound-based optimization and incremental ASP solving. We then switch to the design of the API for integrating complementary forms of reasoning and detail this in an extensive case study dealing with the integration of difference constraints. We show how the syntax of these constraints is added to the modeling language and seamlessly merged into the grounding process. We then develop in detail a corresponding theory propagator for difference constraints and present how it is integrated into clingo's solving process.
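    For orientation, a minimal, hedged sketch of plain single-shot solving with clingo's Python API; the multi-shot solving and theory propagators discussed in the tutorial go beyond this fragment, and the example program is invented:
```python
from clingo import Control

# A small logic program: choose exactly one colour per node, reject
# neighbouring nodes with the same colour.
program = """
node(1..3).  edge(1,2).  edge(2,3).
colour(red;green).
1 { assign(N,C) : colour(C) } 1 :- node(N).
:- edge(X,Y), assign(X,C), assign(Y,C).
"""

ctl = Control(["0"])                 # "0" = enumerate all answer sets
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print("Answer set:", model))
```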
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
  11. Colvin, E.; Kraft, D.H.: Fuzzy retrieval for software reuse (2016) 0.05
    0.053996317 = product of:
      0.16198894 = sum of:
        0.016567415 = weight(_text_:of in 3119) [ClassicSimilarity], result of:
          0.016567415 = score(doc=3119,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2704316 = fieldWeight in 3119, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3119)
        0.028615767 = weight(_text_:systems in 3119) [ClassicSimilarity], result of:
          0.028615767 = score(doc=3119,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.23767869 = fieldWeight in 3119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3119)
        0.11680577 = weight(_text_:software in 3119) [ClassicSimilarity], result of:
          0.11680577 = score(doc=3119,freq=12.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.75154936 = fieldWeight in 3119, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3119)
      0.33333334 = coord(3/9)
    
    Abstract
    Finding software for reuse is a problem that programmers face. Reusing code that has been proven to work can increase a programmer's productivity, benefit corporate productivity, and increase the stability of software programs. This paper shows that fuzzy retrieval offers improved retrieval performance over typical Boolean retrieval. Various methods of implementing fuzzy information retrieval and their use for software reuse are examined. A deeper explanation of the fundamentals of designing a fuzzy information retrieval system for software reuse is presented. Future research options and necessary data storage systems are explored.
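    To make the contrast with Boolean retrieval concrete, here is a toy sketch (not the paper's model; the index and term weights are invented): components carry graded term weights in [0, 1], and queries are evaluated with min/max operators so that results are ranked by degree of match instead of being filtered by strict set operations.
```python
# Term weights per software component (hypothetical index for code reuse).
index = {
    "quicksort.c":  {"sort": 0.9, "recursion": 0.8},
    "bubblesort.c": {"sort": 0.7},
    "parser.c":     {"tree": 0.6, "recursion": 0.5},
}

def fuzzy_and(weights):  # conjunction = minimum membership
    return min(weights) if weights else 0.0

def fuzzy_or(weights):   # disjunction = maximum membership
    return max(weights) if weights else 0.0

def retrieve(query_and=(), query_or=()):
    scores = {}
    for doc, terms in index.items():
        and_part = fuzzy_and([terms.get(t, 0.0) for t in query_and]) if query_and else 1.0
        or_part = fuzzy_or([terms.get(t, 0.0) for t in query_or]) if query_or else 1.0
        scores[doc] = min(and_part, or_part)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# "sort OR recursion": Boolean retrieval returns an unordered set of all three
# components; the fuzzy scores additionally rank them by degree of match.
print(retrieve(query_or=("sort", "recursion")))
```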
    Form
    Software
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.10, S.2454-2463
  12. Choi, N.: Information systems attachment : an empirical exploration of its antecedents and its impact on community participation intention (2013) 0.05
    0.052851923 = product of:
      0.118916824 = sum of:
        0.041947264 = weight(_text_:applications in 1114) [ClassicSimilarity], result of:
          0.041947264 = score(doc=1114,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2432066 = fieldWeight in 1114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1114)
        0.0140020205 = weight(_text_:of in 1114) [ClassicSimilarity], result of:
          0.0140020205 = score(doc=1114,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.22855641 = fieldWeight in 1114, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1114)
        0.02890629 = weight(_text_:systems in 1114) [ClassicSimilarity], result of:
          0.02890629 = score(doc=1114,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.24009174 = fieldWeight in 1114, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1114)
        0.034061253 = weight(_text_:software in 1114) [ClassicSimilarity], result of:
          0.034061253 = score(doc=1114,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.21915624 = fieldWeight in 1114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1114)
      0.44444445 = coord(4/9)
    
    Abstract
    With the increasing use of information systems (IS) in our everyday lives, people may feel an attachment to their software applications beyond simply perceiving them as a tool for enhancing task performance. However, attachment is still a largely unexplored concept in both IS research and practice. Drawing from the literature on attachment in consumer behavior research and auxiliary theories in IS use and community participation research, this study theoretically identifies and empirically explores the concept of attachment and its antecedents (i.e., relative visual aesthetics, personalization, relative performance) and outcome (i.e., community participation intention) in the IS context. Using web browsers as the target IS, an online survey was conducted. Results show that relative expressive visual aesthetics is the strongest antecedent of IS attachment, and that personalization is the second strongest antecedent of IS attachment, followed by relative performance. Furthermore, this study reveals that IS attachment has a strong positive impact on community participation intention. This study contributes theoretically and empirically to the body of IS use research and has managerial implications, suggesting that although superior performance is a necessary condition for attachment formation, improving users' experience through expressive visual aesthetics and personalization is critical to build strong attachment relationships with users.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2354-2365
  13. Martin, K.; Shilton, K.: Why experience matters to privacy : how context-based experience moderates consumer privacy expectations for mobile applications (2016) 0.05
    0.05253256 = product of:
      0.11819826 = sum of:
        0.0726548 = weight(_text_:applications in 3045) [ClassicSimilarity], result of:
          0.0726548 = score(doc=3045,freq=6.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.42124623 = fieldWeight in 3045, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3045)
        0.011833867 = weight(_text_:of in 3045) [ClassicSimilarity], result of:
          0.011833867 = score(doc=3045,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.19316542 = fieldWeight in 3045, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3045)
        0.020439833 = weight(_text_:systems in 3045) [ClassicSimilarity], result of:
          0.020439833 = score(doc=3045,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 3045, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3045)
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 3045) [ClassicSimilarity], result of:
              0.026539518 = score(doc=3045,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 3045, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3045)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Abstract
    Two dominant theoretical models for privacy (individual privacy preferences and context-dependent definitions of privacy) are often studied separately in information systems research. This paper unites these theories by examining how individual privacy preferences impact context-dependent privacy expectations. The paper theorizes that experience provides a bridge between individuals' general privacy attitudes and nuanced contextual factors. This leads to the hypothesis that, when making judgments about privacy expectations, individuals with less experience in a context rely more on individual preferences such as their generalized privacy beliefs, whereas individuals with more experience in a context are influenced by contextual factors and norms. To test this hypothesis, 1,925 American users of mobile applications made judgments about whether varied real-world scenarios involving data collection and use met their privacy expectations. Analysis of the data suggests that experience using mobile applications did moderate the effect of individual preferences and contextual factors on privacy judgments. Experience changed the equation respondents used to assess whether data collection and use scenarios met their privacy expectations. Discovering the bridge between 2 dominant theoretical models enables future privacy research to consider both personal and contextual variables by taking differences in experience into account.
    Date
    20. 7.2016 18:22:57
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.8, S.1871-1882
  14. O'Brien, H.L.; Toms, E.G.: The development and evaluation of a survey to measure user engagement (2010) 0.05
    0.05134662 = product of:
      0.11552989 = sum of:
        0.041947264 = weight(_text_:applications in 3312) [ClassicSimilarity], result of:
          0.041947264 = score(doc=3312,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2432066 = fieldWeight in 3312, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3312)
        0.019081537 = weight(_text_:of in 3312) [ClassicSimilarity], result of:
          0.019081537 = score(doc=3312,freq=26.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31146988 = fieldWeight in 3312, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3312)
        0.020439833 = weight(_text_:systems in 3312) [ClassicSimilarity], result of:
          0.020439833 = score(doc=3312,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 3312, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3312)
        0.034061253 = weight(_text_:software in 3312) [ClassicSimilarity], result of:
          0.034061253 = score(doc=3312,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.21915624 = fieldWeight in 3312, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3312)
      0.44444445 = coord(4/9)
    
    Abstract
    Facilitating engaging user experiences is essential in the design of interactive systems. To accomplish this, it is necessary to understand the composition of this construct and how to evaluate it. Building on previous work that posited a theory of engagement and identified a core set of attributes that operationalized this construct, we constructed and evaluated a multidimensional scale to measure user engagement. In this paper we describe the development of the scale, as well as two large-scale studies (N=440 and N=802) that were undertaken to assess its reliability and validity in online shopping environments. In the first we used Reliability Analysis and Exploratory Factor Analysis to identify six attributes of engagement: Perceived Usability, Aesthetics, Focused Attention, Felt Involvement, Novelty, and Endurability. In the second we tested the validity of and relationships among those attributes using Structural Equation Modeling. The result of this research is a multidimensional scale that may be used to test the engagement of software applications. In addition, findings indicate that attributes of engagement are highly intertwined, a complex interplay of user-system interaction variables. Notably, Perceived Usability played a mediating role in the relationship between Endurability and Novelty, Aesthetics, Felt Involvement, and Focused Attention.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.1, S.50-69
  15. Rehurek, R.; Sojka, P.: Software framework for topic modelling with large corpora (2010) 0.05
    0.046727955 = product of:
      0.14018387 = sum of:
        0.050336715 = weight(_text_:applications in 1058) [ClassicSimilarity], result of:
          0.050336715 = score(doc=1058,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 1058, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=1058)
        0.019052157 = weight(_text_:of in 1058) [ClassicSimilarity], result of:
          0.019052157 = score(doc=1058,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3109903 = fieldWeight in 1058, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1058)
        0.070794985 = weight(_text_:software in 1058) [ClassicSimilarity], result of:
          0.070794985 = score(doc=1058,freq=6.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.4555077 = fieldWeight in 1058, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=1058)
      0.33333334 = coord(3/9)
    
    Abstract
    Large corpora are ubiquitous in today's world and memory quickly becomes the limiting factor in practical applications of the Vector Space Model (VSM). In this paper, we identify a gap in existing implementations of many of the popular algorithms, which is their scalability and ease of use. We describe a Natural Language Processing software framework which is based on the idea of document streaming, i.e. processing corpora document after document, in a memory independent fashion. Within this framework, we implement several popular algorithms for topical inference, including Latent Semantic Analysis and Latent Dirichlet Allocation, in a way that makes them completely independent of the training corpus size. Particular emphasis is placed on straightforward and intuitive framework design, so that modifications and extensions of the methods and/or their application by interested practitioners are effortless. We demonstrate the usefulness of our approach on a real-world scenario of computing document similarities within an existing digital library DML-CZ.
    Content
    For the software, cf.: http://radimrehurek.com/gensim/index.html. For a demo, cf.: http://dml.cz/handle/10338.dmlcz/100785/SimilarArticles.
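    A short usage sketch of the framework referenced above (gensim), following its classic introductory pattern rather than anything specific to this paper; the tiny corpus is invented:
```python
from gensim import corpora, models, similarities

# Tiny stand-in corpus: one document per entry, already tokenized.
texts = [["faceted", "hierarchy", "concepts"],
         ["topic", "modelling", "corpora"],
         ["semantic", "similarity", "corpora", "concepts"]]

dictionary = corpora.Dictionary(texts)              # token <-> id mapping
corpus = [dictionary.doc2bow(t) for t in texts]     # streamed bag-of-words

# Latent Semantic Analysis with a low-dimensional topic space.
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
index = similarities.MatrixSimilarity(lsi[corpus])

query = dictionary.doc2bow(["semantic", "concepts"])
print(list(index[lsi[query]]))                      # similarity to each document
```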
  16. Rettinger, A.; Schumilin, A.; Thoma, S.; Ell, B.: Learning a cross-lingual semantic representation of relations expressed in text (2015) 0.05
    0.045119576 = product of:
      0.13535872 = sum of:
        0.08389453 = weight(_text_:applications in 2027) [ClassicSimilarity], result of:
          0.08389453 = score(doc=2027,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.4864132 = fieldWeight in 2027, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.078125 = fieldNorm(doc=2027)
        0.010584532 = weight(_text_:of in 2027) [ClassicSimilarity], result of:
          0.010584532 = score(doc=2027,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.17277241 = fieldWeight in 2027, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=2027)
        0.040879667 = weight(_text_:systems in 2027) [ClassicSimilarity], result of:
          0.040879667 = score(doc=2027,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.339541 = fieldWeight in 2027, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.078125 = fieldNorm(doc=2027)
      0.33333334 = coord(3/9)
    
    Series
    Information Systems and Applications, incl. Internet/Web, and HCI; vol. 9088
  17. Benson, A.C.: OntoPhoto and the role of ontology in organizing knowledge (2011) 0.04
    0.04481437 = product of:
      0.1344431 = sum of:
        0.07118686 = weight(_text_:applications in 4556) [ClassicSimilarity], result of:
          0.07118686 = score(doc=4556,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.41273528 = fieldWeight in 4556, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=4556)
        0.014200641 = weight(_text_:of in 4556) [ClassicSimilarity], result of:
          0.014200641 = score(doc=4556,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.23179851 = fieldWeight in 4556, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4556)
        0.0490556 = weight(_text_:systems in 4556) [ClassicSimilarity], result of:
          0.0490556 = score(doc=4556,freq=8.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.4074492 = fieldWeight in 4556, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=4556)
      0.33333334 = coord(3/9)
    
    Abstract
    This article is concerned with ontology and its applications in Knowledge Organization (KO) activities. Connections are drawn between efforts in artificial intelligence (AI) to capture the meaning of information and make it accessible to machines and the efforts made in libraries to use KO tools in machine-based record building and search and retrieval systems. The practices used in AI that are of interest here include ontology and ontology-based knowledge representation. In this article their applications in KO are directed towards a particularly problematic document type: the photograph. There are two arguments motivating this article. First, ontology-based KO systems that join AI techniques with library cataloging practices make it possible to utilize higher levels of expressivity when describing photographs. Second, KO systems for photographs that are capable of reasoning over concepts and relationships can potentially provide richer, more relevant search results than systems utilizing word-matching alone.
  18. Gómez-Pérez, A.; Corcho, O.: Ontology languages for the Semantic Web (2015) 0.04
    0.04371041 = product of:
      0.13113123 = sum of:
        0.08389453 = weight(_text_:applications in 3297) [ClassicSimilarity], result of:
          0.08389453 = score(doc=3297,freq=8.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.4864132 = fieldWeight in 3297, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3297)
        0.011833867 = weight(_text_:of in 3297) [ClassicSimilarity], result of:
          0.011833867 = score(doc=3297,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.19316542 = fieldWeight in 3297, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3297)
        0.03540283 = weight(_text_:systems in 3297) [ClassicSimilarity], result of:
          0.03540283 = score(doc=3297,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.29405114 = fieldWeight in 3297, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3297)
      0.33333334 = coord(3/9)
    
    Abstract
    Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms. They also support natural language generation, intelligent information integration, semantic-based access to the Internet, and information extraction from texts, in addition to being used in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way" [1]. This definition does not differ much from the one used for defining an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization" [2]. In fact, new ontology-based applications and knowledge architectures are being developed for this new Web. A common claim of all of these approaches is the need for languages to represent the semantic information that this Web requires, solving data exchange in this heterogeneous environment. Here, we do not decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs. The authors analyze the most representative ontology languages created for the Web and compare them using a common framework.
    Source
    IEEE intelligent systems 2002, Jan./Feb., S.54-60
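    To make the abstract's point about shared conceptualizations and exchange languages concrete, the sketch below writes one small, invented mini-vocabulary (not material from the article) out in two Web syntaxes, Turtle and RDF/XML, using rdflib for the serialization.

      from rdflib import Graph, Namespace, RDF, RDFS
      from rdflib.namespace import OWL

      EX = Namespace("http://example.org/vocab#")
      g = Graph()
      g.bind("ex", EX)

      # An explicit, machine-readable specification: two classes, a subclass
      # link, and an object property with a domain.
      g.add((EX.Publication, RDF.type, OWL.Class))
      g.add((EX.JournalArticle, RDF.type, OWL.Class))
      g.add((EX.JournalArticle, RDFS.subClassOf, EX.Publication))
      g.add((EX.hasAuthor, RDF.type, OWL.ObjectProperty))
      g.add((EX.hasAuthor, RDFS.domain, EX.Publication))

      # The same triples, exchanged in two syntaxes a Semantic Web agent can read.
      print(g.serialize(format="turtle"))
      print(g.serialize(format="xml"))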
  19. Sánchez, J.A.; Medina, M.A.; Starostenko, O.; Benitez, A.; Domínguez, E.L.: Organizing open archives via lightweight ontologies to facilitate the use of heterogeneous collections (2012) 0.04
    0.043704174 = product of:
      0.13111252 = sum of:
        0.07118686 = weight(_text_:applications in 301) [ClassicSimilarity], result of:
          0.07118686 = score(doc=301,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.41273528 = fieldWeight in 301, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=301)
        0.019052157 = weight(_text_:of in 301) [ClassicSimilarity], result of:
          0.019052157 = score(doc=301,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3109903 = fieldWeight in 301, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=301)
        0.040873505 = weight(_text_:software in 301) [ClassicSimilarity], result of:
          0.040873505 = score(doc=301,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.2629875 = fieldWeight in 301, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=301)
      0.33333334 = coord(3/9)
    
    Abstract
    Purpose - This paper seeks to focus on the problems of integrating information from open, distributed scholarly collections, and on the opportunities these collections represent for research communities in developing countries. The paper aims to introduce OntOAIr, a semi-automatic method for constructing lightweight ontologies of documents in repositories such as those provided by the Open Archives Initiative (OAI). Design/methodology/approach - OntOAIr uses simplified document representations, a clustering algorithm, and ontological engineering techniques. Findings - The paper presents experimental results of the potential positive impact of ontologies and specifically of OntOAIr on the use of collections provided by OAI. Research limitations/implications - By applying OntOAIr, scholars who frequently spend many hours organizing OAI information spaces will obtain support that will allow them to speed up the entire research cycle and, it is expected, to participate more fully in global research communities. Originality/value - The proposed method allows human and software agents to organize and retrieve groups of documents from multiple collections. Applications of OntOAIr include enhanced document retrieval. In this paper, the authors focus particularly on document retrieval applications.
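    The clustering step described above can be pictured with the sketch below; it is only an analogy under stated assumptions (scikit-learn, invented record titles), not the authors' OntOAIr implementation. Each cluster of harvested titles becomes a candidate concept for a lightweight ontology that a curator would then name and relate.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.cluster import KMeans

      titles = [
          "Ontology learning from text corpora",
          "Lightweight ontologies for digital repositories",
          "Harvesting metadata with OAI-PMH",
          "Incremental harvesting of open archives",
          "Clustering scholarly documents by topic",
          "Topic clustering for document retrieval",
      ]

      # Simplified document representation: TF-IDF vectors over the titles.
      vectors = TfidfVectorizer(stop_words="english").fit_transform(titles)

      # Clustering step: each cluster is a candidate concept for the
      # lightweight ontology.
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

      for cluster, title in sorted(zip(labels, titles)):
          print(cluster, title)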
  20. Buchel, O.; Hill, L.L.: Treatment of georeferencing in knowledge organization systems : North American contributions to integrated georeferencing (2010) 0.04
    0.04178024 = product of:
      0.12534072 = sum of:
        0.06711562 = weight(_text_:applications in 3305) [ClassicSimilarity], result of:
          0.06711562 = score(doc=3305,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.38913056 = fieldWeight in 3305, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=3305)
        0.011975031 = weight(_text_:of in 3305) [ClassicSimilarity], result of:
          0.011975031 = score(doc=3305,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.19546966 = fieldWeight in 3305, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3305)
        0.046250064 = weight(_text_:systems in 3305) [ClassicSimilarity], result of:
          0.046250064 = score(doc=3305,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.38414678 = fieldWeight in 3305, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=3305)
      0.33333334 = coord(3/9)
    
    Abstract
    Pioneering research projects in North America that have advanced the integration of formal mathematical georeferencing and informal placename georeferencing in knowledge organization systems are described and related to visualization applications.
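    The integration the abstract refers to can be pictured with the toy sketch below (invented gazetteer entries and records, not material from the article): an informal placename index is joined to formal coordinate footprints through a gazetteer, so a coordinate query can still reach records that were only indexed by placename.

      GAZETTEER = {
          # name: (min_lon, min_lat, max_lon, max_lat), approximate boxes
          "Lake Ontario": (-79.9, 43.2, -76.0, 44.3),
          "Vancouver Island": (-128.5, 48.3, -123.3, 50.9),
      }

      RECORDS = [
          {"id": "map-042", "placenames": ["Lake Ontario"]},
          {"id": "photo-117", "placenames": ["Vancouver Island"]},
      ]

      def contains(bbox, lon, lat):
          """True if the point (lon, lat) lies inside the bounding box."""
          min_lon, min_lat, max_lon, max_lat = bbox
          return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

      def records_near(lon, lat):
          """Records whose placename footprints contain the query point."""
          return [r["id"] for r in RECORDS
                  if any(contains(GAZETTEER[name], lon, lat)
                         for name in r["placenames"] if name in GAZETTEER)]

      print(records_near(-79.0, 43.6))   # -> ['map-042']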

Languages

  • e 4005
  • d 309
  • i 6
  • f 2
  • a 1
  • el 1
  • es 1
  • sp 1

Types

  • el 256
  • b 5
  • s 1
  • x 1

Classifications