Search (713 results, page 1 of 36)

  • Filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.59
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
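The relevance figure attached to each hit (0.59, 0.47, ...) is a Lucene ClassicSimilarity score: per-clause tf-idf weights summed and scaled by coordination factors. A minimal sketch, assuming the term statistics the engine reports for the first hit (idf 8.478011, queryNorm 0.038938753, fieldNorm 0.078125, four of seven query clauses matching):

```python
import math

def classic_similarity_clause(freq, idf, query_norm, field_norm):
    """One weight(...) clause: queryWeight * fieldWeight,
    where queryWeight = idf * queryNorm and
    fieldWeight = sqrt(freq) * idf * fieldNorm."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Statistics as reported by the engine for the first hit (doc 1826)
idf, query_norm, field_norm = 8.478011, 0.038938753, 0.078125
clause = classic_similarity_clause(2.0, idf, query_norm, field_norm)

# One clause is itself down-weighted by coord(1/3), three match fully;
# the sum is then scaled by coord(4/7) for 4 of 7 query clauses matching.
score = (clause * (1 / 3) + 3 * clause) * (4 / 7)
print(round(score, 3))  # 0.589
```

The final product reproduces the 0.59 shown for the Kleineberg entry.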
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.47
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.29
    
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls
  4. Lewandowski, D.: Wie "Next Generation Search Systems" die Suche auf eine neue Ebene heben und die Informationswelt verändern (2017) 0.04
    
    Footnote
    Relates to the book: White, R.: Interactions with search systems. New York: Cambridge University Press, 2016.
  5. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.04
    
    Abstract
    Human brain size nearly quadrupled in the six million years since Homo last shared a common ancestor with chimpanzees, but human brains are thought to have decreased in volume since the end of the last Ice Age. The timing of and reason for this decrease are enigmatic. Here we use change-point analysis to estimate the timing of changes in the rate of hominin brain evolution. We find that hominin brains experienced positive rate changes at 2.1 and 1.5 million years ago, coincident with the early evolution of Homo and technological innovations evident in the archeological record. But we also find that human brain size reduction was surprisingly recent, occurring in the last 3,000 years. Our dating does not support hypotheses concerning brain size reduction as a by-product of body size reduction, a result of a shift to an agricultural diet, or a consequence of self-domestication. We suggest our analysis supports the hypothesis that the recent decrease in brain size may instead result from the externalization of knowledge and the advantages of group-level decision-making, due in part to the advent of social systems of distributed cognition and the storage and sharing of information.
    Humans live in social groups in which multiple brains contribute to the emergence of collective intelligence. Although difficult to study in the deep history of Homo, the impacts of group size, social organization, collective intelligence and other potential selective forces on brain evolution can be elucidated using ants as models. The remarkable ecological diversity of ants and their species richness encompasses forms convergent in aspects of human sociality, including large group size, agrarian life histories, division of labor, and collective cognition. Ants provide a wide range of social systems to generate and test hypotheses concerning brain size enlargement or reduction and aid in interpreting patterns of brain evolution identified in humans. Although humans and ants represent very different routes in social and cognitive evolution, the insights ants offer can broadly inform us of the selective forces that influence brain size.
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
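The change-point analysis the abstract describes amounts to finding the instant at which a series shifts statistical regime. A toy sketch of the idea, not the authors' actual method: choose the split point that minimizes total squared error around piecewise means.

```python
# Toy single change-point detector: try every split of the series and
# keep the one whose two segments deviate least from their own means.
def change_point(series):
    def sse(xs):
        if not xs:
            return 0.0
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)
    return min(range(1, len(series)),
               key=lambda k: sse(series[:k]) + sse(series[k:]))

# A toy series with a level shift starting at index 5
data = [1.0, 1.1, 0.9, 1.0, 1.05, 3.0, 3.1, 2.9, 3.0]
print(change_point(data))  # 5
```

Real analyses of the kind cited fit rate changes and assess uncertainty in the estimated breakpoints; this sketch only shows the core search over candidate splits.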
  6. Halpin, H.; Hayes, P.J.: When owl:sameAs isn't the same : an analysis of identity links on the Semantic Web (2010) 0.03
    
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in 'inter-linking' data-sets. However, there is a lurking suspicion within the Linked Data community that this use of owl:sameAs may be somehow incorrect, in particular with regard to its interactions with inference. In fact, owl:sameAs can be considered just one type of 'identity link', a link that declares two items to be identical in some fashion. After reviewing the definitions and history of the problem of identity in philosophy and knowledge representation, we outline four alternative readings of owl:sameAs, showing with examples how it is being (ab)used on the Web of data. Then we present possible solutions to this problem by introducing alternative identity links that rely on named graphs.
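The interaction with inference that worries the authors follows from owl:sameAs being symmetric and transitive: a reasoner collapses linked resources into equivalence classes, so one careless link contaminates a whole class. A union-find sketch of that collapse (the URIs and the erroneous link are invented for illustration):

```python
# Union-find over sameAs links: the resulting sets are the equivalence
# classes a reasoner would merge.
def sameas_classes(links):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in links:
        parent[find(a)] = find(b)
    classes = {}
    for x in parent:
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

links = [("dbpedia:Berlin", "wikidata:Q64"),
         ("wikidata:Q64", "geo:Berlin"),
         # one erroneous link drags the *state* of Berlin into the class
         ("geo:Berlin", "dbpedia:Berlin_(state)")]
print(sameas_classes(links))  # one equivalence class of all four URIs
```

The weaker, referentially opaque links the paper proposes avoid exactly this merge.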
  7. Halpin, H.; Hayes, P.J.; McCusker, J.P.; McGuinness, D.L.; Thompson, H.S.: When owl:sameAs isn't the same : an analysis of identity in linked data (2010) 0.03
    
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in interlinking data-sets. There is, however, ongoing discussion about its use, and potential misuse, particularly with regard to interactions with inference. In fact, owl:sameAs can be viewed as encoding only one point on a scale of similarity, one that is often too strong for many of its current uses. We describe how referentially opaque contexts that do not allow inference exist, and then outline some varieties of referentially-opaque alternatives to owl:sameAs. Finally, we report on an empirical experiment over randomly selected owl:sameAs statements from the Web of data. This theoretical apparatus and experiment shed light upon how owl:sameAs is being used (and misused) on the Web of data.
  8. Panzer, M.: Designing identifiers for the DDC (2007) 0.03
    
    Content
    "Although the Dewey Decimal Classification is currently available on the web to subscribers as WebDewey and Abridged WebDewey in the OCLC Connexion service and in an XML version to licensees, OCLC does not provide any "web services" based on the DDC. By web services, we mean presentation of the DDC to other machines (not humans) for uses such as searching, browsing, classifying, mapping, harvesting, and alerting. In order to build web-accessible services based on the DDC, several elements have to be considered. One of these elements is the design of an appropriate Uniform Resource Identifier (URI) structure for Dewey. The design goals of mapping the entity model of the DDC into an identifier space can be summarized as follows:
    * Common locator for Dewey concepts and associated resources for use in web services and web applications
    * Use-case-driven, but not directly related to and outlasting a specific use case (persistency)
    * Retraceable path to a concept rather than an abstract identification, reusing a means of identification that is already present in the DDC and available in existing metadata.
    We have been working closely with our colleagues in the OCLC Office of Research (especially Andy Houghton as well as Eric Childress, Diane Vizine-Goetz, and Stu Weibel) on a preliminary identifier syntax.
    The basic identifier format we are currently exploring is http://dewey.info/{aspect}/{object}/{locale}/{type}/{version}/{resource}, where
    * {aspect} is the aspect associated with an {object}; the current value set of aspect contains "concept", "scheme", and "index", and additional ones are under exploration
    * {object} is a type of {aspect}
    * {locale} identifies a Dewey translation
    * {type} identifies a Dewey edition type and contains, at a minimum, the values "edn" for the full edition or "abr" for the abridged edition
    * {version} identifies a Dewey edition version
    * {resource} identifies a resource associated with an {object} in the context of {locale}, {type}, and {version}
    Some examples of identifiers for concepts follow:
    * <http://dewey.info/concept/338.4/en/edn/22/> retrieves or identifies the 338.4 concept in the English-language version of Edition 22.
    * <http://dewey.info/concept/338.4/de/edn/22/> retrieves or identifies the 338.4 concept in the German-language version of Edition 22.
    * <http://dewey.info/concept/333.7-333.9/> retrieves or identifies the 333.7-333.9 concept across all editions and language versions.
    * <http://dewey.info/concept/333.7-333.9/about.skos> retrieves a SKOS representation of the 333.7-333.9 concept (using the "resource" element).
    There are several open issues at this preliminary stage of development:
    * Use cases: URIs need to represent the range of statements or questions that could be submitted to a Dewey web service. Therefore, it seems that some general questions have to be answered first: What information does an agent have when coming to a Dewey web service? What kind of questions will such an agent ask?
    * Placement of the {locale} component: It is still an open question whether the {locale} component should be placed after the {version} component instead (<http://dewey.info/concept/338.4/edn/22/en>) to emphasize that the most important instantiation of a Dewey class is its edition, not its language version. From a services point of view, however, it could make more sense to keep the current arrangement, because users are more likely to come to the service with a present understanding of the language version they are seeking, without knowing the specifics of a certain edition in which they are trying to find topics.
    * Identification of other Dewey entities: The goal is to create a locator that does not answer all, but a lot of the questions that could be asked about the DDC. Which entities are missing but should be surfaced for services or user agents? How will those services or agents interact with them? Should some entities be rendered in a different way than presented? For example, (how) should the DDC Summaries be retrievable? Would it be necessary to make the DDC Manual accessible through this identifier structure?"
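The proposed template can be exercised with a small builder; the helper name and its defaults are assumptions for illustration, not an OCLC API:

```python
# Hypothetical builder for the dewey.info URI template
# http://dewey.info/{aspect}/{object}/{locale}/{type}/{version}/{resource}
# Trailing components may be omitted to address a concept across all
# editions and language versions.
def dewey_uri(aspect, obj, locale=None, type_=None, version=None,
              resource=None):
    parts = [aspect, obj]
    for part in (locale, type_, version):
        if part is None:
            break
        parts.append(part)
    path = "/".join(parts) + "/"
    if resource is not None:
        path += resource
    return "http://dewey.info/" + path

print(dewey_uri("concept", "338.4", "en", "edn", "22"))
# http://dewey.info/concept/338.4/en/edn/22/
print(dewey_uri("concept", "333.7-333.9", resource="about.skos"))
# http://dewey.info/concept/333.7-333.9/about.skos
```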
  9. Griffiths, T.L.; Steyvers, M.: ¬A probabilistic approach to semantic representation (2002) 0.03
    
    Abstract
    Semantic networks produced from human data have statistical properties that cannot be easily captured by spatial representations. We explore a probabilistic approach to semantic representation that explicitly models the probability with which words occur in different contexts, and hence captures the probabilistic relationships between words. We show that this representation has statistical properties consistent with the large-scale structure of semantic networks constructed by humans, and trace the origins of these properties.
  10. Altmann, E.G.; Cristadoro, G.; Esposti, M.D.: On the origin of long-range correlations in texts (2012) 0.03
    
    Abstract
    The complexity of human interactions with social and natural phenomena is mirrored in the way we describe our experiences through natural language. In order to retain and convey such high-dimensional information, the statistical properties of our linguistic output have to be highly correlated in time. One example is the robust, and still largely not understood, observation of correlations on arbitrarily long scales in literary texts. In this paper we explain how long-range correlations flow from highly structured linguistic levels down to the building blocks of a text (words, letters, etc.). By combining calculations and data analysis we show that correlations take the form of a bursty sequence of events once we approach the semantically relevant topics of the text. The mechanisms we identify are fairly general and can be equally applied to other hierarchical settings.
  11. Giunchiglia, F.; Zaihrayeu, I.; Farazi, F.: Converting classifications into OWL ontologies (2009) 0.03
    
    Abstract
    Classification schemes, such as the DMoZ web directory, provide a convenient and intuitive way for humans to access classified contents. While easy for humans to deal with, classification schemes remain hard for automated software agents to reason about. Among other things, this hardness is conditioned by the ambiguous nature of the natural language used to describe classification categories. In this paper we describe how classification schemes can be converted into OWL ontologies, thus enabling Semantic Web applications to reason on them. The proposed solution is based on a two-phase approach in which category names are first encoded in a concept language and then, together with the structure of the classification scheme, converted into an OWL ontology. We demonstrate the practical applicability of our approach by showing how the results of reasoning on these OWL ontologies can help improve the organization and use of web directories.
  12. Kelley, D.: Relevance feedback : getting to know your user (2008) 0.03
    
    Abstract
    Relevance feedback was one of the first interactive information retrieval techniques to help systems learn more about users' interests. Relevance feedback has been used in a variety of IR applications including query expansion, term disambiguation, user profiling, filtering and personalization. Initial relevance feedback techniques were explicit, in that they required the user's active participation. Many of today's relevance feedback techniques are implicit and based on users' information seeking behaviors, such as the pages they choose to visit, the frequency with which they visit pages, and the length of time pages are displayed. Although this type of information is available in great abundance, it is difficult to interpret without understanding more about the user's search goals and context. In this talk, I will address the following questions: What techniques are available to help us learn about users' interests and preferences? What types of evidence are available through a user's interactions with the system and with the information provided by the system? What do we need to know to accurately interpret and use this evidence? I will address the first two questions by presenting an overview of relevance feedback research in information retrieval. I will address the third question by presenting results of some of my own research that examined the online information seeking behaviors of users during a 14-week period and the context in which these behaviors took place.
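Explicit relevance feedback of the kind described above is classically implemented with the Rocchio algorithm, which moves the query vector toward judged-relevant documents and away from non-relevant ones. The sketch below is illustrative only; the weights and toy vectors are not taken from the talk.

```python
from collections import defaultdict

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio update: combine the original query with the centroid of
    relevant documents, minus the centroid of non-relevant documents.
    All vectors are sparse term -> weight dicts."""
    new_q = defaultdict(float)
    for term, w in query.items():
        new_q[term] += alpha * w
    for doc in relevant:
        for term, w in doc.items():
            new_q[term] += beta * w / len(relevant)
    for doc in nonrelevant:
        for term, w in doc.items():
            new_q[term] -= gamma * w / len(nonrelevant)
    # Negative weights are conventionally clipped to zero.
    return {t: w for t, w in new_q.items() if w > 0}

q = {"feedback": 1.0}
rel = [{"feedback": 0.5, "relevance": 0.8}]   # user marked relevant
non = [{"football": 0.9}]                     # user marked non-relevant
print(rocchio(q, rel, non))
```

The expanded query now also weights "relevance", which the user never typed, while "football" is suppressed; implicit feedback systems feed behavioral signals (clicks, dwell time) into the same kind of update.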
  13. Lehmann, K.: Unser Gehirn kartiert auch Beziehungen räumlich (2015) 0.03
    
    Footnote
    Cf. the original at http://www.sciencedirect.com/science/article/pii/S0896627315005243: Morais Tavares, R., A. Mendelsohn, Y. Grossman, C.H. Williams, M. Shapiro, Y. Trope and D. Schiller: "A Map for Social Navigation in the Human Brain". In: Neuron 87(2015) no.1, pp. 231-243. [Deciphering the neural mechanisms of social behavior has propelled the growth of social neuroscience. The exact computations of the social brain, however, remain elusive. Here we investigated how the human brain tracks ongoing changes in social relationships using functional neuroimaging. Participants were lead characters in a role-playing game in which they were to find a new home and a job through interactions with virtual cartoon characters. We found that a two-dimensional geometric model of social relationships, a "social space" framed by power and affiliation, predicted hippocampal activity. Moreover, participants who reported better social skills showed stronger covariance between hippocampal activity and "movement" through "social space." The results suggest that the hippocampus is crucial for social cognition, and imply that beyond framing physical locations, the hippocampus computes a more general, inclusive, abstract, and multidimensional cognitive map consistent with its role in episodic memory.]
  14. Cecchini, C.; Zanchetta, C.; Borin, P.; Xausa, G.: Computational design e sistemi di classificazione per la verifica predittiva delle prestazioni di sistema degli organismi edilizi : Computational design and classification systems to support predictive checking of performance of building systems (2017) 0.03
    
    Abstract
    The aim of controlling the economic, social and environmental aspects connected to the construction of a building demands a systematic approach, for which it is necessary to build test models aimed at a coordinated analysis of different, independent performance issues. BIM technology, based on interoperable information models, offers a significant operative basis for meeting this need. In most cases, information models concentrate on a collection of product-based digital models placed in a virtual space, rather than on the simulation of their relational behavior. This relational behavior, however, is the most important aspect of modelling, because it marks and characterizes the interactions that define the building as a system. This study presents the use of standard classification systems as tools for both the activation and validation of an integrated performance-based building process. By linking the categories and types of the information model to the codes of a technological and performance-based classification system, it is possible to coordinate functional units and their elements with the indications required by the AEC standards. In this way, progressing with an incremental logic, it is possible to manage the requirements of the whole building and to monitor the fulfilment of design objectives and specific normative guidelines.
  15. Petras, V.: ¬The identity of information science (2023) 0.03
    
    Abstract
    Purpose: This paper offers a definition of the core of information science, which encompasses most research in the field. The definition provides a unique identity for information science and positions it in the disciplinary universe.
    Design/methodology/approach: After motivating the objective, a definition of the core and an explanation of its key aspects are provided. The definition is related to other definitions of information science before controversial discourse aspects are briefly addressed: discipline vs. field, science vs. humanities, library vs. information science and application vs. theory. Interdisciplinarity as an often-assumed foundation of information science is challenged.
    Findings: Information science is concerned with how information is manifested across space and time. Information is manifested to facilitate and support the representation, access, documentation and preservation of ideas, activities, or practices, and to enable different types of interactions. Research and professional practice encompass the infrastructures - institutions and technology - and the phenomena and practices around manifested information across space and time as its core contribution to the scholarly landscape. Information science collaborates with other disciplines to work on complex information problems that need multi- and interdisciplinary approaches to address them.
    Originality/value: The paper argues that new information problems may change the core of the field, but throughout its existence, the discipline has remained quite stable in its central focus, yet proved to be highly adaptive to the tremendous changes in the forms, practices, institutions and technologies around and for manifested information.
  16. Hofmann-Apitius, M.: Direct use of information extraction from scientific text for modeling and simulation in the life sciences (2009) 0.02
    
    Abstract
    Scientific biomedical publications are a rich source of information about diseases and the molecules that play a role in the molecular etiology of a disease. With the development of automated methods for the identification of named biomedical entities in scientific text ("text mining"), we are now able to automatically screen millions of publications for genes, their relationships to other genes, their role in the development of a disease and their role as potential targets for therapeutic cures. In fact, modern advanced search engines are now able to extract various terms in scientific text that represent entities which can be directly used for modeling of diseases and simulation of disease-relevant molecular networks. In my presentation, I will demonstrate how scientific text can be analyzed using a combination of algorithmic approaches (dictionary- and rule-based as well as machine learning-based methods). I will furthermore demonstrate how scientific information extracted from text can be applied in disease modeling approaches that combine heterogeneous information types (protein-protein interactions, allelic variants of genes, clinical phenotype information) extracted from scientific publications. I will also show how the analysis of scientific text can be used to construct "knowledge descriptors" that allow a completely new way of predicting the activity of small pharmaceutical molecules. Taken together, the talk will hopefully give a sense of how far we really are from using text analytics for direct modeling and simulation in the life sciences.
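The dictionary-based approach mentioned in the talk can be sketched in a few lines. The mini-dictionary below is a hypothetical miniature; real biomedical NER systems use large curated lexicons (and rule- and machine-learning-based disambiguation on top) rather than a three-entry dict.

```python
import re

# Hypothetical mini-dictionary mapping lowercase synonyms to a
# normalized gene symbol. Real systems load thousands of entries.
gene_dict = {"tp53": "TP53", "p53": "TP53", "brca1": "BRCA1"}

def tag_genes(text):
    """Dictionary-based named-entity recognition: scan the text for
    known gene names (case-insensitive whole tokens) and return each
    normalized symbol with its character offset."""
    hits = []
    for m in re.finditer(r"[A-Za-z0-9]+", text):
        symbol = gene_dict.get(m.group(0).lower())
        if symbol:
            hits.append((symbol, m.start()))
    return hits

print(tag_genes("Mutations in p53 and BRCA1 are disease-relevant."))
```

Normalizing "p53" and "TP53" to one symbol is what lets downstream disease models aggregate mentions of the same gene across millions of papers.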
  17. New applications of knowledge organization systems (2004) 0.02
    
    Abstract
    Knowledge Organization Systems/Services (KOS), such as classifications, gazetteers, lexical databases, ontologies, taxonomies and thesauri, model the underlying semantic structure of a domain. They can support subject indexing and facilitate resource discovery and retrieval, whether by humans or by machines. New networked KOS services and applications are emerging and we are reaching the stage where we can prepare the work for future exploitation of common representations and protocols for distributed use. A number of technologies could be combined to yield new solutions. The papers published here are concerned with different types of KOS, discuss various standards issues and span the information lifecycle.
  18. Harnett, K.: Machine learning confronts the elephant in the room : a visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take (2018) 0.02
    
    Abstract
    In a new study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease. "It's a clever and important study that reminds us that 'deep learning' isn't really that deep," said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work. The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we'll want their visual processing to be at least as good as the human eyes they're replacing. It won't be easy. The new work accentuates the sophistication of human vision - and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene - an image of an elephant. The elephant's mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen. Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance.
  19. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020) 0.02
    
    Abstract
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
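The few-shot setting the abstract describes amounts to packing K solved demonstrations into the prompt text, with no gradient updates. A minimal sketch of that prompt construction follows; the layout (the "Input:"/"Output:" labels) is illustrative, not GPT-3's exact format.

```python
def few_shot_prompt(task, examples, query):
    """Build a few-shot prompt: a task description, K solved
    demonstrations, then the new input left for the model to complete.
    Learning happens purely via text interaction, not fine-tuning."""
    lines = [task, ""]
    for x, y in examples:
        lines.append(f"Input: {x}")
        lines.append(f"Output: {y}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Unscramble the word.",
    [("tac", "cat"), ("odg", "dog")],   # K = 2 demonstrations
    "drib",
)
print(prompt)
```

Zero-shot is the same construction with an empty demonstration list; the paper's finding is that performance on such tasks improves sharply with model scale.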
  20. Sy, M.-F.; Ranwez, S.; Montmain, J.; Ragnault, A.; Crampes, M.; Ranwez, V.: User centered and ontology based information retrieval system for life sciences (2012) 0.02
    
    Abstract
    Background: Because of the increasing number of electronic resources, designing efficient tools to retrieve and exploit them is a major challenge. Some improvements have been offered by Semantic Web technologies and applications based on domain ontologies. In life science, for instance, the Gene Ontology is widely exploited in genomic applications, and the Medical Subject Headings is the basis of the biomedical publication indexing and information retrieval process proposed by PubMed. However, current search engines suffer from two main drawbacks: there is limited user interaction with the list of retrieved resources, and no explanation of their adequacy to the query is provided. Users may thus be confused by the selection and have no idea how to adapt their queries so that the results match their expectations.
    Results: This paper describes an information retrieval system that relies on a domain ontology to widen the set of relevant documents that is retrieved, and that uses a graphical rendering of query results to favor user interaction. Semantic proximities between ontology concepts and aggregating models are used to assess document adequacy with respect to a query. The selection of documents is displayed in a semantic map that provides graphical indications of the extent to which they match the user's query; this man/machine interface favors a more interactive and iterative exploration of the data corpus by facilitating the weighting of query concepts and visual explanation. We illustrate the benefit of using this information retrieval system in two case studies, one of which aims at collecting human genes related to transcription factors involved in the hemopoiesis pathway.
    Conclusions: The ontology-based information retrieval system described in this paper (OBIRS) is freely available at: http://www.ontotoolkit.mines-ales.fr/ObirsClient/. This environment is a first step towards a user-centred application in which the system highlights relevant information to provide decision support.
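The idea of combining concept proximities with weighted query concepts can be sketched on a toy is-a hierarchy. Both the path-based proximity measure and the weighted-sum aggregation below are simplified stand-ins for OBIRS's actual semantic proximities and aggregating models, and the hierarchy itself is invented for illustration.

```python
# Toy is-a hierarchy: concept -> parent (the root has no entry).
parent = {
    "hemopoiesis": "physiology",
    "transcription_factor": "protein",
    "protein": "molecule",
    "physiology": "biology",
    "molecule": "biology",
}

def ancestors(c):
    """Chain from a concept up to the root, including the concept."""
    chain = [c]
    while c in parent:
        c = parent[c]
        chain.append(c)
    return chain

def proximity(a, b):
    """Path-based proximity: 1 / (1 + edge distance between a and b
    through their closest common ancestor)."""
    ca, cb = ancestors(a), ancestors(b)
    for i, c in enumerate(ca):
        if c in cb:
            return 1.0 / (1.0 + i + cb.index(c))
    return 0.0

def score(query_weights, doc_concepts):
    """Document adequacy: for each weighted query concept, take its
    best proximity to any concept annotating the document, then sum."""
    return sum(w * max(proximity(q, d) for d in doc_concepts)
               for q, w in query_weights.items())

q = {"transcription_factor": 0.7, "hemopoiesis": 0.3}   # weighted query
doc = ["protein", "physiology"]                          # doc annotations
print(round(score(q, doc), 3))
```

Per-concept contributions like these are what a semantic map can plot, making explicit to the user why a document was judged adequate and which query weights to adjust.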
