Search (9 results, page 1 of 1)

  • theme_ss:"Computerlinguistik"
  • theme_ss:"Suchmaschinen"
  1. Hane, P.J.: Beyond keyword searching : Oingo and Simpli.com introduce meaning-based searching (2000) 0.01
    0.008234787 = product of:
      0.020586967 = sum of:
        0.009535614 = weight(_text_:a in 6301) [ClassicSimilarity], result of:
          0.009535614 = score(doc=6301,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 6301, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=6301)
        0.011051352 = product of:
          0.022102704 = sum of:
            0.022102704 = weight(_text_:information in 6301) [ClassicSimilarity], result of:
              0.022102704 = score(doc=6301,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27153665 = fieldWeight in 6301, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6301)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Information today. 17(2000) no.1, S.57
    Type
    a
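
     A note on the relevance figures: the indented breakdown above each score is Lucene's ClassicSimilarity "explain" output. Every weight(_text_:term) node is queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm and tf = sqrt(termFreq); coord(n/m) then scales the result by the fraction of query clauses that matched. The following short Python sketch (added for illustration, not part of the original result page) recomputes record 1's score from the numbers shown:

     ```python
     import math

     def term_weight(freq, idf, query_norm, field_norm):
         """One weight(_text_:term) node of a ClassicSimilarity explain tree:
         score = queryWeight * fieldWeight, with queryWeight = idf * queryNorm,
         fieldWeight = tf * idf * fieldNorm and tf = sqrt(freq)."""
         return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

     # Numbers copied from the explain output for doc 6301 (record 1 above).
     w_a    = term_weight(freq=2.0, idf=1.153047,  query_norm=0.046368346, field_norm=0.109375)
     w_info = term_weight(freq=2.0, idf=1.7554779, query_norm=0.046368346, field_norm=0.109375)

     # The "information" clause is halved by its inner coord(1/2); the outer
     # coord(2/5) reflects that 2 of 5 query clauses matched this document.
     score = (w_a + 0.5 * w_info) * (2 / 5)
     print(f"{score:.9f}")  # approx. 0.008234787, the total shown above
     ```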
  2. Nait-Baha, L.; Jackiewicz, A.; Djioua, B.; Laublet, P.: Query reformulation for information retrieval on the Web using the point of view methodology : preliminary results (2001) 0.01
    0.005948606 = product of:
      0.014871514 = sum of:
        0.008173384 = weight(_text_:a in 249) [ClassicSimilarity], result of:
          0.008173384 = score(doc=249,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 249, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=249)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 249) [ClassicSimilarity], result of:
              0.013396261 = score(doc=249,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 249, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=249)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     The work we present is devoted to information collected on the WWW. By collected we mean the whole process of retrieving, extracting and presenting results to the user. This research is part of the RAP (Research, Analyze, Propose) project, in which we propose to combine two methods: (i) query reformulation using linguistic markers according to a given point of view; and (ii) text semantic analysis by means of contextual exploration results (Descles, 1991). The general project architecture, describing the interactions between the users, the RAP system and the WWW search engines, is presented in Nait-Baha et al. (1998). In this paper we focus on showing how we use linguistic markers to reformulate queries according to a given point of view.
    Type
    a
  3. Radev, D.; Fan, W.; Qu, H.; Wu, H.; Grewal, A.: Probabilistic question answering on the Web (2005) 0.01
    0.005898641 = product of:
      0.014746603 = sum of:
        0.0100103095 = weight(_text_:a in 3455) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=3455,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 3455, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3455)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 3455) [ClassicSimilarity], result of:
              0.009472587 = score(doc=3455,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 3455, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3455)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Web-based search engines such as Google and NorthernLight return documents that are relevant to a user query, not answers to user questions. We have developed an architecture that augments existing search engines so that they support natural language question answering. The process entails five steps: query modulation, document retrieval, passage extraction, phrase extraction, and answer ranking. In this article, we describe some probabilistic approaches to the last three of these stages. We show how our techniques apply to a number of existing search engines, and we also present results contrasting three different methods for question answering. Our algorithm, probabilistic phrase reranking (PPR), uses proximity and question type features and achieves a total reciprocal document rank of .20 on the TREC8 corpus. Our techniques have been implemented as a Web-accessible system, called NSIR.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.6, S.571-583
    Type
    a
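
     The ".20 total reciprocal document rank" quoted in the abstract is a rank-based effectiveness measure. Below is a minimal sketch of such a reciprocal-rank style score, under the assumption that TRDR for one question sums 1/rank over the retrieved documents that contain a correct answer (the paper gives the authors' exact definition; the function name and toy data are illustrative only):

     ```python
     from typing import Sequence, Set

     def total_reciprocal_document_rank(ranked_docs: Sequence[str],
                                        answer_bearing: Set[str]) -> float:
         """Sum of 1/rank over retrieved documents that contain a correct answer."""
         return sum(1.0 / rank
                    for rank, doc_id in enumerate(ranked_docs, start=1)
                    if doc_id in answer_bearing)

     # Toy example: answer-bearing documents at ranks 2 and 5 -> 1/2 + 1/5 = 0.7
     print(total_reciprocal_document_rank(["d7", "d3", "d9", "d1", "d4"], {"d3", "d4"}))
     ```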
  4. Gencosman, B.C.; Ozmutlu, H.C.; Ozmutlu, S.: Character n-gram application for automatic new topic identification (2014) 0.00
    0.004303226 = product of:
      0.010758064 = sum of:
        0.0068111527 = weight(_text_:a in 2688) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=2688,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 2688, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2688)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 2688) [ClassicSimilarity], result of:
              0.007893822 = score(doc=2688,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 2688, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2688)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     The widespread availability of the Internet and the variety of Internet-based applications have resulted in a significant increase in the number of web pages. Determining the behaviors of search engine users has become a critical step in enhancing search engine performance. Search engine user behaviors can be determined by content-based or content-ignorant algorithms. Although many content-ignorant studies have been performed to automatically identify new topics, previous results have demonstrated that spelling errors can cause significant errors in topic shift estimates. In this study, we focused on minimizing the number of wrong estimates caused by spelling errors. We developed a new hybrid algorithm combining character n-gram and neural network methodologies, and compared the experimental results with results from previous studies. For the FAST and Excite datasets, the proposed algorithm improved topic shift estimates by 6.987% and 2.639%, respectively. Moreover, we analyzed the performance of the character n-gram method in several respects, including a comparison with the Levenshtein edit-distance method. The experimental results demonstrated that the character n-gram method outperformed the Levenshtein edit-distance method in terms of topic identification.
    Source
    Information processing and management. 50(2014) no.6, S.821-856
    Type
    a
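
     The central idea of the abstract, namely comparing consecutive queries at the character n-gram level so that misspellings are not mistaken for topic shifts, can be illustrated with a small generic sketch (an illustration only, not the authors' hybrid n-gram/neural-network algorithm):

     ```python
     def char_ngrams(text: str, n: int = 3) -> set:
         """Character n-grams of a query, padded with spaces at the boundaries."""
         padded = f" {text.lower()} "
         return {padded[i:i + n] for i in range(len(padded) - n + 1)}

     def ngram_similarity(q1: str, q2: str, n: int = 3) -> float:
         """Jaccard overlap of character n-gram sets; tolerant of small misspellings."""
         a, b = char_ngrams(q1, n), char_ngrams(q2, n)
         return len(a & b) / len(a | b) if (a or b) else 0.0

     # A misspelled reformulation still looks like the same topic ...
     print(ngram_similarity("search engines", "serch engines"))     # high overlap
     # ... whereas a genuine topic shift yields little overlap.
     print(ngram_similarity("search engines", "apple pie recipe"))  # low overlap
     ```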
  5. Notess, G.R.: Up and coming search technologies (2000) 0.00
    0.0019071229 = product of:
      0.009535614 = sum of:
        0.009535614 = weight(_text_:a in 5467) [ClassicSimilarity], result of:
          0.009535614 = score(doc=5467,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 5467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=5467)
      0.2 = coord(1/5)
    
    Type
    a
  6. Feldman, S.: Find what I mean, not what I say : meaning-based search tools (2000) 0.00
    0.0013622305 = product of:
      0.0068111527 = sum of:
        0.0068111527 = weight(_text_:a in 4799) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=4799,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 4799, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=4799)
      0.2 = coord(1/5)
    
    Type
    a
  7. Weiß, E.-M.: ChatGPT soll es richten : Microsoft baut KI in Suchmaschine Bing ein (2023) 0.00
    0.0011051352 = product of:
      0.005525676 = sum of:
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 866) [ClassicSimilarity], result of:
              0.011051352 = score(doc=866,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=866)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     ChatGPT, the artificial intelligence of the moment, was developed by OpenAI, and OpenAI has in the past received substantial backing from Microsoft. Now it is time to profit from that investment: the AI is to be built into the Bing search engine, which puts it in direct competition with Google's search algorithms and AI systems. Bing has so far not been particularly successful in that area. As "The Information" reports, citing two insiders, Microsoft plans to integrate ChatGPT into its Bing search engine, and the new, intelligent search could be available as early as March. Microsoft had previously announced at its in-house Ignite conference the integration of the image generator DALL·E 2 into its search engine, though without a concrete launch date. Asked directly, ChatGPT does not yet confirm its future role, but it is aware of the potential benefits.
  8. Sünkler, S.; Kerkmann, F.; Schultheiß, S.: Ok Google ... the end of search as we know it : sprachgesteuerte Websuche im Test (2018) 0.00
    9.5356145E-4 = product of:
      0.004767807 = sum of:
        0.004767807 = weight(_text_:a in 5626) [ClassicSimilarity], result of:
          0.004767807 = score(doc=5626,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.089176424 = fieldWeight in 5626, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5626)
      0.2 = coord(1/5)
    
    Type
    a
  9. Winterschladen, S.; Gurevych, I.: Die perfekte Suchmaschine : Forschungsgruppe entwickelt ein System, das artverwandte Begriffe finden soll (2006) 0.00
    4.7678073E-4 = product of:
      0.0023839036 = sum of:
        0.0023839036 = weight(_text_:a in 5912) [ClassicSimilarity], result of:
          0.0023839036 = score(doc=5912,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.044588212 = fieldWeight in 5912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5912)
      0.2 = coord(1/5)
    
    Type
    a