Search (1889 results, page 1 of 95)

  • year_i:[2010 TO 2020}
  • type_ss:"a"
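
The active filters above use Lucene/Solr range syntax, where a square bracket is an inclusive bound and a curly brace an exclusive one, so year_i:[2010 TO 2020} matches publication years 2010 through 2019. A minimal sketch of reproducing these filters against a Solr endpoint (the endpoint URL and core name are hypothetical assumptions; the field names year_i and type_ss are taken from the filter chips, using Solr's dynamic-field suffixes _i for integer and _ss for multi-valued string):

```python
import requests

# Hypothetical Solr endpoint and core name, for illustration only.
SOLR_URL = "http://localhost:8983/solr/literature/select"

params = {
    "q": "*:*",
    # [ is an inclusive bound, } an exclusive one: years 2010..2019.
    "fq": ['year_i:[2010 TO 2020}', 'type_ss:"a"'],
    "rows": 20,
    "wt": "json",
}
response = requests.get(SOLR_URL, params=params)
print(response.json()["response"]["numFound"])
```
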
  1. Suchenwirth, L.: Sacherschliessung in Zeiten von Corona : neue Herausforderungen und Chancen (2019) 0.25
    0.2495329 = product of:
      0.6238322 = sum of:
        0.04544258 = product of:
          0.13632774 = sum of:
            0.13632774 = weight(_text_:3a in 484) [ClassicSimilarity], result of:
              0.13632774 = score(doc=484,freq=2.0), product of:
                0.24256827 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.028611459 = queryNorm
                0.56201804 = fieldWeight in 484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=484)
          0.33333334 = coord(1/3)
        0.19279654 = weight(_text_:2f in 484) [ClassicSimilarity], result of:
          0.19279654 = score(doc=484,freq=4.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.7948135 = fieldWeight in 484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
        0.19279654 = weight(_text_:2f in 484) [ClassicSimilarity], result of:
          0.19279654 = score(doc=484,freq=4.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.7948135 = fieldWeight in 484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
        0.19279654 = weight(_text_:2f in 484) [ClassicSimilarity], result of:
          0.19279654 = score(doc=484,freq=4.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.7948135 = fieldWeight in 484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
      0.4 = coord(4/10)
    
    Footnote
    https://journals.univie.ac.at/index.php/voebm/article/download/5332/5271/.
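
The indented tree above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring: each leaf weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, and the document total scales the summed clause scores by coord(matched clauses / total clauses). A minimal sketch that reproduces the first leaf of result 1 from the numbers shown:

```python
import math

# Numbers copied from the explain tree for doc 484 above.
freq, idf, query_norm, field_norm = 2.0, 8.478011, 0.028611459, 0.046875

tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm       # 0.24256827
field_weight = tf * idf * field_norm  # 0.56201804
score = query_weight * field_weight   # 0.13632774 = weight(_text_:3a in 484)
print(round(score, 8))

# The document total then multiplies the summed clause scores by
# coord(4/10) = 0.4, because 4 of the 10 query clauses matched.
```
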
  2. Herb, U.; Beucke, D.: ¬Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.23
    0.2288981 = product of:
      0.57224524 = sum of:
        0.18177032 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.18177032 = score(doc=2188,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.18177032 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.18177032 = score(doc=2188,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.026934259 = weight(_text_:web in 2188) [ClassicSimilarity], result of:
          0.026934259 = score(doc=2188,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.2884563 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.18177032 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.18177032 = score(doc=2188,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
      0.4 = coord(4/10)
    
    Content
    See: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/.
  3. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.18
    0.18177032 = product of:
      0.4544258 = sum of:
        0.04544258 = product of:
          0.13632774 = sum of:
            0.13632774 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.13632774 = score(doc=400,freq=2.0), product of:
                0.24256827 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.028611459 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.13632774 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.13632774 = score(doc=400,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.13632774 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.13632774 = score(doc=400,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.13632774 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.13632774 = score(doc=400,freq=2.0), product of:
            0.24256827 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028611459 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.4 = coord(4/10)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
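
The heavy matches on the terms 3a and 2f in the explain trees above are an artifact of percent-encoding: %3A and %2F are the URL escapes for ':' and '/', and the source links (shown decoded above) had been indexed in escaped form, so their fragments became searchable tokens. Decoding such a string is a one-liner:

```python
from urllib.parse import unquote

escaped = "https%3A%2F%2Faclanthology.org%2FD19-5317.pdf"
print(unquote(escaped))  # -> https://aclanthology.org/D19-5317.pdf
```
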
  4. Kaden, B.; Kindling, M.: Kommunikation und Kontext : Überlegungen zur Entwicklung virtueller Diskursräume für die Wissenschaft (2010) 0.04
    0.03955717 = product of:
      0.13185723 = sum of:
        0.09449192 = weight(_text_:kommunikation in 4271) [ClassicSimilarity], result of:
          0.09449192 = score(doc=4271,freq=4.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.64251363 = fieldWeight in 4271, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0625 = fieldNorm(doc=4271)
        0.026934259 = weight(_text_:web in 4271) [ClassicSimilarity], result of:
          0.026934259 = score(doc=4271,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.2884563 = fieldWeight in 4271, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4271)
        0.010431055 = product of:
          0.031293165 = sum of:
            0.031293165 = weight(_text_:29 in 4271) [ClassicSimilarity], result of:
              0.031293165 = score(doc=4271,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.31092256 = fieldWeight in 4271, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4271)
          0.33333334 = coord(1/3)
      0.3 = coord(3/10)
    
    Abstract
    The paper deals with communication within the scholarly community, more precisely with the discursive dissemination and reception of scholarly-communicative statements in virtual communication spaces supported by semantic technologies. The aim of these considerations is to set out the requirements for, and possibilities of, designing virtual discourse spaces for professional and scholarly communities.
    Date
    2. 2.2011 18:29:45
    Source
    Semantic web & linked data: Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis. Hrsg.: M. Ockenfeld
  5. Mandl, T.; Schulz, J.M.; Marholz, N.; Werner, K.: Benutzerforschung anhand von Log-Dateien : Chancen, Grenzen und aktuelle Trends (2011) 0.04
    0.03806568 = product of:
      0.19032839 = sum of:
        0.17989734 = weight(_text_:log in 4304) [ClassicSimilarity], result of:
          0.17989734 = score(doc=4304,freq=6.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.98111564 = fieldWeight in 4304, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0625 = fieldNorm(doc=4304)
        0.010431055 = product of:
          0.031293165 = sum of:
            0.031293165 = weight(_text_:29 in 4304) [ClassicSimilarity], result of:
              0.031293165 = score(doc=4304,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.31092256 = fieldWeight in 4304, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4304)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Analyzing the behavior of users of information systems is a core concern of information science. With today's technical means, collecting extensive behavioral data is easy. The article summarizes the possibilities and opportunities of log-file analysis. It introduces the LogCLEF track, which for the first time gives researchers the opportunity to work with the same log files and thus to work comparatively. The data basis and some results of LogCLEF are presented.
    Source
    Information - Wissenschaft und Praxis. 62(2011) H.1, S.29-35
  6. Kruschwitz, U.; Lungley, D.; Albakour, M-D.; Song, D.: Deriving query suggestions for site search (2013) 0.04
    0.036563005 = product of:
      0.18281503 = sum of:
        0.023806747 = weight(_text_:web in 1085) [ClassicSimilarity], result of:
          0.023806747 = score(doc=1085,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25496176 = fieldWeight in 1085, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1085)
        0.15900828 = weight(_text_:log in 1085) [ClassicSimilarity], result of:
          0.15900828 = score(doc=1085,freq=12.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.86719185 = fieldWeight in 1085, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1085)
      0.2 = coord(2/10)
    
    Abstract
    Modern search engines have been moving away from simplistic interfaces that aimed at satisfying a user's need with a single-shot query. Interactive features are now integral parts of web search engines. However, generating good query modification suggestions remains a challenging issue. Query log analysis is one of the major strands of work in this direction. Although much research has been performed on query logs collected on the web as a whole, query log analysis to enhance search on smaller and more focused collections has attracted less attention, despite its increasing practical importance. In this article, we report on a systematic study of different query modification methods applied to a substantial query log collected on a local website that already uses an interactive search engine. We conducted experiments in which we asked users to assess the relevance of potential query modification suggestions that have been constructed using a range of log analysis methods and different baseline approaches. The experimental results demonstrate the usefulness of log analysis to extract query modification suggestions. Furthermore, our experiments demonstrate that a more fine-grained approach than grouping search requests into sessions allows for extraction of better refinement terms from query log files.
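
As a rough illustration of the kind of log analysis described above, the sketch below groups a query log into sessions by inactivity gap and counts which terms users add when reformulating. The input format, the 30-minute gap, and the counting scheme are assumptions for illustration, not the paper's actual methods:

```python
from collections import Counter, defaultdict
from datetime import timedelta

SESSION_GAP = timedelta(minutes=30)  # assumed session timeout

def refinements(log):
    """Count terms users add when reformulating within a session.

    `log` is assumed to be a list of (user, timestamp, query) tuples.
    """
    log = sorted(log)  # groups by user, then orders by timestamp
    added = defaultdict(Counter)
    prev_user = prev_time = prev_terms = None
    for user, ts, query in log:
        terms = set(query.lower().split())
        same_session = (user == prev_user and
                        prev_time is not None and
                        ts - prev_time <= SESSION_GAP)
        if same_session and prev_terms:
            for term in terms - prev_terms:       # newly added terms
                for kept in terms & prev_terms:   # terms carried over
                    added[kept][term] += 1        # 'kept' refined by 'term'
        prev_user, prev_time, prev_terms = user, ts, terms
    return added
```
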
  7. Falchi, F.; Lucchese, C.; Orlando, S.; Perego, R.; Rabitti, F.: Similarity caching in large-scale image retrieval (2012) 0.03
    0.034547042 = product of:
      0.1151568 = sum of:
        0.016833913 = weight(_text_:web in 2729) [ClassicSimilarity], result of:
          0.016833913 = score(doc=2729,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.18028519 = fieldWeight in 2729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2729)
        0.091803476 = weight(_text_:log in 2729) [ClassicSimilarity], result of:
          0.091803476 = score(doc=2729,freq=4.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.5006735 = fieldWeight in 2729, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2729)
        0.00651941 = product of:
          0.019558229 = sum of:
            0.019558229 = weight(_text_:29 in 2729) [ClassicSimilarity], result of:
              0.019558229 = score(doc=2729,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19432661 = fieldWeight in 2729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2729)
          0.33333334 = coord(1/3)
      0.3 = coord(3/10)
    
    Abstract
    Feature-rich data, such as audio-video recordings, digital images, and results of scientific experiments, nowadays constitute the largest fraction of the massive data sets produced daily in the e-society. Content-based similarity search systems working on such data collections are rapidly growing in importance. Unfortunately, similarity search is in general very expensive and hardly scalable. In this paper we study the case of content-based image retrieval (CBIR) systems, and focus on the problem of increasing the throughput of a large-scale CBIR system that indexes a very large collection of digital images. By analyzing the query log of a real CBIR system available on the Web, we characterize the behavior of users who experience a novel search paradigm, where content-based similarity queries and text-based ones can easily be interleaved. We show that locality and self-similarity is present even in the stream of queries submitted to such a CBIR system. According to these results, we propose an effective way to exploit this locality, by means of a similarity caching system, which stores the results of recently/frequently submitted queries and associated results. Unlike traditional caching, the proposed cache can manage not only exact hits, but also approximate ones that are solved by similarity with respect to the result sets of past queries present in the cache. We evaluate extensively the proposed solution by using the real query stream recorded in the log and a collection of 100 millions of digital photographs. The high hit ratios and small average approximation error figures obtained demonstrate the effectiveness of the approach.
    Date
    27. 1.2016 18:30:29
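
A toy sketch of the similarity-caching idea described above: cached entries store a query's feature vector together with its result set, and a lookup is served from the nearest cached query whenever it falls within a distance threshold (an approximate hit). Class and parameter names are invented for illustration; the paper's cache management (eviction, approximate-hit accounting) is considerably more sophisticated:

```python
import math

class SimilarityCache:
    """Serve a query from the results of the nearest cached query
    if it is close enough in feature space."""

    def __init__(self, threshold=0.25):
        self.threshold = threshold
        self.entries = []                 # list of (feature_vector, results)

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def get(self, vector):
        """Return cached results for the nearest query, or None (miss)."""
        best = min(self.entries, key=lambda e: self._dist(vector, e[0]),
                   default=None)
        if best and self._dist(vector, best[0]) <= self.threshold:
            return best[1]                # exact or approximate hit
        return None

    def put(self, vector, results):
        self.entries.append((vector, results))

cache = SimilarityCache()
cache.put((0.10, 0.20), ["img7", "img42"])
print(cache.get((0.12, 0.18)))            # close enough -> approximate hit
```
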
  8. Doran, D.; Gokhale, S.S.: ¬A classification framework for web robots (2012) 0.03
    0.031546462 = product of:
      0.15773231 = sum of:
        0.053868517 = weight(_text_:web in 505) [ClassicSimilarity], result of:
          0.053868517 = score(doc=505,freq=8.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5769126 = fieldWeight in 505, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=505)
        0.10386378 = weight(_text_:log in 505) [ClassicSimilarity], result of:
          0.10386378 = score(doc=505,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.5664474 = fieldWeight in 505, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0625 = fieldNorm(doc=505)
      0.2 = coord(2/10)
    
    Abstract
    The behavior of modern web robots varies widely when they crawl for different purposes. In this article, we present a framework to classify these web robots from two orthogonal perspectives, namely, their functionality and the types of resources they consume. Applying the classification framework to a year-long access log from the UConn SoE web server, we present trends that point to significant differences in their crawling behavior.
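
A minimal sketch of the two orthogonal views named above: classify a robot request by functionality (inferred from the user-agent string) and by the type of resource it consumes (inferred from the requested path). The pattern tables are illustrative assumptions, not the paper's framework:

```python
import re

# Functional classes guessed from the user-agent string (assumed patterns).
FUNCTION_PATTERNS = {
    "indexer":  re.compile(r"googlebot|bingbot|slurp", re.I),
    "verifier": re.compile(r"linkchecker|validator", re.I),
    "feed":     re.compile(r"feedfetcher|rss", re.I),
}
# Resource classes from the extension of the requested path.
RESOURCE_TYPES = {
    ".html": "web", ".htm": "web", ".pdf": "doc",
    ".jpg": "img", ".png": "img", ".css": "style", ".js": "script",
}

def classify(user_agent, path):
    function = next((name for name, pat in FUNCTION_PATTERNS.items()
                     if pat.search(user_agent)), "other")
    ext = "." + path.rsplit(".", 1)[-1].lower() if "." in path else ""
    resource = RESOURCE_TYPES.get(ext, "other")
    return function, resource

print(classify("Mozilla/5.0 (compatible; Googlebot/2.1)", "/index.html"))
# -> ('indexer', 'web')
```
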
  9. Li, C.; Sugimoto, S.: Provenance description of metadata application profiles for long-term maintenance of metadata schemas (2018) 0.03
    0.028572304 = product of:
      0.09524101 = sum of:
        0.023806747 = weight(_text_:web in 4048) [ClassicSimilarity], result of:
          0.023806747 = score(doc=4048,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25496176 = fieldWeight in 4048, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
        0.06491486 = weight(_text_:log in 4048) [ClassicSimilarity], result of:
          0.06491486 = score(doc=4048,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.3540296 = fieldWeight in 4048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
        0.00651941 = product of:
          0.019558229 = sum of:
            0.019558229 = weight(_text_:29 in 4048) [ClassicSimilarity], result of:
              0.019558229 = score(doc=4048,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19432661 = fieldWeight in 4048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4048)
          0.33333334 = coord(1/3)
      0.3 = coord(3/10)
    
    Abstract
    Purpose Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas. Design/methodology/approach The DSP-PROV model is developed through applying the general provenance description standard PROV of the World Wide Web Consortium to the Dublin Core Application Profile. Metadata Application Profile of Digital Public Library of America is selected as a case study to apply the DSP-PROV model. Finally, this paper evaluates the proposed model by comparison between formal provenance description in DSP-PROV and semi-formal change log description in English. Findings Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English to keep metadata schemas consistent over time. Research limitations/implications The DSP-PROV model is applicable to keep track of the structural changes of metadata schema over time. Provenance description of other features of metadata schema such as vocabulary and encoding syntax are not covered. Originality/value This study proposes a simple model for provenance description of structural features of metadata schemas based on a few standards widely accepted on the Web and shows the advantage of the proposed model to conventional semi-formal provenance description.
    Date
    15. 1.2018 19:13:29
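
A minimal sketch of describing a schema revision with the W3C PROV vocabulary, on which DSP-PROV builds. The schema URIs and the rdflib-based encoding are assumptions for illustration, not the DSP-PROV model itself:

```python
from rdflib import Graph, Namespace

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/profiles/")   # hypothetical URIs

g = Graph()
g.bind("prov", PROV)
v1, v2, edit = EX["map-v1"], EX["map-v2"], EX["revision-2017-06"]

g.add((v2, PROV.wasDerivedFrom, v1))          # new version derived from old
g.add((v2, PROV.wasGeneratedBy, edit))        # ...by a revision activity
g.add((edit, PROV.used, v1))
g.add((edit, PROV.wasAssociatedWith, EX["metadata-team"]))

print(g.serialize(format="turtle"))
```
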
  10. Calvanese, D.; Kalayci, T.E.; Montali, M.; Santoso, A.: OBDA for log extraction in process mining (2017) 0.03
    0.028318608 = product of:
      0.14159304 = sum of:
        0.029157192 = weight(_text_:web in 3931) [ClassicSimilarity], result of:
          0.029157192 = score(doc=3931,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3122631 = fieldWeight in 3931, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3931)
        0.11243584 = weight(_text_:log in 3931) [ClassicSimilarity], result of:
          0.11243584 = score(doc=3931,freq=6.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.61319727 = fieldWeight in 3931, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3931)
      0.2 = coord(2/10)
    
    Abstract
    Process mining is an emerging area that synergically combines model-based and data-oriented analysis techniques to obtain useful insights on how business processes are executed within an organization. Through process mining, decision makers can discover process models from data, compare expected and actual behaviors, and enrich models with key information about their actual execution. To be applicable, process mining techniques require the input data to be explicitly structured in the form of an event log, which lists when and by whom different case objects (i.e., process instances) have been subject to the execution of tasks. Unfortunately, in many real world set-ups, such event logs are not explicitly given, but are instead implicitly represented in legacy information systems. To apply process mining in this widespread setting, there is a pressing need for techniques able to support various process stakeholders in data preparation and log extraction from legacy information systems. The purpose of this paper is to single out this challenging, open issue, and didactically introduce how techniques from intelligent data management, and in particular ontology-based data access, provide a viable solution with a solid theoretical basis.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
  11. Wan-Chik, R.; Clough, P.; Sanderson, M.: Investigating religious information searching through analysis of a search engine log (2013) 0.03
    0.026072973 = product of:
      0.13036487 = sum of:
        0.020200694 = weight(_text_:web in 1129) [ClassicSimilarity], result of:
          0.020200694 = score(doc=1129,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 1129, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1129)
        0.11016417 = weight(_text_:log in 1129) [ClassicSimilarity], result of:
          0.11016417 = score(doc=1129,freq=4.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.60080814 = fieldWeight in 1129, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=1129)
      0.2 = coord(2/10)
    
    Abstract
    In this paper we present results from an investigation of religious information searching based on analyzing log files from a large general-purpose search engine. From approximately 15 million queries, we identified 124,422 that were part of 60,759 user sessions. We present a method for categorizing queries based on related terms and show differences in search patterns between religious searches and web searching more generally. We also investigate the search patterns found in queries related to 5 religions: Christianity, Hinduism, Islam, Buddhism, and Judaism. Different search patterns are found to emerge. Results from this study complement existing studies of religious information searching and provide a level of detailed analysis not reported to date. We show, for example, that sessions involving religion-related queries tend to last longer, that the lengths of religion-related queries are greater, and that the number of unique URLs clicked is higher when compared to all queries. The results of the study can serve to provide information on what this large population of users is actually searching for.
  12. Klauser, H.: Freiheit oder totale Kontrolle : das Internet und die Grundrechte (2012) 0.02
    0.0248285 = product of:
      0.1241425 = sum of:
        0.04175992 = weight(_text_:kommunikation in 338) [ClassicSimilarity], result of:
          0.04175992 = score(doc=338,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.28395358 = fieldWeight in 338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0390625 = fieldNorm(doc=338)
        0.08238258 = weight(_text_:schutz in 338) [ClassicSimilarity], result of:
          0.08238258 = score(doc=338,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.3988276 = fieldWeight in 338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.0390625 = fieldNorm(doc=338)
      0.2 = coord(2/10)
    
    Abstract
    In early November 2012 the Internet Governance Forum (IGF), which addresses the governance and development of the Internet at the global level, will take place for the seventh time. This year the "world summit of the Internet" will be held in Baku, Azerbaijan, bringing together representatives from politics, the private sector, international organizations, and civil society. As in previous years, the international library federation IFLA will again take part in order to bring the significant role of libraries in the modern information society into the discussions. The Internet Governance Forum emerged from the two World Summits on the Information Society (WSIS), held in Geneva in 2003 and in Tunis in 2005, which for the first time discussed topics such as information and communication and the global information society; it was formally convened in 2006 by the Secretary-General of the United Nations, without decision-making powers of its own, and its task is to discuss a wide range of Internet topics such as copyright, bridging the digital divide, protection of privacy, and freedom of expression on the net. The theme of the conference in Baku is "Internet Governance for Sustainable Human, Economic and Social Development". Various countries and regions of the world, including Europe and, among others, the USA, Denmark, Italy, Russia, Ukraine, Finland, Sweden, Spain, and Germany, have founded regional and national IGF initiatives to prepare the discussions of the annual meetings at the national or regional level. On 7 May 2012, around 80 German representatives from politics, civil society, associations, and industry came together in Berlin for the 4th German Internet Governance Forum to compile, from a German perspective, the key points on the topic "The relationship between the Internet and fundamental and human rights" for the participation in Baku.
  13. Fensel, A.: Towards semantic APIs for research data services (2017) 0.02
    0.023201976 = product of:
      0.116009876 = sum of:
        0.08268043 = weight(_text_:kommunikation in 4439) [ClassicSimilarity], result of:
          0.08268043 = score(doc=4439,freq=4.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.5621994 = fieldWeight in 4439, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4439)
        0.033329446 = weight(_text_:web in 4439) [ClassicSimilarity], result of:
          0.033329446 = score(doc=4439,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.35694647 = fieldWeight in 4439, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4439)
      0.2 = coord(2/10)
    
    Abstract
    The rapid development of Internet and web technology is changing the state of the art in the communication of knowledge and research results. In particular, semantic technologies and linked and open data are becoming decisive factors for successful and efficient research progress. I first define the Research Data Service (RDS) and discuss typical current and possible future usage scenarios with RDS. I then review the state of the art in semantic service and data annotation and API construction, as well as infrastructure solutions applicable to realizing RDS. Finally, innovative methods for the online dissemination, promotion, and efficient communication of research are discussed.
    Theme
    Semantic Web
  14. Metzendorf, M.-I.: ¬Ein Wiki als internes Wissensmanagementtool der Bibliothek : Vorbedingungen und Erfahrungen (2011) 0.02
    0.022748757 = product of:
      0.11374378 = sum of:
        0.08238258 = weight(_text_:schutz in 163) [ClassicSimilarity], result of:
          0.08238258 = score(doc=163,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.3988276 = fieldWeight in 163, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.0390625 = fieldNorm(doc=163)
        0.0313612 = product of:
          0.0470418 = sum of:
            0.027659511 = weight(_text_:29 in 163) [ClassicSimilarity], result of:
              0.027659511 = score(doc=163,freq=4.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.2748193 = fieldWeight in 163, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=163)
            0.019382289 = weight(_text_:22 in 163) [ClassicSimilarity], result of:
              0.019382289 = score(doc=163,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19345059 = fieldWeight in 163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=163)
          0.6666667 = coord(2/3)
      0.2 = coord(2/10)
    
    Abstract
    The scenario sounds almost uncanny: something unknown and very valuable resides simultaneously in many secret places in the library. It is not particularly choosy about its whereabouts: sometimes it hides in paper documents, often it seeks shelter in databases, then it retreats into individual files, and finally it thrives quietly in people's heads. In the face of all attempts to capture it, it appears downright fleeting (or even on the run?). We are talking about knowledge. But what exactly characterizes this resource that everyone talks about and whose use is nowadays proclaimed to be central? "Knowledge is the capacity for effective action." The capacity for effective action often presupposes that the actor knows certain information. If he can get to it quickly and manages to connect it to his existing knowledge, he acts effectively. Knowledge can take on different states of aggregation: knowledge is explicit when it has been documented, spoken, or otherwise made concrete and is thus no longer bound to a person (strictly speaking, the knowledge is thereby converted back into "information"). Tacit knowledge, by contrast, is not documented but can in principle be described and documented. Implicit knowledge, finally, is experiential knowledge that can often only be described verbally or pictorially and therefore usually remains bound to the person.
    Date
    29. 5.2012 13:58:08
    29. 5.2012 14:20:42
    Source
    ¬Die Kraft der digitalen Unordnung: 32. Arbeits- und Fortbildungstagung der ASpB e. V., Sektion 5 im Deutschen Bibliotheksverband, 22.-25. September 2009 in der Universität Karlsruhe. Hrsg: Jadwiga Warmbrunn u.a
  15. Torres, S.D.; Hiemstra, D.; Weber, I.; Serdyukov, P.: Query recommendation in the information domain of children (2014) 0.02
    0.021293186 = product of:
      0.10646593 = sum of:
        0.028568096 = weight(_text_:web in 1300) [ClassicSimilarity], result of:
          0.028568096 = score(doc=1300,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3059541 = fieldWeight in 1300, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1300)
        0.07789783 = weight(_text_:log in 1300) [ClassicSimilarity], result of:
          0.07789783 = score(doc=1300,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 1300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=1300)
      0.2 = coord(2/10)
    
    Abstract
    Children represent an increasing group of web users. Some of the key problems that hamper their search experience is their limited vocabulary, their difficulty in using the right keywords, and the inappropriateness of their general-purpose query suggestions. In this work, we propose a method that uses tags from social media to suggest queries related to children's topics. Concretely, we propose a simple yet effective approach to bias a random walk defined on a bipartite graph of web resources and tags through keywords that are more commonly used to describe resources for children. We evaluate our method using a large query log sample of queries submitted by children. We show that our method outperforms by a large margin the query suggestions of modern search engines and state-of-the art query suggestions based on random walks. We improve further the quality of the ranking by combining the score of the random walk with topical and language modeling features to emphasize even more the child-related aspects of the query suggestions.
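
A toy sketch of the approach described above: a random walk on a bipartite graph of resources and tags, biased toward tags commonly used for children's content. The graph, bias weights, and parameters are invented for illustration and do not reproduce the paper's formulation:

```python
import random

resource_tags = {                 # resource -> tags describing it (toy data)
    "r1": ["dinosaurs", "kids", "science"],
    "r2": ["paleontology", "science"],
    "r3": ["dinosaurs", "games", "kids"],
}
child_bias = {"kids": 3.0, "games": 2.0}   # up-weight child-oriented tags

tag_resources = {}
for res, tags in resource_tags.items():
    for tag in tags:
        tag_resources.setdefault(tag, []).append(res)

def walk(start_tag, steps=10000):
    """Random walk tag -> resource -> tag, biased toward child tags."""
    visits, tag = {}, start_tag
    for _ in range(steps):
        res = random.choice(tag_resources[tag])
        tags = resource_tags[res]
        weights = [child_bias.get(t, 1.0) for t in tags]
        tag = random.choices(tags, weights=weights)[0]
        visits[tag] = visits.get(tag, 0) + 1
    return sorted(visits.items(), key=lambda kv: -kv[1])

print(walk("dinosaurs")[:3])      # most-visited tags ~ query suggestions
```
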
  16. Layfield, C.; Azzopardi, J.; Staff, C.: Experiments with document retrieval from small text collections using Latent Semantic Analysis or term similarity with query coordination and automatic relevance feedback (2017) 0.02
    0.021184364 = product of:
      0.07061455 = sum of:
        0.013467129 = weight(_text_:web in 3478) [ClassicSimilarity], result of:
          0.013467129 = score(doc=3478,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.14422815 = fieldWeight in 3478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3478)
        0.05193189 = weight(_text_:log in 3478) [ClassicSimilarity], result of:
          0.05193189 = score(doc=3478,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.2832237 = fieldWeight in 3478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.03125 = fieldNorm(doc=3478)
        0.0052155275 = product of:
          0.015646582 = sum of:
            0.015646582 = weight(_text_:29 in 3478) [ClassicSimilarity], result of:
              0.015646582 = score(doc=3478,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.15546128 = fieldWeight in 3478, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3478)
          0.33333334 = coord(1/3)
      0.3 = coord(3/10)
    
    Abstract
    One of the problems faced by users of databases containing textual documents is the difficulty in retrieving relevant results due to the diverse vocabulary used in queries and contained in relevant documents, especially when there are only a small number of relevant documents. This problem is known as the Vocabulary Gap. The PIKES team have constructed a small test collection of 331 articles extracted from a blog and a Gold Standard for 35 queries selected from the blog's search log so the results of different approaches to semantic search can be compared. So far, prior approaches include recognising Named Entities in documents and queries, and relations including temporal relations, and represent them as `semantic layers' in a retrieval system index. In this work, we take two different approaches that do not involve Named Entity Recognition. In the first approach, we process an unannotated version of the PIKES document collection using Latent Semantic Analysis and use a combination of query coordination and automatic relevance feedback with which we outperform prior work. However, this approach is highly dependent on the underlying collection, and is not necessarily scalable to massive collections. In our second approach, we use an LSA Model generated by SEMILAR from a Wikipedia dump to generate a Term Similarity Matrix (TSM). We automatically expand the queries in the PIKES test collection with related terms from the TSM and submit them to a term-by-document matrix derived by indexing the PIKES collection using the Vector Space Model. Coupled with a combination of query coordination and automatic relevance feedback we also outperform prior work with this approach. The advantage of the second approach is that it is independent of the underlying document collection.
    Date
    10. 3.2017 13:29:57
    Series
    Information Systems and Applications, incl. Internet/Web, and HCI; 10151
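
A small sketch of the second approach described above: fit an LSA model, derive a term similarity matrix (TSM) from the cosine similarities of the term vectors, and expand a query term with its most similar terms. The toy corpus and parameters are assumptions; the paper builds its LSA model from a Wikipedia dump with SEMILAR:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["latent semantic analysis maps terms to concepts",
        "query expansion adds related terms to the query",
        "relevance feedback reweights terms after retrieval"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)                    # documents x terms
svd = TruncatedSVD(n_components=2, random_state=0)
svd.fit(X)
term_vecs = svd.components_.T                  # terms x concepts
norms = np.linalg.norm(term_vecs, axis=1, keepdims=True) + 1e-12
tsm = (term_vecs / norms) @ (term_vecs / norms).T   # cosine TSM

terms = vec.get_feature_names_out()
def expand(term, k=3):
    i = list(terms).index(term)
    best = np.argsort(-tsm[i])                 # most similar terms first
    return [terms[j] for j in best[:k + 1] if j != i][:k]

print(expand("query"))
```
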
  17. Wu, D.; Liang, S.; Dong, J.; Qiu, J.: Impact of task types on collaborative information seeking behavior (2013) 0.02
    0.019619705 = product of:
      0.098098524 = sum of:
        0.020200694 = weight(_text_:web in 5064) [ClassicSimilarity], result of:
          0.020200694 = score(doc=5064,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 5064, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5064)
        0.07789783 = weight(_text_:log in 5064) [ClassicSimilarity], result of:
          0.07789783 = score(doc=5064,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 5064, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=5064)
      0.2 = coord(2/10)
    
    Abstract
    This study examined the task type as an important factor in collaborative information seeking activities, devoting special attention to its impacts on collaborative information seeking behavior, awareness and sentiment. Collaborative information search experiments were conducted on a collaborative search system-Coagmento-for three different types of task (informational, transactional and navigational). System log, surveys and semi-structured interviews were used to collect data, with quantitative and qualitative analyses carried out on the data which related to 12 participants in four groups. Quantitative analysis employed SPSS 20, while qualitative analysis was carried out using ATLAS.ti. Through our research, we found that the task types have impact on users' collaborative information seeking behavior in terms of web page browsing, search and image using, as well as interact with task awareness. A collaborative team approach is more suitable for completing the informational task than transactional and navigational tasks, while the task type also influences the sentiment. Concretely speaking, the transactional task causes more negative emotions.
  18. Lewandowski, D.; Drechsler, J.; Mach, S. von: Deriving query intents from web search engine queries (2012) 0.02
    0.017744321 = product of:
      0.0887216 = sum of:
        0.023806747 = weight(_text_:web in 385) [ClassicSimilarity], result of:
          0.023806747 = score(doc=385,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25496176 = fieldWeight in 385, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=385)
        0.06491486 = weight(_text_:log in 385) [ClassicSimilarity], result of:
          0.06491486 = score(doc=385,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.3540296 = fieldWeight in 385, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=385)
      0.2 = coord(2/10)
    
    Abstract
    The purpose of this article is to test the reliability of query intents derived from queries, either by the user who entered the query or by another juror. We report the findings of three studies. First, we conducted a large-scale classification study (~50,000 queries) using a crowdsourcing approach. Next, we used clickthrough data from a search engine log and validated the judgments given by the jurors from the crowdsourcing study. Finally, we conducted an online survey on a commercial search engine's portal. Because we used the same queries for all three studies, we also were able to compare the results and the effectiveness of the different approaches. We found that neither the crowdsourcing approach, using jurors who classified queries originating from other users, nor the questionnaire approach, using searchers who were asked about their own query that they just entered into a Web search engine, led to satisfying results. This leads us to conclude that there was little understanding of the classification tasks, even though both groups of jurors were given detailed instructions. Although we used manual classification, our research also has important implications for automatic classification. We must question the success of approaches using automatic classification and comparing its performance to a baseline from human jurors.
  19. Sarigil, E.; Sengor Altingovde, I.; Blanco, R.; Barla Cambazoglu, B.; Ozcan, R.; Ulusoy, Ö.: Characterizing, predicting, and handling web search queries that match very few or no results (2018) 0.02
    0.017744321 = product of:
      0.0887216 = sum of:
        0.023806747 = weight(_text_:web in 4039) [ClassicSimilarity], result of:
          0.023806747 = score(doc=4039,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25496176 = fieldWeight in 4039, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4039)
        0.06491486 = weight(_text_:log in 4039) [ClassicSimilarity], result of:
          0.06491486 = score(doc=4039,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.3540296 = fieldWeight in 4039, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4039)
      0.2 = coord(2/10)
    
    Abstract
    A non-negligible fraction of user queries end up with very few or even no matching results in leading commercial web search engines. In this work, we provide a detailed characterization of such queries and show that search engines try to improve such queries by showing the results of related queries. Through a user study, we show that these query suggestions are usually perceived as relevant. Also, through a query log analysis, we show that the users are dissatisfied after submitting a query that match no results at least 88.5% of the time. As a first step towards solving these no-answer queries, we devised a large number of features that can be used to identify such queries and built machine-learning models. These models can be useful for scenarios such as the mobile- or meta-search, where identifying a query that will retrieve no results at the client device (i.e., even before submitting it to the search engine) may yield gains in terms of the bandwidth usage, power consumption, and/or monetary costs. Experiments over query logs indicate that, despite the heavy skew in class sizes, our models achieve good prediction quality, with accuracy (in terms of area under the curve) up to 0.95.
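
A toy sketch of the prediction task described above: learn, from surface features of the query alone, whether it will return no results, and report AUC. The features and labels here are invented for illustration; the paper's feature set and training data are far larger:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def features(q):
    terms = q.split()
    return [len(terms),
            max((len(t) for t in terms), default=0),
            sum(c.isdigit() for c in q),
            int('"' in q)]                     # quoted phrases match less

queries = ['solr range query', '"xqz9v7 flibbertigig"', 'semantic web',
           'a8f3-0b2c-11e9 checksum', 'query log analysis', '"zzqy plarg"']
no_result = [0, 1, 0, 1, 0, 1]                # hypothetical labels

X = [features(q) for q in queries]
model = LogisticRegression().fit(X, no_result)
probs = model.predict_proba(X)[:, 1]
print(roc_auc_score(no_result, probs))        # AUC on the toy training set
```
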
  20. Aslam, S.; Sonkar, S.K.: Semantic Web : an overview (2019) 0.02
    0.01732253 = product of:
      0.08661264 = sum of:
        0.07618159 = weight(_text_:web in 54) [ClassicSimilarity], result of:
          0.07618159 = score(doc=54,freq=16.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.8158776 = fieldWeight in 54, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=54)
        0.010431055 = product of:
          0.031293165 = sum of:
            0.031293165 = weight(_text_:29 in 54) [ClassicSimilarity], result of:
              0.031293165 = score(doc=54,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.31092256 = fieldWeight in 54, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=54)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This paper presents the Semantic Web, web content authoring, web technology, the goals of semantics, and the requirements for the expansion of Web 3.0. It also surveys the different components of the Semantic Web, such as HTTP, HTML, XML, XML Schema, URI, RDF, taxonomies, and OWL. It further discusses how the Semantic Web can support library functions in providing valuable information services and in making the best use of library collections.
    Date
    10.12.2020 9:29:12
    Theme
    Semantic Web

Languages

  • e 1490
  • d 390
  • f 2
  • i 2
  • a 1
  • sp 1

Types

  • el 145
  • b 4
  • s 1
  • x 1
