Search (2056 results, page 1 of 103)

  • Active filter: year_i:[2010 TO 2020}
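  The active filter uses Lucene range syntax: a square bracket is inclusive and a curly brace exclusive, so year_i:[2010 TO 2020} matches the years 2010 through 2019. As a minimal sketch only - the explain output below comes from a Lucene ClassicSimilarity index, but the backend type, host, core name and query terms here are assumptions, not taken from this page - such a filter would typically be passed to a Solr-style server as an fq parameter:

     # Minimal sketch, assuming a Solr-style backend; host, core name and the
     # query terms are hypothetical. Only the fq value is taken from this page.
     import requests

     params = {
         "q": "web search engine",       # hypothetical query terms
         "fq": "year_i:[2010 TO 2020}",  # the active year filter shown above
         "rows": 20,
         "debugQuery": "true",           # request the score explanations shown below
     }
     response = requests.get("http://localhost:8983/solr/docs/select", params=params)
     print(response.json()["response"]["numFound"])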
  1. Web search engine research (2012) 0.23
    0.2279081 = product of:
      0.30387747 = sum of:
        0.08550187 = weight(_text_:web in 478) [ClassicSimilarity], result of:
          0.08550187 = score(doc=478,freq=12.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.5299281 = fieldWeight in 478, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=478)
        0.13715115 = weight(_text_:search in 478) [ClassicSimilarity], result of:
          0.13715115 = score(doc=478,freq=24.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.79815334 = fieldWeight in 478, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=478)
        0.08122443 = product of:
          0.16244885 = sum of:
            0.16244885 = weight(_text_:engine in 478) [ClassicSimilarity], result of:
              0.16244885 = score(doc=478,freq=6.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.6142285 = fieldWeight in 478, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=478)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    "Web Search Engine Research", edited by Dirk Lewandowski, provides an understanding of Web search engines from the unique perspective of Library and Information Science. The book explores a range of topics including retrieval effectiveness, user satisfaction, the evaluation of search interfaces, the impact of search on society, reliability of search results, query log analysis, user guidance in the search process, and the influence of search engine optimization (SEO) on results quality. While research in computer science has mainly focused on technical aspects of search engines, LIS research is centred on users' behaviour when using search engines and how this interaction can be evaluated. LIS research provides a unique perspective in intermediating between the technical aspects, user aspects and their impact on their role in knowledge acquisition. This book is directly relevant to researchers and practitioners in library and information science, computer science, including Web researchers.
    LCSH
    Web search engines
    Subject
    Web search engines
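     The scoring breakdown shown for result 1 is Lucene ClassicSimilarity (TF-IDF) "explain" output: each term weight is queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, and the document score is the sum of the matching term weights scaled by the coordination factor coord(3/4). A minimal sketch reproducing the "web" term weight and the final score from the numbers above (only the printed factors are used; the idf formula is the standard ClassicSimilarity one):

       import math

       # Factors copied from the explain output for doc 478 above.
       freq = 12.0
       idf = 3.2635105          # ClassicSimilarity: 1 + ln(maxDocs / (docFreq + 1))
                                #                  = 1 + ln(44218 / 4598) = 3.2635105
       query_norm = 0.049439456
       field_norm = 0.046875

       tf = math.sqrt(freq)                       # 3.4641016
       query_weight = idf * query_norm            # 0.16134618
       field_weight = tf * idf * field_norm       # 0.5299281
       web_weight = query_weight * field_weight   # 0.08550187

       # Document score: sum of the three matching term weights, scaled by
       # coord(3/4) = 0.75 because 3 of the 4 query clauses matched.
       score = 0.75 * (web_weight + 0.13715115 + 0.08122443)
       print(web_weight, score)                   # ~0.0855, ~0.2279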
  2. Hogan, A.; Harth, A.; Umbrich, J.; Kinsella, S.; Polleres, A.; Decker, S.: Searching and browsing Linked Data with SWSE : the Semantic Web Search Engine (2011) 0.22
    0.21735626 = product of:
      0.28980836 = sum of:
        0.100764915 = weight(_text_:web in 438) [ClassicSimilarity], result of:
          0.100764915 = score(doc=438,freq=24.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.6245262 = fieldWeight in 438, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=438)
        0.093319535 = weight(_text_:search in 438) [ClassicSimilarity], result of:
          0.093319535 = score(doc=438,freq=16.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.54307455 = fieldWeight in 438, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=438)
        0.095723905 = product of:
          0.19144781 = sum of:
            0.19144781 = weight(_text_:engine in 438) [ClassicSimilarity], result of:
              0.19144781 = score(doc=438,freq=12.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.72387516 = fieldWeight in 438, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=438)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     In this paper, we discuss the architecture and implementation of the Semantic Web Search Engine (SWSE). Following traditional search engine architecture, SWSE consists of components for crawling, data enhancement and indexing, and a user interface for search, browsing and retrieval of information; unlike traditional search engines, SWSE operates over RDF Web data - loosely also known as Linked Data - which implies unique challenges for the system design, architecture, algorithms, implementation and user interface. In particular, many challenges exist in adopting Semantic Web technologies for Web data: the unique challenges of the Web - in terms of scale, unreliability, inconsistency and noise - are largely overlooked by the current Semantic Web standards. Herein, we describe the current SWSE system, first detailing the architecture and then elaborating upon the function, design, implementation and performance of each individual component. In doing so, we also give an insight into how current Semantic Web standards can be tailored, in a best-effort manner, for use on Web data. Throughout, we offer evaluation and complementary argumentation to support our design choices, and we discuss future directions and open research questions. Finally, we provide a candid discussion of the difficulties currently faced in bringing such a search engine into the mainstream, and of the lessons learnt from roughly six years of work on the Semantic Web Search Engine project.
    Object
    Semantic Web Search Engine
    Theme
    Semantic Web
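     Result 2 describes a search engine that operates over RDF Web data (Linked Data) rather than HTML documents. As generic background only - this is not SWSE's own API, and the data and query are invented - a minimal sketch of parsing and querying such RDF data with the rdflib package:

       from rdflib import Graph

       # A tiny invented Linked Data snippet in Turtle syntax.
       ttl = """
       @prefix foaf: <http://xmlns.com/foaf/0.1/> .
       <http://example.org/alice> foaf:name "Alice" ;
                                  foaf:knows <http://example.org/bob> .
       <http://example.org/bob>   foaf:name "Bob" .
       """

       g = Graph()
       g.parse(data=ttl, format="turtle")

       # SPARQL query over the parsed graph.
       results = g.query("""
           PREFIX foaf: <http://xmlns.com/foaf/0.1/>
           SELECT ?person ?name WHERE { ?person foaf:name ?name }
       """)
       for person, name in results:
           print(person, name)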
  3. Berri, J.; Benlamri, R.: Context-aware mobile search engine (2012) 0.21
    0.20744473 = product of:
      0.27659297 = sum of:
        0.07805218 = weight(_text_:web in 104) [ClassicSimilarity], result of:
          0.07805218 = score(doc=104,freq=10.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.48375595 = fieldWeight in 104, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=104)
        0.104750924 = weight(_text_:search in 104) [ClassicSimilarity], result of:
          0.104750924 = score(doc=104,freq=14.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.6095997 = fieldWeight in 104, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=104)
        0.09378988 = product of:
          0.18757977 = sum of:
            0.18757977 = weight(_text_:engine in 104) [ClassicSimilarity], result of:
              0.18757977 = score(doc=104,freq=8.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.7092499 = fieldWeight in 104, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=104)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Exploiting context information in a web search engine helps fine-tune web services and applications to deliver custom-made information to end users. While context, including user and environment information, cannot be exploited efficiently in wired Internet interactions, it is becoming accessible with the mobile web, where users have an intimate relationship with their handsets. In this type of interaction, context plays a significant role in enhancing information search, allowing a search engine to detect relevant content in all digital forms and formats. This chapter proposes a context model and an architecture that promote the integration of context information for individuals and social communities, adding value to their interaction with the mobile web. The architecture relies on efficient knowledge management of multimedia resources for a wide range of applications and web services. The research is illustrated with a corporate case study showing how efficient context integration improves the usability of a mobile search engine.
    Footnote
     Cf.: http://www.igi-global.com/book/next-generation-search-engines/64433.
    Source
     Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, et al.
  4. Hoeber, O.: Human-centred Web search (2012) 0.21
    0.20717825 = product of:
      0.27623767 = sum of:
        0.06981198 = weight(_text_:web in 102) [ClassicSimilarity], result of:
          0.06981198 = score(doc=102,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 102, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=102)
        0.12520128 = weight(_text_:search in 102) [ClassicSimilarity], result of:
          0.12520128 = score(doc=102,freq=20.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.72861093 = fieldWeight in 102, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=102)
        0.08122443 = product of:
          0.16244885 = sum of:
            0.16244885 = weight(_text_:engine in 102) [ClassicSimilarity], result of:
              0.16244885 = score(doc=102,freq=6.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.6142285 = fieldWeight in 102, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=102)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     People commonly experience difficulties when searching the Web, arising from incomplete knowledge of their information needs, an inability to formulate accurate queries, and a low tolerance for considering the relevance of the search results. While simple, easy-to-use interfaces have made Web search universally accessible, they provide little assistance for people to overcome the difficulties they experience when their information needs are more complex than simple fact-verification. In human-centred Web search, the purpose of the search engine expands from a simple information retrieval engine to a decision support system. People are empowered to take an active role in the search process, with the search engine supporting them in developing a deeper understanding of their information needs, assisting them in crafting and refining their queries, and aiding them in evaluating and exploring the search results. In this chapter, recent research in this domain is outlined and discussed.
    Footnote
     Cf.: http://www.igi-global.com/book/next-generation-search-engines/64427.
    Source
     Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, et al.
  5. Oreskovic, A.: Google introduces new 'Hummingbird' search algorithm (2013) 0.20
    0.20123148 = product of:
      0.26830864 = sum of:
        0.05817665 = weight(_text_:web in 2517) [ClassicSimilarity], result of:
          0.05817665 = score(doc=2517,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.36057037 = fieldWeight in 2517, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2517)
        0.13197374 = weight(_text_:search in 2517) [ClassicSimilarity], result of:
          0.13197374 = score(doc=2517,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.7680234 = fieldWeight in 2517, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.078125 = fieldNorm(doc=2517)
        0.07815824 = product of:
          0.15631647 = sum of:
            0.15631647 = weight(_text_:engine in 2517) [ClassicSimilarity], result of:
              0.15631647 = score(doc=2517,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.59104156 = fieldWeight in 2517, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2517)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Google Inc has overhauled its search algorithm, the foundation of the Internet's dominant search engine, to better cope with the longer, more complex queries it has been getting from Web users.
    Source
    http://www.reuters.com/article/net-us-google-search-idUSBRE98P11O20131002
  6. Sachse, J.: ¬The influence of snippet length on user behavior in mobile web search (2019) 0.20
    0.19575962 = product of:
      0.26101282 = sum of:
        0.050382458 = weight(_text_:web in 5493) [ClassicSimilarity], result of:
          0.050382458 = score(doc=5493,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3122631 = fieldWeight in 5493, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5493)
        0.09898031 = weight(_text_:search in 5493) [ClassicSimilarity], result of:
          0.09898031 = score(doc=5493,freq=18.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.5760175 = fieldWeight in 5493, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5493)
        0.11165006 = sum of:
          0.07815824 = weight(_text_:engine in 5493) [ClassicSimilarity], result of:
            0.07815824 = score(doc=5493,freq=2.0), product of:
              0.26447627 = queryWeight, product of:
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.049439456 = queryNorm
              0.29552078 = fieldWeight in 5493, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5493)
          0.03349182 = weight(_text_:22 in 5493) [ClassicSimilarity], result of:
            0.03349182 = score(doc=5493,freq=2.0), product of:
              0.17312855 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049439456 = queryNorm
              0.19345059 = fieldWeight in 5493, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5493)
      0.75 = coord(3/4)
    
    Abstract
     Purpose - Web search is increasingly moving into mobile contexts. However, the screen size of mobile devices is limited, and search engine result pages face a trade-off between offering informative snippets and making optimal use of space. One factor clearly influencing this trade-off is snippet length. The purpose of this paper is to find out what snippet size to use in mobile web search. Design/methodology/approach - An eye-tracking experiment was conducted in which participants were shown search interfaces with snippets of one, three or five lines on a mobile device, and 17 dependent variables were analyzed. In total, 31 participants took part in the study. Each participant solved informational and navigational tasks. Findings - Results indicate a strong influence of the page fold on scrolling behavior and attention distribution across search results. Regardless of query type, short snippets seem to provide too little information about the result, so that search performance and subjective measures are negatively affected. Long snippets of five lines lead to better performance than medium snippets for navigational queries, but to worse performance for informational queries. Originality/value - Although space in mobile search is limited, this study shows that longer snippets improve usability and user experience. It further emphasizes that the page fold plays a stronger role for attention distribution in mobile search than in desktop search.
    Date
    20. 1.2015 18:30:22
  7. Petric, K.; Petric, T.; Krisper, M.; Rajkovic, V.: User profiling on a pilot digital library with the final result of a new adaptive knowledge management solution (2011) 0.18
    0.18253812 = product of:
      0.24338417 = sum of:
        0.06981198 = weight(_text_:web in 4560) [ClassicSimilarity], result of:
          0.06981198 = score(doc=4560,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 4560, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4560)
        0.03959212 = weight(_text_:search in 4560) [ClassicSimilarity], result of:
          0.03959212 = score(doc=4560,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 4560, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=4560)
        0.13398007 = sum of:
          0.09378988 = weight(_text_:engine in 4560) [ClassicSimilarity], result of:
            0.09378988 = score(doc=4560,freq=2.0), product of:
              0.26447627 = queryWeight, product of:
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.049439456 = queryNorm
              0.35462496 = fieldWeight in 4560, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.046875 = fieldNorm(doc=4560)
          0.04019018 = weight(_text_:22 in 4560) [ClassicSimilarity], result of:
            0.04019018 = score(doc=4560,freq=2.0), product of:
              0.17312855 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049439456 = queryNorm
              0.23214069 = fieldWeight in 4560, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4560)
      0.75 = coord(3/4)
    
    Abstract
     In this article, several procedures (e.g., measurements, information retrieval analyses, power law, association rules, hierarchical clustering) are introduced that were applied to a pilot digital library. Information retrieval by web users on the internal search engine of the pilot digital library from 01/01/2003 to 01/01/2006 was analyzed. With the power law method of data processing, a constant information retrieval pattern was established that remained stable over a longer period of time. After this, the data were analyzed further. On the basis of the accomplished measurements and analyses, a series of mental models of web users for global (educational) purposes was developed (e.g., the metamodel of the thought hierarchy of web users, the segmentation model of web users), and the users were profiled into four different groups (adventurers, observers, applicable, and know-alls). The article concludes with the construction of a new knowledge management solution called a multidimensional rank thesaurus.
    Date
    13. 7.2011 14:47:22
  8. Unkel, J.; Haas, A.: ¬The effects of credibility cues on the selection of search engine results (2017) 0.18
    0.1756793 = product of:
      0.23423907 = sum of:
        0.029088326 = weight(_text_:web in 3752) [ClassicSimilarity], result of:
          0.029088326 = score(doc=3752,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 3752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3752)
        0.10942685 = weight(_text_:search in 3752) [ClassicSimilarity], result of:
          0.10942685 = score(doc=3752,freq=22.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.6368113 = fieldWeight in 3752, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3752)
        0.095723905 = product of:
          0.19144781 = sum of:
            0.19144781 = weight(_text_:engine in 3752) [ClassicSimilarity], result of:
              0.19144781 = score(doc=3752,freq=12.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.72387516 = fieldWeight in 3752, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3752)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Web search engines act as gatekeepers when people search for information online. Research has shown that search engine users seem to trust the search engines' ranking uncritically and mostly select top-ranked results. This study further examines search engine users' selection behavior. Drawing from the credibility and information research literature, we test whether the presence or absence of certain credibility cues influences the selection probability of search engine results. In an observational study, participants (N = 247) completed two information research tasks on preset search engine results pages, on which three credibility cues (source reputation, message neutrality, and social recommendations) as well as the search result ranking were systematically varied. The results of our study confirm the significance of the ranking. Of the three credibility cues, only reputation had an additional effect on selection probabilities. Personal characteristics (prior knowledge about the researched issues, search engine usage patterns, etc.) did not influence the preference for search results linked with certain credibility cues. These findings are discussed in light of situational and contextual characteristics (e.g., involvement, low-cost scenarios).
  9. Li, Z.: ¬A domain specific search engine with explicit document relations (2013) 0.17
    0.17355756 = product of:
      0.23141009 = sum of:
        0.08726498 = weight(_text_:web in 1210) [ClassicSimilarity], result of:
          0.08726498 = score(doc=1210,freq=18.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.5408555 = fieldWeight in 1210, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
        0.06598687 = weight(_text_:search in 1210) [ClassicSimilarity], result of:
          0.06598687 = score(doc=1210,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.3840117 = fieldWeight in 1210, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
        0.07815824 = product of:
          0.15631647 = sum of:
            0.15631647 = weight(_text_:engine in 1210) [ClassicSimilarity], result of:
              0.15631647 = score(doc=1210,freq=8.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.59104156 = fieldWeight in 1210, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1210)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web, aiming at converting the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using a standardized ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. At Ericsson, similar problems occur. Massive numbers of documents with well-defined structures are being created. Although these documents contain domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents only limited data to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information and annotate the documents with these data using formal markup languages. We propose this project to develop a domain-specific search engine that processes different documents and builds explicit relations between them. The research project has three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations between documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
    Theme
    Semantic Web
  10. Vaughan, L.; Romero-Frías, E.: Web search volume as a predictor of academic fame : an exploration of Google trends (2014) 0.17
    0.16609338 = product of:
      0.22145784 = sum of:
        0.06981198 = weight(_text_:web in 1233) [ClassicSimilarity], result of:
          0.06981198 = score(doc=1233,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 1233, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1233)
        0.104750924 = weight(_text_:search in 1233) [ClassicSimilarity], result of:
          0.104750924 = score(doc=1233,freq=14.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.6095997 = fieldWeight in 1233, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=1233)
        0.04689494 = product of:
          0.09378988 = sum of:
            0.09378988 = weight(_text_:engine in 1233) [ClassicSimilarity], result of:
              0.09378988 = score(doc=1233,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.35462496 = fieldWeight in 1233, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1233)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Searches conducted on web search engines reflect the interests of users and society. Google Trends, which provides information about the queries searched by users of the Google web search engine, is a rich data source from which a wealth of information can be mined. We investigated the possibility of using web search volume data from Google Trends to predict academic fame. As queries are language-dependent, we studied universities from two countries with different languages, the United States and Spain. We found a significant correlation between the search volume of a university name and the university's academic reputation or fame. We also examined the effect of some Google Trends features, namely limiting the search to a specific country or to a topic category, on the search volume data. Finally, we examined the effect of university size on the correlations found, to gain a deeper understanding of the nature of the relationships.
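     Result 10 reports a correlation between search volume and academic reputation. Purely as an illustration of that kind of test - the numbers are invented and none of the paper's data are reproduced - a minimal rank-correlation sketch with SciPy:

       from scipy.stats import spearmanr

       # Hypothetical values: Google Trends search volumes and reputation ranks
       # (rank 1 = best) for six fictional universities.
       search_volume = [82, 61, 45, 30, 22, 15]
       reputation_rank = [1, 2, 4, 3, 5, 6]

       rho, p_value = spearmanr(search_volume, reputation_rank)
       print(rho, p_value)   # strongly negative rho: higher volume, better (lower) rank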
  11. Balakrishnan, V.; Ahmadi, K.; Ravana, S.D.: Improving retrieval relevance using users' explicit feedback (2016) 0.16
    0.16088547 = product of:
      0.21451396 = sum of:
        0.029088326 = weight(_text_:web in 2921) [ClassicSimilarity], result of:
          0.029088326 = score(doc=2921,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 2921, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2921)
        0.07377557 = weight(_text_:search in 2921) [ClassicSimilarity], result of:
          0.07377557 = score(doc=2921,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.4293381 = fieldWeight in 2921, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2921)
        0.11165006 = sum of:
          0.07815824 = weight(_text_:engine in 2921) [ClassicSimilarity], result of:
            0.07815824 = score(doc=2921,freq=2.0), product of:
              0.26447627 = queryWeight, product of:
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.049439456 = queryNorm
              0.29552078 = fieldWeight in 2921, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2921)
          0.03349182 = weight(_text_:22 in 2921) [ClassicSimilarity], result of:
            0.03349182 = score(doc=2921,freq=2.0), product of:
              0.17312855 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049439456 = queryNorm
              0.19345059 = fieldWeight in 2921, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2921)
      0.75 = coord(3/4)
    
    Abstract
     Purpose - The purpose of this paper is to improve the relevance of users' search results by making use of their explicit feedback. Design/methodology/approach - CoRRe, an explicit feedback model integrating three popular feedback types, namely Comment, Rating and Referral, is proposed in this study. The model is further enhanced using case-based reasoning in retrieving the top-5 results. A search engine prototype was developed using a Text REtrieval Conference (TREC) collection as the document collection, and results were evaluated at three levels (i.e. top-5, 10 and 15). A user evaluation involving 28 students was administered, focussing on 20 queries. Findings - Both Mean Average Precision and Normalized Discounted Cumulative Gain results indicate that CoRRe achieves the highest retrieval precision at all three levels compared to the other feedback models. Furthermore, independent t-tests showed the precision differences to be significant. Rating was found to be the most popular technique among the participants, producing the best precision compared to referral and comments. Research limitations/implications - The findings suggest that search retrieval relevance can be significantly improved when users' explicit feedback is integrated, so web-based systems should find ways to use this feedback to provide better recommendations or search results to their users. Originality/value - The study is novel in that users' comments, ratings and referrals were taken into consideration to improve their overall search experience.
    Date
    20. 1.2015 18:30:22
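     Result 11 evaluates retrieval quality with Mean Average Precision and Normalized Discounted Cumulative Gain at cutoffs 5, 10 and 15. For readers unfamiliar with the latter, a minimal NDCG@k sketch over graded relevance judgments (the judgments are invented; this is not the paper's code):

       import math

       def dcg(relevances):
           # Discounted cumulative gain for a ranked list of graded judgments.
           return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

       def ndcg_at_k(relevances, k):
           ideal = sorted(relevances, reverse=True)
           ideal_dcg = dcg(ideal[:k])
           return dcg(relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

       # Hypothetical graded judgments (0-3) in ranked order for one query.
       judgments = [3, 2, 3, 0, 1, 2]
       print(ndcg_at_k(judgments, 5))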
  12. Yilmaz, T.; Ozcan, R.; Altingovde, I.S.; Ulusoy, Ö.: Improving educational web search for question-like queries through subject classification (2019) 0.16
    0.15854177 = product of:
      0.21138902 = sum of:
        0.050382458 = weight(_text_:web in 5041) [ClassicSimilarity], result of:
          0.050382458 = score(doc=5041,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3122631 = fieldWeight in 5041, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5041)
        0.093319535 = weight(_text_:search in 5041) [ClassicSimilarity], result of:
          0.093319535 = score(doc=5041,freq=16.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.54307455 = fieldWeight in 5041, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5041)
        0.06768702 = product of:
          0.13537404 = sum of:
            0.13537404 = weight(_text_:engine in 5041) [ClassicSimilarity], result of:
              0.13537404 = score(doc=5041,freq=6.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.51185703 = fieldWeight in 5041, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5041)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Students use general web search engines as their primary research source when trying to find answers to school-related questions. Although search engines are highly relevant for the general population, they may return results that are outside the educational context. Another rising trend, social community question answering websites, is the second choice for students who try to get answers from their peers online. We attempt to discover possible improvements in educational search by leveraging both of these information sources. For this purpose, we first implement a classifier for educational questions. This classifier is built with an ensemble method that employs several standard learning algorithms and retrieval-based approaches that utilize external resources. We also build a query expander to facilitate classification. We further improve the classification using search engine results and obtain 83.5% accuracy. Although our work is entirely based on the Turkish language, the features could easily be mapped to other languages as well. In order to find out whether search engine ranking can be improved in the education domain using the classification model, we collect and label a set of query results retrieved from a general web search engine. We propose five ad-hoc methods to improve search ranking based on the idea that the query-document category relation is an indicator of relevance. We evaluate these methods for overall performance, for varying query lengths, and for factoid and non-factoid queries. We show that some of the methods significantly improve the rankings in the education domain.
  13. Willson, R.; Given, L.M.: ¬The effect of spelling and retrieval system familiarity on search behavior in online public access catalogs : a mixed methods study (2010) 0.16
    0.15599944 = product of:
      0.20799924 = sum of:
        0.041137107 = weight(_text_:web in 4042) [ClassicSimilarity], result of:
          0.041137107 = score(doc=4042,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25496176 = fieldWeight in 4042, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4042)
        0.12778303 = weight(_text_:search in 4042) [ClassicSimilarity], result of:
          0.12778303 = score(doc=4042,freq=30.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.7436354 = fieldWeight in 4042, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4042)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 4042) [ClassicSimilarity], result of:
              0.07815824 = score(doc=4042,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 4042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4042)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Although technology can often correct spelling errors, the complex tasks of information searching and retrieval in an online public access catalog (OPAC) are made more difficult by these errors in users' input and bibliographic records. This study examines the search behaviors of 38 university students, divided into groups with either easy-to-spell or difficult-to-spell search terms, who were asked to find items in the OPAC with these search terms. Search behaviors and strategy use in the OPAC and on the World Wide Web (WWW) were examined. In general, students used familiar Web resources to check their spelling or discover more about the assigned topic. Students with difficult-to-spell search terms checked spelling more often, changed search strategies to look for the general topic and had fewer successful searches. Students unable to find the correct spelling of a search term were unable to complete their search. Students tended to search the OPAC as they would search a search engine, with few search terms or complex search strategies. The results of this study have implications for spell checking, user-focused OPAC design, and cataloging. Students' search behaviors are discussed by expanding Thatcher's (2006) Information-Seeking Process and Tactics for the WWW model to include OPACs.
  14. Spink, A.; Danby, S.; Mallan, K.; Butler, C.: Exploring young children's web searching and technoliteracy (2010) 0.15
    0.15437317 = product of:
      0.2058309 = sum of:
        0.100764915 = weight(_text_:web in 3623) [ClassicSimilarity], result of:
          0.100764915 = score(doc=3623,freq=24.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.6245262 = fieldWeight in 3623, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3623)
        0.06598687 = weight(_text_:search in 3623) [ClassicSimilarity], result of:
          0.06598687 = score(doc=3623,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.3840117 = fieldWeight in 3623, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3623)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 3623) [ClassicSimilarity], result of:
              0.07815824 = score(doc=3623,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 3623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3623)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Purpose - This paper aims to report findings from an exploratory study investigating the web interactions and technoliteracy of children in the early childhood years. Previous research has studied aspects of older children's technoliteracy and web searching; however, few studies have analyzed web search data from children younger than six years of age. Design/methodology/approach - The study explored the Google web searching and technoliteracy of young children who are enrolled in a "preparatory classroom" or kindergarten (the year before young children begin compulsory schooling in Queensland, Australia). Young children were video- and audio-taped while conducting Google web searches in the classroom. The data were qualitatively analysed to understand the young children's web search behaviour. Findings - The findings show that young children engage in complex web searches, including keyword searching and browsing, query formulation and reformulation, relevance judgments, successive searches, information multitasking and collaborative behaviours. The study results provide significant initial insights into young children's web searching and technoliteracy. Practical implications - The use of web search engines by young children is an important research area with implications for educators and web technologies developers. Originality/value - This is the first study of young children's interaction with a web search engine.
  15. Lewandowski, D.: Evaluating the retrieval effectiveness of web search engines using a representative query sample (2015) 0.15
    0.15349582 = product of:
      0.2046611 = sum of:
        0.03490599 = weight(_text_:web in 2157) [ClassicSimilarity], result of:
          0.03490599 = score(doc=2157,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 2157, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2157)
        0.08853068 = weight(_text_:search in 2157) [ClassicSimilarity], result of:
          0.08853068 = score(doc=2157,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.51520574 = fieldWeight in 2157, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=2157)
        0.08122443 = product of:
          0.16244885 = sum of:
            0.16244885 = weight(_text_:engine in 2157) [ClassicSimilarity], result of:
              0.16244885 = score(doc=2157,freq=6.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.6142285 = fieldWeight in 2157, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2157)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Search engine retrieval effectiveness studies are usually small scale, using only limited query samples. Furthermore, queries are selected by the researchers. We address these issues by taking a random representative sample of 1,000 informational and 1,000 navigational queries from a major German search engine and comparing Google's and Bing's results based on this sample. Jurors were found through crowdsourcing, and data were collected using specialized software, the Relevance Assessment Tool (RAT). We found that although Google outperforms Bing in both query types, the difference in the performance for informational queries was rather low. However, for navigational queries, Google found the correct answer in 95.3% of cases, whereas Bing only found the correct answer 76.6% of the time. We conclude that search engine performance on navigational queries is of great importance, because users in this case can clearly identify queries that have returned correct results. So, performance on this query type may contribute to explaining user satisfaction with search engines.
  16. Thelwall, M.: Assessing web search engines : a webometric approach (2011) 0.15
    0.15316099 = product of:
      0.20421466 = sum of:
        0.049364526 = weight(_text_:web in 10) [ClassicSimilarity], result of:
          0.049364526 = score(doc=10,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3059541 = fieldWeight in 10, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=10)
        0.08853068 = weight(_text_:search in 10) [ClassicSimilarity], result of:
          0.08853068 = score(doc=10,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.51520574 = fieldWeight in 10, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=10)
        0.06631946 = product of:
          0.13263892 = sum of:
            0.13263892 = weight(_text_:engine in 10) [ClassicSimilarity], result of:
              0.13263892 = score(doc=10,freq=4.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.5015154 = fieldWeight in 10, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=10)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Information Retrieval (IR) research typically evaluates search systems in terms of the standard precision and recall measures, with the F-measure used to weight their relative importance (e.g. van Rijsbergen, 1979). All of these assess the extent to which the system returns good matches for a query. In contrast, webometric measures are designed specifically for web search engines: they monitor changes in results over time and probe various aspects of the internal logic by which a search engine selects the results to be returned. This chapter introduces a range of webometric measurements and illustrates them with case studies of Google, Bing and Yahoo! This is a very fertile area for simple and complex new investigations into search engine results.
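     Result 16 contrasts webometric measures with the standard IR measures it names: precision, recall and the F-measure that weights their relative importance. A minimal sketch of those three measures for a single query (the retrieved and relevant sets are invented for illustration):

       def precision_recall_f(retrieved, relevant, beta=1.0):
           retrieved, relevant = set(retrieved), set(relevant)
           hits = len(retrieved & relevant)
           precision = hits / len(retrieved) if retrieved else 0.0
           recall = hits / len(relevant) if relevant else 0.0
           if precision + recall == 0:
               return precision, recall, 0.0
           f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
           return precision, recall, f_beta

       # Hypothetical result list and relevance judgments.
       print(precision_recall_f(["d1", "d2", "d3", "d4"], ["d2", "d4", "d7"]))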
  17. Harth, A.; Hogan, A.; Umbrich, J.; Kinsella, S.; Polleres, A.; Decker, S.: Searching and browsing linked data with SWSE* (2012) 0.15
    0.15273765 = product of:
      0.2036502 = sum of:
        0.07125156 = weight(_text_:web in 410) [ClassicSimilarity], result of:
          0.07125156 = score(doc=410,freq=12.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.4416067 = fieldWeight in 410, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=410)
        0.093319535 = weight(_text_:search in 410) [ClassicSimilarity], result of:
          0.093319535 = score(doc=410,freq=16.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.54307455 = fieldWeight in 410, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=410)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 410) [ClassicSimilarity], result of:
              0.07815824 = score(doc=410,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 410, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=410)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Web search engines such as Google, Yahoo!, MSN/Bing, and Ask are far from the consummate Web search solution: they do not typically produce direct answers to queries but instead recommend a selection of related documents from the Web. We note that in more recent years, search engines have begun to provide direct answers to prose queries matching certain common templates, for example "population of china" or "12 euro in dollars", but such functionality is limited to a small subset of popular user queries. Furthermore, search engines now provide individual and focused search interfaces over images, videos, locations, news articles, books, research papers, blogs, and real-time social media; although these tools are inarguably powerful, they are limited to their respective domains. In the general case, search engines are not suitable for complex information-gathering tasks requiring aggregation from multiple indexed documents: for such tasks, users must manually aggregate tidbits of pertinent information from various pages. In effect, such limitations are predicated on the lack of machine-interpretable structure in HTML documents, which is often limited to generic markup tags mainly concerned with document rendering and linking. Most of the real content is contained in prose text, which is inherently difficult for machines to interpret.
    Object
    Semantic Web Search Engine
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
    Theme
    Semantic Web
  18. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.15
    0.15254721 = product of:
      0.30509442 = sum of:
        0.07465562 = weight(_text_:search in 1149) [ClassicSimilarity], result of:
          0.07465562 = score(doc=1149,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.43445963 = fieldWeight in 1149, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=1149)
        0.2304388 = sum of:
          0.1768519 = weight(_text_:engine in 1149) [ClassicSimilarity], result of:
            0.1768519 = score(doc=1149,freq=4.0), product of:
              0.26447627 = queryWeight, product of:
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.049439456 = queryNorm
              0.6686872 = fieldWeight in 1149, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
          0.053586908 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
            0.053586908 = score(doc=1149,freq=2.0), product of:
              0.17312855 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049439456 = queryNorm
              0.30952093 = fieldWeight in 1149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
      0.5 = coord(2/4)
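    Code sketch
    The breakdown above, like every other scoring breakdown in this result list, follows Lucene's ClassicSimilarity: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm (the breakdown itself shows the square root, e.g. 2.0 = tf(freq=4.0)), and the sum over terms is multiplied by the coord factor, the fraction of query clauses that matched. A minimal Python sketch reproducing the numbers for this entry; the helper and variable names are illustrative, not part of Lucene's API:
      import math

      def term_weight(term_freq, idf, query_norm, field_norm):
          # Per-term contribution in Lucene ClassicSimilarity (TF-IDF):
          # queryWeight = idf * queryNorm, fieldWeight = sqrt(tf) * idf * fieldNorm.
          query_weight = idf * query_norm
          field_weight = math.sqrt(term_freq) * idf * field_norm
          return query_weight * field_weight

      QUERY_NORM = 0.049439456   # queryNorm from the breakdown above
      FIELD_NORM = 0.0625        # fieldNorm(doc=1149)

      weights = [
          term_weight(4.0, 3.475677,  QUERY_NORM, FIELD_NORM),  # "search" -> 0.07465562
          term_weight(4.0, 5.349498,  QUERY_NORM, FIELD_NORM),  # "engine" -> 0.1768519
          term_weight(2.0, 3.5018296, QUERY_NORM, FIELD_NORM),  # "22"     -> 0.053586908
      ]

      score = (2 / 4) * sum(weights)   # coord(2/4)
      print(score)                     # ~0.152547, matching the 0.15254721 above up to float rounding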
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which PageRank, the ranking algorithm behind the Google search engine, is based to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
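    Code sketch
    Entry 18 rests on the shared premise of PageRank and citation indexing: a document's weight derives from the weights of the documents that link to (cite) it. A toy power-iteration sketch over an invented four-page link graph, not the production algorithm, which adds scale, dangling-node and personalization handling:
      # links[p] lists the pages p links to (cites); damping factor 0.85 as in the original formulation.
      links = {
          "A": ["B", "C"],
          "B": ["C"],
          "C": ["A"],
          "D": ["C"],
      }
      pages = list(links)
      d, n = 0.85, len(pages)
      rank = {p: 1.0 / n for p in pages}

      for _ in range(50):   # iterate until the ranks are (approximately) stable
          rank = {
              p: (1 - d) / n
                 + d * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
              for p in pages
          }

      print({p: round(r, 3) for p in sorted(rank)})
      # "C" comes out highest: it is cited most often, and by well-cited pages.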
  19. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on Million Short (2016) 0.15
    0.15173718 = product of:
      0.20231625 = sum of:
        0.050382458 = weight(_text_:web in 3144) [ClassicSimilarity], result of:
          0.050382458 = score(doc=3144,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3122631 = fieldWeight in 3144, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3144)
        0.07377557 = weight(_text_:search in 3144) [ClassicSimilarity], result of:
          0.07377557 = score(doc=3144,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.4293381 = fieldWeight in 3144, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3144)
        0.07815824 = product of:
          0.15631647 = sum of:
            0.15631647 = weight(_text_:engine in 3144) [ClassicSimilarity], result of:
              0.15631647 = score(doc=3144,freq=8.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.59104156 = fieldWeight in 3144, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3144)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Users of web search engines are known to focus mostly on the top-ranked results of the search engine result page. While many studies support this well-known information-seeking pattern, only a few concentrate on the question of what users are missing by neglecting lower-ranked results. To learn more about the relevance distribution in the so-called long tail, we conducted a relevance assessment study with the Million Short long-tail web search engine. While we see a clear difference in content between the head and the tail of the search engine result list, we find no statistically significant differences in the binary relevance judgments and only weakly significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for diversifying web search engine result lists, but more evaluation is needed to describe the differences clearly.
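    Code sketch
    Entry 19's head-versus-tail comparison rests on two kinds of judgments: binary (relevant / not relevant), compared as proportions, and graded (e.g. a 0-3 scale), compared distribution-wise. A toy sketch with invented judgments using SciPy; this is not the study's data and not necessarily the authors' exact statistical procedure:
      from scipy.stats import chi2_contingency, mannwhitneyu

      # Invented assessments: 1 = relevant, 0 = not relevant.
      head_binary = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
      tail_binary = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]

      # Binary judgments: compare relevant / non-relevant counts of the two groups.
      table = [
          [sum(head_binary), len(head_binary) - sum(head_binary)],
          [sum(tail_binary), len(tail_binary) - sum(tail_binary)],
      ]
      _, p_binary, _, _ = chi2_contingency(table)

      # Graded judgments on a 0-3 scale: compare the two samples rank-wise.
      head_graded = [3, 2, 3, 1, 2, 3, 0, 2, 3, 2]
      tail_graded = [2, 1, 2, 2, 0, 2, 1, 1, 2, 2]
      _, p_graded = mannwhitneyu(head_graded, tail_graded)

      print(f"binary p = {p_binary:.3f}, graded p = {p_graded:.3f}")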
  20. Stuart, D.: Web metrics for library and information professionals (2014) 0.15
    0.14830285 = product of:
      0.19773714 = sum of:
        0.13037933 = weight(_text_:web in 2274) [ClassicSimilarity], result of:
          0.13037933 = score(doc=2274,freq=82.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.808072 = fieldWeight in 2274, product of:
              9.055386 = tf(freq=82.0), with freq of:
                82.0 = termFreq=82.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.04000242 = weight(_text_:search in 2274) [ClassicSimilarity], result of:
          0.04000242 = score(doc=2274,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.23279473 = fieldWeight in 2274, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.027355384 = product of:
          0.05471077 = sum of:
            0.05471077 = weight(_text_:engine in 2274) [ClassicSimilarity], result of:
              0.05471077 = score(doc=2274,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.20686457 = fieldWeight in 2274, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2274)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. The book will provide a practical introduction to web metrics for a wide range of library and information professionals, from the bibliometrician wanting to demonstrate a wider impact of a researcher's work than traditional citation databases can capture, to the reference librarian wanting to measure how successfully they are engaging with their users on Twitter. It will be a valuable tool for anyone who wants not only to understand the impact of content but also to demonstrate this impact to others within the organization and beyond.
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
    RSWK
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
    Subject
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
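    Code sketch
    One of the building blocks listed in the contents above, the anatomy of a URL, is easy to make concrete with the Python standard library; the example URL is invented:
      from urllib.parse import urlparse, parse_qs

      url = "https://www.example.org/metrics/report?year=2014&format=html#summary"
      parts = urlparse(url)

      print(parts.scheme)           # https
      print(parts.netloc)           # www.example.org
      print(parts.path)             # /metrics/report
      print(parse_qs(parts.query))  # {'year': ['2014'], 'format': ['html']}
      print(parts.fragment)         # summary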
