Search (200 results, page 1 of 10)

  • theme_ss:"Suchmaschinen"
  1. Koch, T.: Quality-controlled subject gateways : definitions, typologies, empirical overview (2000) 0.15
    0.15263343 = product of:
      0.22895013 = sum of:
        0.12552495 = weight(_text_:systematic in 631) [ClassicSimilarity], result of:
          0.12552495 = score(doc=631,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.44203353 = fieldWeight in 631, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=631)
        0.103425175 = sum of:
          0.05630404 = weight(_text_:indexing in 631) [ClassicSimilarity], result of:
            0.05630404 = score(doc=631,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.29604656 = fieldWeight in 631, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=631)
          0.047121134 = weight(_text_:22 in 631) [ClassicSimilarity], result of:
            0.047121134 = score(doc=631,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.2708308 = fieldWeight in 631, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=631)
      0.6666667 = coord(2/3)
    
    Abstract
    'Quality-controlled subject gateways' are Internet services which apply a rich set of quality measures to support systematic resource discovery. Considerable manual effort is used to secure a selection of resources which meet quality criteria and to display a rich description of these resources with standards-based metadata. Regular checking and updating ensure good collection management. A main goal is to provide a high quality of subject access through indexing resources using controlled vocabularies and by offering a deep classification structure for advanced searching and browsing. This article provides an initial empirical overview of existing services of this kind, their approaches and technologies, based on proposed working definitions and typologies of subject gateways.
    Date
    22. 6.2002 19:37:55
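    Note on the relevance values: the breakdowns shown with each entry are Lucene ClassicSimilarity (TF-IDF) explanations. Each term weight is queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, and the sum over matching terms is scaled by the coordination factor coord(matched clauses / total clauses). A minimal Python sketch reproducing the 0.15263343 score of entry 1 from the factors in the explanation above (all constants are copied from that explanation; the helper name term_weight is illustrative, not Lucene's API):

      import math

      def term_weight(freq, idf, query_norm, field_norm):
          """ClassicSimilarity term weight: queryWeight * fieldWeight."""
          query_weight = idf * query_norm                     # idf(t) * queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
          return query_weight * field_weight

      QUERY_NORM = 0.049684696   # queryNorm from the explanation above

      # per-term factors for doc 631 (entry 1), copied from the explanation
      w_systematic = term_weight(2.0, 5.715473, QUERY_NORM, 0.0546875)    # ~0.1255
      w_indexing   = term_weight(2.0, 3.8278677, QUERY_NORM, 0.0546875)   # ~0.0563
      w_22         = term_weight(2.0, 3.5018296, QUERY_NORM, 0.0546875)   # ~0.0471

      # two of the three query clauses matched, hence coord(2/3)
      score = (w_systematic + (w_indexing + w_22)) * (2.0 / 3.0)
      print(score)   # ~0.1526, matching entry 1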
  2. Koch, T.: Searching the Web : systematic overview over indexes (1995) 0.07
    0.07172854 = product of:
      0.21518563 = sum of:
        0.21518563 = weight(_text_:systematic in 3169) [ClassicSimilarity], result of:
          0.21518563 = score(doc=3169,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.7577718 = fieldWeight in 3169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.09375 = fieldNorm(doc=3169)
      0.33333334 = coord(1/3)
    
  3. Su, L.T.: Developing a comprehensive and systematic model of user evaluation of Web-based search engines (1997) 0.07
    0.07172854 = product of:
      0.21518563 = sum of:
        0.21518563 = weight(_text_:systematic in 317) [ClassicSimilarity], result of:
          0.21518563 = score(doc=317,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.7577718 = fieldWeight in 317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.09375 = fieldNorm(doc=317)
      0.33333334 = coord(1/3)
    
  4. Su, L.T.: A comprehensive and systematic model of user evaluation of Web search engines : II. An evaluation by undergraduates (2003) 0.07
    0.07099311 = product of:
      0.10648966 = sum of:
        0.08966068 = weight(_text_:systematic in 2117) [ClassicSimilarity], result of:
          0.08966068 = score(doc=2117,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 2117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2117)
        0.016828977 = product of:
          0.033657953 = sum of:
            0.033657953 = weight(_text_:22 in 2117) [ClassicSimilarity], result of:
              0.033657953 = score(doc=2117,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.19345059 = fieldWeight in 2117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2117)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    24. 1.2004 18:27:22
  5. Koch, T.: Searching the Web : systematic overview over indexes (1995) 0.06
    0.059773788 = product of:
      0.17932136 = sum of:
        0.17932136 = weight(_text_:systematic in 3205) [ClassicSimilarity], result of:
          0.17932136 = score(doc=3205,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.6314765 = fieldWeight in 3205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.078125 = fieldNorm(doc=3205)
      0.33333334 = coord(1/3)
    
  6. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.06
    0.055101942 = product of:
      0.16530582 = sum of:
        0.16530582 = sum of:
          0.11145309 = weight(_text_:indexing in 1149) [ClassicSimilarity], result of:
            0.11145309 = score(doc=1149,freq=6.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.5860202 = fieldWeight in 1149, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
          0.053852726 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
            0.053852726 = score(doc=1149,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.30952093 = fieldWeight in 1149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
    Theme
    Citation indexing
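    For context on entry 6: PageRank treats hyperlinks much like citations and propagates weight iteratively over the link graph. A minimal power-iteration sketch, with a toy graph and the commonly cited damping factor 0.85 as illustrative assumptions (not taken from the paper):

      def pagerank(links, damping=0.85, iterations=50):
          """Distribute rank over outgoing links by simple power iteration."""
          pages = list(links)
          rank = {p: 1.0 / len(pages) for p in pages}
          for _ in range(iterations):
              new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
              for page, outgoing in links.items():
                  if not outgoing:                  # dangling page: spread rank evenly
                      for p in pages:
                          new_rank[p] += damping * rank[page] / len(pages)
                  else:
                      for target in outgoing:
                          new_rank[target] += damping * rank[page] / len(outgoing)
              rank = new_rank
          return rank

      # toy "citation" graph: A and B cite C, C cites A
      print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))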
  7. Su, L.T.: A comprehensive and systematic model of user evaluation of Web search engines : I. Theory and background (2003) 0.05
    0.05176563 = product of:
      0.15529688 = sum of:
        0.15529688 = weight(_text_:systematic in 5164) [ClassicSimilarity], result of:
          0.15529688 = score(doc=5164,freq=6.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.54687476 = fieldWeight in 5164, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5164)
      0.33333334 = coord(1/3)
    
    Abstract
    The project proposes and tests a comprehensive and systematic model of user evaluation of Web search engines. The project contains two parts. Part I describes the background and the model including a set of criteria and measures, and a method for implementation. It includes a literature review for two periods. The early period (1995-1996) portrays the settings for developing the model and the later period (1997-2000) places two applications of the model among contemporary evaluation work. Part II presents one of the applications that investigated the evaluation of four major search engines by 36 undergraduates from three academic disciplines. It reports results from statistical analyses of quantitative data for the entire sample and among disciplines, and content analysis of verbal data containing users' reasons for satisfaction. The proposed model aims to provide systematic feedback to engine developers or service providers for system improvement and to generate useful insight for system design and tool choice. The model can be applied to evaluating other compatible information retrieval systems or information retrieval (IR) techniques. It intends to contribute to developing a theory of relevance that goes beyond topicality to include value and usefulness for designing user-oriented information retrieval systems.
  8. Gourbin, G.: Une nouvelle profession : cyber-documentaliste l'exemple de Nomade (1998) 0.04
    0.042249024 = product of:
      0.12674707 = sum of:
        0.12674707 = sum of:
          0.079625934 = weight(_text_:indexing in 2980) [ClassicSimilarity], result of:
            0.079625934 = score(doc=2980,freq=4.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.41867304 = fieldWeight in 2980, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2980)
          0.047121134 = weight(_text_:22 in 2980) [ClassicSimilarity], result of:
            0.047121134 = score(doc=2980,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.2708308 = fieldWeight in 2980, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2980)
      0.33333334 = coord(1/3)
    
    Abstract
    Users who want to exploit all the information sources on the Web will need an efficient search and selection tool, e.g. a directory or search engine. Directories list Web sites and analyze their contents. Describes the behind-the-scenes work of documentalists specialized in surfing, tracking and indexing French-language sites for the directory Nomade. Describes the creation of Nomade, its functioning and indexing, and how this new profession of 'cyber-documentalist' is changing the practices and functions of information professionals as they become Internet information organizers.
    Date
    1. 8.1996 22:01:00
  9. Ardo, A.; Lundberg, S.: A regional distributed WWW search and indexing service : the DESIRE way (1998) 0.04
    0.041326456 = product of:
      0.12397936 = sum of:
        0.12397936 = sum of:
          0.083589815 = weight(_text_:indexing in 4190) [ClassicSimilarity], result of:
            0.083589815 = score(doc=4190,freq=6.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.4395151 = fieldWeight in 4190, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.046875 = fieldNorm(doc=4190)
          0.04038954 = weight(_text_:22 in 4190) [ClassicSimilarity], result of:
            0.04038954 = score(doc=4190,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.23214069 = fieldWeight in 4190, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4190)
      0.33333334 = coord(1/3)
    
    Abstract
    Creates an open, metadata-aware system for distributed, collaborative WWW indexing. The system has 3 main components: a harvester (for collecting information), a database (for making the collection searchable), and a user interface (for making the information available). All components can be distributed across networked computers, thus supporting scalability. The system is metadata-aware and thus allows searches on several fields including title, document author and URL. Nordic Web Index (NWI) is an application using this system to create a regional Nordic Web-indexing service. NWI is built using 5 collaborating service points within the Nordic countries. The NWI databases can be used to build additional services.
    Date
    1. 8.1996 22:08:06
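    The harvester/database/user-interface architecture with fielded metadata described in entry 9 can be illustrated with a minimal fielded inverted index. This is a sketch under assumed record layouts and function names (harvest, search); it is not the DESIRE or NWI implementation:

      from collections import defaultdict

      # field -> term -> set of record ids: a minimal metadata-aware index
      index = defaultdict(lambda: defaultdict(set))
      records = {}

      def harvest(rec_id, metadata):
          """Store a harvested record and index each metadata field separately."""
          records[rec_id] = metadata
          for field, value in metadata.items():
              for term in value.lower().split():
                  index[field][term].add(rec_id)

      def search(field, term):
          """Search within a single field, e.g. title or author."""
          return sorted(index[field][term.lower()])

      harvest(1, {"title": "Nordic Web Index", "author": "Ardo"})
      harvest(2, {"title": "Regional indexing service", "author": "Lundberg"})
      print(search("title", "indexing"))   # -> [2]
      print(search("author", "ardo"))      # -> [1]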
  10. Schwartz, C.: Web search engines (1998) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 5700) [ClassicSimilarity], result of:
          0.10759281 = score(doc=5700,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 5700, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=5700)
      0.33333334 = coord(1/3)
    
    Abstract
    This review looks briefly at the history of WWW search engine development, considers the current state of affairs, and reflects on the future. Networked discovery tools have evolved along with Internet resource availability. WWW search engines display some complexity in their variety, content, resource acquisition strategies, and in the array of tools they deploy to assist users. A small but growing body of evaluation literature, much of it not systematic in nature, indicates that performance effectiveness is difficult to assess in this setting. Significant improvements in general-content search engine retrieval and ranking performance may not be possible, and are probably not worth the effort, although search engine providers have introduced some rudimentary attempts at personalization, summarization, and query expansion. The shift to distributed search across multitype database systems could extend general networked discovery and retrieval to include smaller resource collections with rich metadata and navigation tools.
  11. Zins, C.: Models for classifying Internet resources (2002) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 1160) [ClassicSimilarity], result of:
          0.10759281 = score(doc=1160,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 1160, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=1160)
      0.33333334 = coord(1/3)
    
    Abstract
    Designing systematic access to Internet resources is a major item on the agenda of researchers and practitioners in the field of information science, and is the focus of this study. A critical analysis of classification schemes used in major portals and Web classified directories exposes inconsistencies in the way they classify Internet resources. The inconsistencies indicate that the developers fail to differentiate the various classificatory models, and are unaware of their different rationales. The study establishes eight classificatory models for resources available to Internet users. Internet resources can be classified by subjects, objects, applications, users, locations, reference sources, media, and languages. The first five models are content-related; namely they characterize the content of the resource. The other three models are format-related; namely they characterize the format of the resource or its technological infrastructure. The study identifies and formulates the eight classificatory models, analyzes their rationales, and discusses alternative ways to combine them in a faceted integrated classification scheme.
  12. Wichor, M.B.: Variation in number of hits for complex searches in Google Scholar (2016) 0.04
    0.03586427 = product of:
      0.10759281 = sum of:
        0.10759281 = weight(_text_:systematic in 2909) [ClassicSimilarity], result of:
          0.10759281 = score(doc=2909,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 2909, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=2909)
      0.33333334 = coord(1/3)
    
    Abstract
    Google Scholar is often used to search for medical literature. The numbers of results reported by Google Scholar exceed the numbers reported by traditional databases. How reliable are these numbers? Why are the available 1,000 references often not all shown? Methods: For several complex search strategies used in systematic review projects, the number of citations and the total number of versions were calculated. Several search strategies were followed over a two-year period, registering fluctuations in reported search results. Results: Changes in numbers of reported search results varied enormously between search strategies and dates. Theories for calculations of the reported and shown number of hits were not proved. Conclusions: The number of hits reported in Google Scholar is an unreliable measure. Therefore, its repeatability is problematic, at least when equal results are needed.
  13. Wiley, D.L.: Beyond information retrieval : ways to provide content in context (1998) 0.03
    0.03447506 = product of:
      0.103425175 = sum of:
        0.103425175 = sum of:
          0.05630404 = weight(_text_:indexing in 3647) [ClassicSimilarity], result of:
            0.05630404 = score(doc=3647,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.29604656 = fieldWeight in 3647, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3647)
          0.047121134 = weight(_text_:22 in 3647) [ClassicSimilarity], result of:
            0.047121134 = score(doc=3647,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.2708308 = fieldWeight in 3647, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3647)
      0.33333334 = coord(1/3)
    
    Abstract
    The days of the traditional abstracting and indexing services are waning, as abstracts and bibliographic data become commodities. However, there are tremendous opportunities for those organizations willing to look beyond the status quo to the new possibilities enabled by the latest wave of advanced technologies. Those who own content need to focus on the delivery mechanisms and new markets that technology can provide. Features like automatic extraction of key concepts or names, collaborative filtering to help with trend analysis, and visualization techniques can take information past the retrieval stage and into the management area.
    Source
    Database. 21(1998) no.4, S.18-22
  14. Kanaeva, Z.: Ranking: Google und CiteSeer (2005) 0.03
    0.03447506 = product of:
      0.103425175 = sum of:
        0.103425175 = sum of:
          0.05630404 = weight(_text_:indexing in 3276) [ClassicSimilarity], result of:
            0.05630404 = score(doc=3276,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.29604656 = fieldWeight in 3276, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3276)
          0.047121134 = weight(_text_:22 in 3276) [ClassicSimilarity], result of:
            0.047121134 = score(doc=3276,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.2708308 = fieldWeight in 3276, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3276)
      0.33333334 = coord(1/3)
    
    Abstract
    In classical information retrieval, various methods were developed for ranking and for searching a homogeneous, unstructured document collection. The success of the Google search engine has shown that searching an inhomogeneous but interlinked document collection such as the Internet can be very effective when the links between documents are taken into account. Among the concepts realized by the Google search engine is a method for ranking search results (PageRank), which is briefly explained in this article. The article also discusses the concepts of a system called CiteSeer, which automatically indexes bibliographic references (Autonomous Citation Indexing, ACI). The latter turns a set of unconnected scientific documents into an interlinked collection and makes it possible to apply ranking methods based on those used by Google.
    Date
    20. 3.2005 16:23:22
  15. Markey, K.: Twenty-five years of end-user searching : part 2: future research directions (2007) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 443) [ClassicSimilarity], result of:
          0.08966068 = score(doc=443,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 443, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=443)
      0.33333334 = coord(1/3)
    
    Abstract
    This is the second part of a two-part article that examines 25 years of published research findings on end-user searching of online information retrieval (IR) systems. In Part 1, it was learned that people enter a few short search statements into online IR systems. Their searches do not resemble the systematic approach of expert searchers who use the full range of IR-system functionality. Part 2 picks up the discussion of research findings about end-user searching in the context of current information retrieval models. These models demonstrate that information retrieval is a complex event, involving changes in cognition, feelings, and/or events during the information seeking process. The author challenges IR researchers to design new studies of end-user searching, collecting data not only on system-feature use, but on multiple search sessions and controlling for variables such as domain knowledge expertise and expert system knowledge. Because future IR systems designers are likely to improve the functionality of online IR systems in response to answers to the new research questions posed here, the author concludes with advice to these designers about retaining the simplicity of online IR system interfaces.
  16. Kruschwitz, U.; Lungley, D.; Albakour, M-D.; Song, D.: Deriving query suggestions for site search (2013) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 1085) [ClassicSimilarity], result of:
          0.08966068 = score(doc=1085,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 1085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1085)
      0.33333334 = coord(1/3)
    
    Abstract
    Modern search engines have been moving away from simplistic interfaces that aimed at satisfying a user's need with a single-shot query. Interactive features are now integral parts of web search engines. However, generating good query modification suggestions remains a challenging issue. Query log analysis is one of the major strands of work in this direction. Although much research has been performed on query logs collected on the web as a whole, query log analysis to enhance search on smaller and more focused collections has attracted less attention, despite its increasing practical importance. In this article, we report on a systematic study of different query modification methods applied to a substantial query log collected on a local website that already uses an interactive search engine. We conducted experiments in which we asked users to assess the relevance of potential query modification suggestions that have been constructed using a range of log analysis methods and different baseline approaches. The experimental results demonstrate the usefulness of log analysis to extract query modification suggestions. Furthermore, our experiments demonstrate that a more fine-grained approach than grouping search requests into sessions allows for extraction of better refinement terms from query log files.
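    As a rough illustration of the kind of query-log analysis discussed in entry 16, the sketch below derives candidate refinement terms from queries that co-occur within the same session. The session grouping and the simple co-occurrence scoring are assumptions for illustration, not the methods evaluated in the paper:

      from collections import Counter

      def refinement_suggestions(sessions, seed, top_n=3):
          """Suggest terms that co-occur with a seed term within the same session."""
          counts = Counter()
          seed = seed.lower()
          for queries in sessions:
              terms = {t for q in queries for t in q.lower().split()}
              if seed in terms:
                  counts.update(terms - {seed})
          return [term for term, _ in counts.most_common(top_n)]

      # toy query log, already grouped into sessions
      log = [
          ["library opening hours", "library opening hours sunday"],
          ["library card", "library card renewal"],
          ["campus map"],
      ]
      print(refinement_suggestions(log, "library"))   # e.g. ['opening', 'hours', 'sunday']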
  17. Roy, R.S.; Agarwal, S.; Ganguly, N.; Choudhury, M.: Syntactic complexity of Web search queries through the lenses of language models, networks and users (2016) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 3188) [ClassicSimilarity], result of:
          0.08966068 = score(doc=3188,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 3188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3188)
      0.33333334 = coord(1/3)
    
    Abstract
    Across the world, millions of users interact with search engines every day to satisfy their information needs. As the Web grows bigger over time, such information needs, manifested through user search queries, also become more complex. However, there has been no systematic study that quantifies the structural complexity of Web search queries. In this research, we make an attempt towards understanding and characterizing the syntactic complexity of search queries using a multi-pronged approach. We use traditional statistical language modeling techniques to quantify and compare the perplexity of queries with natural language (NL). We then use complex network analysis for a comparative analysis of the topological properties of queries issued by real Web users and those generated by statistical models. Finally, we conduct experiments to study whether search engine users are able to identify real queries, when presented along with model-generated ones. The three complementary studies show that the syntactic structure of Web queries is more complex than what n-grams can capture, but simpler than NL. Queries, thus, seem to represent an intermediate stage between syntactic and non-syntactic communication.
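    The perplexity comparison mentioned in entry 17 can be sketched with a simple add-one-smoothed unigram model; the toy training queries and the unigram simplification are assumptions for illustration only:

      import math
      from collections import Counter

      def unigram_perplexity(train_queries, test_query):
          """Perplexity of a query under an add-one-smoothed unigram model."""
          counts = Counter(t for q in train_queries for t in q.lower().split())
          total = sum(counts.values())
          vocab = len(counts) + 1                     # +1 slot for unseen terms
          tokens = test_query.lower().split()
          log_prob = sum(math.log((counts[t] + 1) / (total + vocab)) for t in tokens)
          return math.exp(-log_prob / len(tokens))

      train = ["cheap flights london", "flights to london", "london hotels"]
      print(unigram_perplexity(train, "cheap london flights"))   # lower: more query-like
      print(unigram_perplexity(train, "quantum field theory"))   # higher: less query-like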
  18. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Analysis of change in users' assessment of search results over time (2017) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 3593) [ClassicSimilarity], result of:
          0.08966068 = score(doc=3593,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 3593, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3593)
      0.33333334 = coord(1/3)
    
    Abstract
    We present the first systematic study of the influence of time on user judgements for rankings and relevance grades of web search engine results. The goal of this study is to evaluate the change in user assessment of search results and explore how users' judgements change. To this end, we conducted a large-scale user study with 86 participants who evaluated 2 different queries and 4 diverse result sets twice with an interval of 2 months. To analyze the results we investigate whether 2 types of patterns of user behavior from the theory of categorical thinking hold for the case of evaluation of search results: (a) coarseness and (b) locality. To quantify these patterns we devised 2 new measures of change in user judgements and distinguish between local (when users swap between close ranks and relevance values) and nonlocal changes. Two types of judgements were considered in this study: (a) relevance on a 4-point scale, and (b) ranking on a 10-point scale without ties. We found that users tend to change their judgements of the results over time in about 50% of cases for relevance and in 85% of cases for ranking. However, the majority of these changes were local.
  19. Mukherjea, S.; Hirata, K.; Hara, Y.: Towards a multimedia World-Wide Web information retrieval engine (1997) 0.03
    0.029550051 = product of:
      0.08865015 = sum of:
        0.08865015 = sum of:
          0.048260607 = weight(_text_:indexing in 2678) [ClassicSimilarity], result of:
            0.048260607 = score(doc=2678,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.2537542 = fieldWeight in 2678, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.046875 = fieldNorm(doc=2678)
          0.04038954 = weight(_text_:22 in 2678) [ClassicSimilarity], result of:
            0.04038954 = score(doc=2678,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.23214069 = fieldWeight in 2678, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2678)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes a search engine that integrates text and image search. One or more Web sites can be indexed for both textual and image information, allowing the user to search based on keywords or images or both. Another problem with current search engines is that they show the results as pages of scrolled lists; this is not very user-friendly. The search engine allows the user to visualise the results in various ways. Explains the indexing and searching techniques of the search engine and highlights several features of the querying interface to make the retrieval process more efficient. Uses examples to show the usefulness of the technology.
    Date
    1. 8.1996 22:08:06
  20. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.03
    0.026304156 = product of:
      0.07891247 = sum of:
        0.07891247 = product of:
          0.2367374 = sum of:
            0.2367374 = weight(_text_:3a in 2514) [ClassicSimilarity], result of:
              0.2367374 = score(doc=2514,freq=2.0), product of:
                0.4212274 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049684696 = queryNorm
                0.56201804 = fieldWeight in 2514, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2514)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. See also: http://www2002.org/CDROM/refereed/643/.
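    The HITS family of algorithms named in the title of entry 20 scores pages as hubs and authorities by mutual reinforcement over the link graph. A minimal sketch of the basic scheme, with a toy graph as an illustrative assumption (the paper's improvements are not reproduced here):

      import math

      def hits(links, iterations=50):
          """Basic HITS: good authorities are pointed to by good hubs, and vice versa."""
          pages = set(links) | {t for targets in links.values() for t in targets}
          hub = {p: 1.0 for p in pages}
          auth = {p: 1.0 for p in pages}
          for _ in range(iterations):
              auth = {p: sum(hub[q] for q, targets in links.items() if p in targets)
                      for p in pages}
              norm = math.sqrt(sum(v * v for v in auth.values())) or 1.0
              auth = {p: v / norm for p, v in auth.items()}
              hub = {p: sum(auth[t] for t in links.get(p, [])) for p in pages}
              norm = math.sqrt(sum(v * v for v in hub.values())) or 1.0
              hub = {p: v / norm for p, v in hub.items()}
          return hub, auth

      hub, auth = hits({"A": ["B", "C"], "B": ["C"], "C": []})
      print(sorted(auth, key=auth.get, reverse=True))   # C emerges as the top authority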

Languages

  • e 113
  • d 82
  • f 3
  • nl 2

Types

  • a 170
  • el 17
  • m 14
  • s 3
  • p 2
  • x 2
  • r 1