Search (184 results, page 1 of 10)

  • Active filter: theme_ss:"Suchmaschinen"
  1. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.12
    Score breakdown (Lucene ClassicSimilarity): 0.1215 = coord(2/5) × (0.0579 + 0.2458), where each term weight is tf × idf × fieldNorm × queryWeight and queryWeight = idf × queryNorm (0.036484):
      weight(_text_:3a in doc 2514) = tf 1.414 (freq 2.0) × idf 8.478 (docFreq 24, maxDocs 44218) × fieldNorm 0.046875 × queryWeight 0.3093 = 0.1738; × coord(1/3) = 0.0579
      weight(_text_:2f in doc 2514) = tf 2.0 (freq 4.0) × idf 8.478 × fieldNorm 0.046875 × queryWeight 0.3093 = 0.2458
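    The other entries carry the same kind of TF-IDF relevance score after their titles. A minimal Python sketch of one term's weight, assuming Lucene's classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq + 1)), which reproduce the numbers above (the final document score additionally multiplies in the coord() factors):

      import math

      def term_weight(freq, doc_freq, max_docs, field_norm, query_norm):
          """One term's score contribution, per Lucene ClassicSimilarity:
          weight = (idf * queryNorm) * (tf * idf * fieldNorm)."""
          tf = math.sqrt(freq)
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          return (idf * query_norm) * (tf * idf * field_norm)

      # Reproduces weight(_text_:3a in 2514) from the breakdown above:
      print(term_weight(2.0, 24, 44218, 0.046875, 0.036484417))  # ~0.17384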
    
    Content
    Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. See also: http://www2002.org/CDROM/refereed/643/.
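    For orientation, a minimal sketch of the baseline hub/authority iteration that HITS-based algorithms such as this one refine; the dense adjacency matrix adj, with adj[i][j] = 1 when page i links to page j, is an illustrative assumption, not the authors' improved variant:

      import numpy as np

      def hits(adj, iterations=50):
          """Baseline HITS: a page's authority is the sum of the hub scores
          of the pages linking to it; its hub score is the sum of the
          authority scores it links to. L2-normalize after each pass."""
          hubs = np.ones(adj.shape[0])
          auths = np.ones(adj.shape[0])
          for _ in range(iterations):
              auths = adj.T @ hubs
              auths /= np.linalg.norm(auths)
              hubs = adj @ auths
              hubs /= np.linalg.norm(hubs)
          return hubs, auths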
  2. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.04
    
    Abstract
    For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine. Supplements: extensive lecture slides (in PDF and PPT format); solutions to selected end-of-chapter problems (instructors only); test collections for exercises; the Galago search engine.
  3. Schaat, S.: Von der automatisierten Manipulation zur Manipulation der Automatisierung (2019) 0.03
    
    Content
    "Wir kennen das bereits von Google, Facebook und Amazon: Unser Internet-Verhalten wird automatisch erfasst, damit uns angepasste Inhalte präsentiert werden können. Ob uns diese Inhalte gefallen oder nicht, melden wir direkt oder indirekt zurück (Kauf, Klick etc.). Durch diese Feedbackschleife lernen solche Systeme immer besser, was sie uns präsentieren müssen, um unsere Bedürfnisse anzusprechen, und wissen implizit dadurch auch immer besser, wie sie unsere Bedürfniserfüllung - zur Konsumtion - manipulieren können."
    Date
    19. 2.2019 17:22:00
  4. Chang, C.-H.; Hsu, C.-C.: Customizable multi-engine search tool with clustering (1997) 0.02
    
    Abstract
    Proposes a new approach to searching under a multi-engine search architecture to overcome the problems associated with relevance ranking. These include clustering of the search results and extraction of co-occurrence keywords which, with the user's feedback, refine the query during the search process. The system also constructs a concept space to gradually customize the search tool to the user's usage.
    Date
    1. 8.1996 22:08:06
  5. Williamson, N.J.: Knowledge structures and the Internet : progress and prospects (2006) 0.02
    
    Abstract
    This paper analyses the development of the knowledge structures provided as aids to users in searching the Internet. Specific focus is given to web directories, thesauri, and gateways and portals. The paper assumes that users need to be able to access information in two ways: to locate information on a subject directly in response to a search term, and to browse so as to familiarize themselves with a domain or to refine a request. Emphasis is on the browsing aspect. Background and development are addressed; structures are analyzed, problems are identified, and future directions discussed.
    Date
    27.12.2008 15:56:22
  6. Calishain, T.; Dornfest, R.; Adam, D.J.: Google Pocket Guide (2003) 0.01
    
    LCSH
    Google / Handbooks, manuals, etc.
    Web search engines / Handbooks, manuals, etc.
    Internet searching / Handbooks, manuals, etc.
    Subject
    Google / Handbooks, manuals, etc.
    Web search engines / Handbooks, manuals, etc.
    Internet searching / Handbooks, manuals, etc.
  7. Bryan, K.; Leise, T.: The $25,000,000,000 eigenvector : the linear algebra behind Google 0.01
    
    Abstract
    Google's success derives in large part from its PageRank algorithm, which ranks the importance of webpages according to an eigenvector of a weighted link matrix. Analysis of the PageRank formula provides a wonderful applied topic for a linear algebra course. Instructors may assign this article as a project to more advanced students, or spend one or two lectures presenting the material with assigned homework from the exercises. This material also complements the discussion of Markov chains in matrix algebra. Maple and Mathematica files supporting this material can be found at www.rose-hulman.edu/~bryan.
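    A minimal power-iteration sketch of the computation the article analyzes, assuming a column-stochastic link matrix A (A[i][j] = 1/outdegree(j) if page j links to page i) and the usual damping factor; this is the textbook formulation, not the article's own code:

      import numpy as np

      def pagerank(A, d=0.85, tol=1e-10):
          """Power iteration on the Google matrix G = d*A + (1-d)/n * J;
          its dominant eigenvector (eigenvalue 1) is the PageRank vector."""
          n = A.shape[0]
          r = np.full(n, 1.0 / n)
          while True:
              r_next = d * (A @ r) + (1.0 - d) / n
              if np.abs(r_next - r).sum() < tol:
                  return r_next
              r = r_next

      # Tiny web: pages 0 and 1 link to each other, page 2 links to page 0.
      A = np.array([[0.0, 1.0, 1.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0]])
      print(pagerank(A))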
  8. Su, L.T.: A comprehensive and systematic model of user evaluation of Web search engines : II. An evaluation by undergraduates (2003) 0.01
    
    Abstract
    This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with them. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non-performance (user-related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction, along with reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post-search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision PR1, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines, and also significant engine-by-discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and a significant engine-by-discipline interaction in user satisfaction with the search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both the quantitative analysis and the content analysis provide insight for system design and development, and useful feedback on strengths and weaknesses of search engines for system improvement.
    Date
    24. 1.2004 18:27:22
  9. Alqaraleh, S.; Ramadan, O.; Salamah, M.: Efficient watcher based web crawler design (2015) 0.01
    
    Abstract
    Purpose - The purpose of this paper is to design a watcher-based crawler (WBC) that can crawl both static and dynamic web sites and download only the updated and newly added web pages. Design/methodology/approach - In the proposed WBC, a watcher file, which can be uploaded to the web sites' servers, prepares a report that contains the addresses of the updated and newly added web pages. In addition, the WBC is split into five units, each responsible for a specific crawling process. Findings - Several experiments were conducted, and the proposed WBC was observed to increase the number of uniquely visited static and dynamic web sites compared with existing crawling techniques. In addition, the proposed watcher file not only allows crawlers to visit the updated and newly added pages, but also solves the crawlers' overlapping and communication problems. Originality/value - The proposed WBC performs all crawling processes in the sense that it detects all updated and newly added pages automatically, without explicit human intervention and without downloading the entire web site.
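    The paper does not publish the watcher file's format, so the client side can only be sketched; this assumes, purely for illustration, that the report is a JSON list of changed URLs served at a fixed path:

      import json
      from urllib.request import urlopen

      def crawl_updates(site_root):
          """Fetch the site's watcher report and download only the updated
          and newly added pages, instead of re-crawling the whole site."""
          with urlopen(f"{site_root}/watcher.json") as resp:  # hypothetical path
              changed_urls = json.load(resp)
          pages = {}
          for url in changed_urls:
              with urlopen(url) as page:
                  pages[url] = page.read()
          return pages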
    Date
    20. 1.2015 18:30:22
  10. Maurer, H.; Balke, T.; Kappe, F.; Kulathuramaiyer, N.; Weber, S.; Zaka, B.: Report on dangers and opportunities posed by large search engines, particularly Google (2007) 0.01
    
    Abstract
    The preliminary intended and approved list was: Section 1: To concentrate on Google as a virtual monopoly and Google's reported support of Wikipedia, and to find experimental evidence of this support or show that the reports are no more than rumours. Section 2: To address the copy-paste syndrome and the socio-cultural consequences associated with it. Section 3: To deal with plagiarism and IPR violations as two intertwined topics: how they affect various players (teachers and pupils in school; academia; corporations; governmental studies, etc.); to establish that not enough is done concerning these issues, partially due to plain ignorance; and to propose some ways to alleviate the problem. Section 4: To discuss the usual tools to fight plagiarism and their shortcomings. Section 5: To propose ways to overcome most of the above problems, following proposals by Maurer/Zaka; to give examples, but to make it clear that doing this more seriously requires a pilot project beyond this particular study. Section 6: To briefly analyze various views of plagiarism, as it is quite different in different fields (journalism, engineering, architecture, painting, ...), and to present a concept that avoids plagiarism from the very beginning. Section 7: To point out the many other dangers of Google or Google-like undertakings: opportunistic ranking, analysis of data as a window into the commercial future. Section 8: To outline the need for new international laws. Section 9: To mention the feeble European attempts to fight Google, despite Google's growing power. Section 10: To argue that there is no way to catch up with Google in a frontal attack.
  11. Averesch, D.: Googeln ohne Google : Mit alternativen Suchmaschinen gelingt ein neutraler Überblick (2010) 0.01
    
    Content
    Wer den großen Google-Konkurrenten erst einmal im Blindtest auf den Zahn fühlen will, kann das unter http://blindsearch.fejus.com tun. Die Suchergebnisse werden im gleichen Design in drei Spal- ten nebeneinander dargestellt. Erst, wenn der Nutzer sein Votum abgegeben hat, in welcher Spalte die seiner Meinung nach besten Ergebnisse stehen, lüftet die Seite das Geheimnis und zeigt die Logos von Bing, Yahoo und Google an. Der Verein Suma zieht das Fazit, dass "The Big Three" qualitativ gleichwertig seien. Am Tempo gibt es bei den großen Suchmaschinen nichts zu bemängeln. Alle drei spucken ihre Ergebnisse zügig aus. Google und Yahoo zeigen beim Tippen Suchvorschläge an und verfügen über einen Kinder- und Jugendschutzfilter. Letzterer lässt sich auch bei Bing einschalten. Auf die Booleschen Operatoren ("AND", "OR" etc.), die Suchbegriffe logisch verknüpfen, verstehen sich die meisten Suchmaschinen. Yahoo bietet zusätzlich die Suche mit haus- gemachten Abkürzungen an. Shortcuts für die fixe Suche nach Aktienkursen, Call-byCall-Vorwahlen, dem Wetter oder eine Taschenrechnerfunktion finden sich unter http://de.search.yahoo.com/info/shortcuts. Vergleichbar ist das Funktionsangebot von Google, das unter www.google.com/intl/de/help/features.html aufgelistet ist. Das Unternehmen bietet auch die Volltextsuche in Büchern, eine Suche in wissenschaftlichen Veröffentlichungen oder die Recherche nach öffentlich verfügbarem Programmiercodes an. Bei den großen Maschinen lassen sich in der erweiterten Suche auch Parameter wie Sprachraum, Region, Dateityp, Position des Suchbegriffs auf der Seite, Zeitraum der letzten Aktualisierung und Nutzungsrechte einbeziehen. Ganz so weit ist die deutsche Suche von Ask, die sich noch im Betastudium befindet, noch nicht (http://de.ask.com). Praktisch ist aber die Voran-sicht der Seiten in einem Popup-Fenster beim Mouseover über das Fernglas-Symbol vor den Suchbegriffen. Die globale Ask-Suche (www.ask.com) ist schon weiter und zeigt wie Google direkt auch Bilder zu den relevantesten Foto- und Video-Suchergebnissen an.
    Date
    3. 5.1997 8:44:22
  12. Smith, A.G.: Search features of digital libraries (2000) 0.01
    
    Abstract
    Traditional on-line search services such as Dialog, DataStar and Lexis provide a wide range of search features (Boolean and proximity operators, truncation, etc.). This paper discusses the use of these features for effective searching, and argues that these features are required regardless of advances in search engine technology. The literature on on-line searching is reviewed, identifying features that searchers find desirable for effective searching. A selective survey of current digital libraries available on the Web was undertaken, identifying which search features are present. The survey indicates that current digital libraries do not implement a wide range of search features: under half of the examples included controlled vocabulary, under half had proximity searching, only one enabled browsing of term indexes, and none of the digital libraries enabled searchers to refine an initial search. Suggestions are made for enhancing the search effectiveness of digital libraries, for instance by providing a full range of search operators, enabling browsing of search terms, enhancing records with controlled vocabulary, and enabling the refining of initial searches.
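    As a concrete instance of the Boolean operators the paper treats as essential, a minimal sketch of the classic operations on sorted docID posting lists (the list representation is an illustrative assumption):

      def boolean_and(a, b):
          """AND: intersect two ascending docID lists in one linear pass."""
          i, j, hits = 0, 0, []
          while i < len(a) and j < len(b):
              if a[i] == b[j]:
                  hits.append(a[i]); i += 1; j += 1
              elif a[i] < b[j]:
                  i += 1
              else:
                  j += 1
          return hits

      def boolean_or(a, b):
          """OR: union of the two docID lists, kept sorted."""
          return sorted(set(a) | set(b))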
  13. Sherman, C.: Google power : Unleash the full potential of Google (2005) 0.01
    
    LCSH
    Internet searching / Handbooks, manuals, etc
    Subject
    Internet searching / Handbooks, manuals, etc
  14. Großjohann, K.: Gathering-, Harvesting-, Suchmaschinen (1996) 0.01
    
    Date
    7. 2.1996 22:38:41
    Pages
    22 p.
  15. Höfer, W.: Detektive im Web (1999) 0.01
    
    Date
    22. 8.1999 20:22:06
  16. Rensman, J.: Blick ins Getriebe (1999) 0.01
    
    Date
    22. 8.1999 21:22:59
  17. Duchemin, P.-Y.: La recherche d'informations sur l'internet : répertoires et moteurs de recherche (1997) 0.01
    
    Abstract
    The Internet links computer networks worldwide through TCP/IP; in addition to electronic mail, bulletin board and newsgroup services, files can be downloaded using the standard protocol FTP. Services have evolved to identify and facilitate access to Internet resources, e.g. Telnet, Gopher, WAIS, etc. The WWW is the most developed, using hypertext links. Search engines such as AltaVista explore Web content and create catalogues of Web pages. Gives details of the most commonly used subject guides, research tools and search engines, including URLs and applications.
  18. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.01
    
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together, and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even-handed treatment of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material. Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
  19. Stock, M.; Stock, W.G.: Recherchieren im Internet (2004) 0.01
    
    Date
    27.11.2005 18:04:22
  20. Ozmutlu, S.; Cosar, G.C.: Analyzing the results of automatic new topic identification (2008) 0.01
    
    Abstract
    Purpose - Identification of topic changes within a user search session is a key issue in content analysis of search engine user queries. Recently, various studies have focused on new topic identification/session identification of search engine transaction logs, and several problems regarding the estimation of topic shifts and continuations were observed in these studies. This study aims to analyze the reasons for the problems that were encountered as a result of applying automatic new topic identification. Design/methodology/approach - Measures, such as cleaning the data of common words and analyzing the errors of automatic new topic identification, are applied to eliminate the problems in estimating topic shifts and continuations. Findings - The findings show that the resulting errors of automatic new topic identification have a pattern, and further research is required to improve the performance of automatic new topic identification. Originality/value - Improving the performance of automatic new topic identification would be valuable to search engine designers, so that they can develop new clustering and query recommendation algorithms, as well as custom-tailored graphical user interfaces for search engine users.

Languages

  • e 97
  • d 83
  • f 2
  • nl 1

Types

  • a 158
  • el 17
  • m 14
  • x 3
  • p 2
  • r 1