Search (229 results, page 1 of 12)

  • theme_ss:"Suchmaschinen"
  1. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.15
    0.15007444 = product of:
      0.3751861 = sum of:
        0.07156433 = product of:
          0.214693 = sum of:
            0.214693 = weight(_text_:3a in 2514) [ClassicSimilarity], result of:
              0.214693 = score(doc=2514,freq=2.0), product of:
                0.38200375 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04505818 = queryNorm
                0.56201804 = fieldWeight in 2514, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2514)
          0.33333334 = coord(1/3)
        0.30362174 = weight(_text_:2f in 2514) [ClassicSimilarity], result of:
          0.30362174 = score(doc=2514,freq=4.0), product of:
            0.38200375 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04505818 = queryNorm
            0.7948135 = fieldWeight in 2514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. See also: http://www2002.org/CDROM/refereed/643/.
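    The score breakdown above is Lucene ClassicSimilarity "explain" output (the query tokens "3a" and "2f" are apparently percent-encoding fragments of the cited URL). A minimal Python sketch, using only the constants shown in the tree (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, coord = matching clauses / total clauses), reproduces the displayed 0.15:

      import math

      def field_weight(freq, idf, field_norm):
          # fieldWeight = tf(freq) * idf * fieldNorm, with tf = sqrt(freq)
          return math.sqrt(freq) * idf * field_norm

      idf = 8.478011            # idf(docFreq=24, maxDocs=44218)
      query_norm = 0.04505818
      field_norm = 0.046875     # fieldNorm(doc=2514)
      query_weight = idf * query_norm                 # 0.38200375

      w_3a = query_weight * field_weight(2.0, idf, field_norm) * (1 / 3)  # coord(1/3)
      w_2f = query_weight * field_weight(4.0, idf, field_norm)
      score = (w_3a + w_2f) * (2 / 5)                 # top-level coord(2/5)
      print(f"{score:.8f}")   # ~0.15007444, matching the displayed score up to float rounding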
  2. Wiley, D.L.: Beyond information retrieval : ways to provide content in context (1998) 0.05
    0.0521564 = product of:
      0.130391 = sum of:
        0.052814763 = weight(_text_:bibliographic in 3647) [ClassicSimilarity], result of:
          0.052814763 = score(doc=3647,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.30108726 = fieldWeight in 3647, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3647)
        0.077576235 = sum of:
          0.0348429 = weight(_text_:data in 3647) [ClassicSimilarity], result of:
            0.0348429 = score(doc=3647,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.24455236 = fieldWeight in 3647, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3647)
          0.04273333 = weight(_text_:22 in 3647) [ClassicSimilarity], result of:
            0.04273333 = score(doc=3647,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.2708308 = fieldWeight in 3647, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3647)
      0.4 = coord(2/5)
    
    Abstract
    The days of the traditional abstracting and indexing services are waning, as abstracts and bibliographic data become commodities. However, there are tremendous opportunities for those organizations willing to look beyond the status quo to the new possibilities enabled by the latest wave of advanced technologies. Those who own content need to focus on the delivery mechanisms and new markets that technology can provide. Features like automatic extraction of key concepts or names, collaborative filtering to help with trend analysis, and visualization techniques can take information past the retrieval stage and into the management area.
    Source
    Database. 21(1998) no.4, S.18-22
  3. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.04
    0.03696416 = product of:
      0.09241039 = sum of:
        0.075167626 = weight(_text_:readable in 4709) [ClassicSimilarity], result of:
          0.075167626 = score(doc=4709,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.2715258 = fieldWeight in 4709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.03125 = fieldNorm(doc=4709)
        0.017242765 = product of:
          0.03448553 = sum of:
            0.03448553 = weight(_text_:data in 4709) [ClassicSimilarity], result of:
              0.03448553 = score(doc=4709,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.24204408 = fieldWeight in 4709, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4709)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  4. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.03
    0.025045047 = product of:
      0.062612616 = sum of:
        0.03772483 = weight(_text_:bibliographic in 3091) [ClassicSimilarity], result of:
          0.03772483 = score(doc=3091,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.21506234 = fieldWeight in 3091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
        0.024887787 = product of:
          0.049775574 = sum of:
            0.049775574 = weight(_text_:data in 3091) [ClassicSimilarity], result of:
              0.049775574 = score(doc=3091,freq=8.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.34936053 = fieldWeight in 3091, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3091)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that were searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.
  5. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.02
    0.023347527 = product of:
      0.116737634 = sum of:
        0.116737634 = sum of:
          0.08621383 = weight(_text_:data in 1605) [ClassicSimilarity], result of:
            0.08621383 = score(doc=1605,freq=24.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.60511017 = fieldWeight in 1605, product of:
                4.8989797 = tf(freq=24.0), with freq of:
                  24.0 = termFreq=24.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1605)
          0.030523809 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
            0.030523809 = score(doc=1605,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.19345059 = fieldWeight in 1605, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1605)
      0.2 = coord(1/5)
    
    Abstract
    Numerous studies have explored the possibility of uncovering information from web search queries but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documents and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability. Baidu Index was able to provide more search volume data than Google Trends did. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China. Google's user bases in many countries are smaller than that in China, so the search volume data related to those countries could result in the same issue as that related to China.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
    Theme
    Data Mining
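    The paper's central comparison, whether two query-volume series move together, boils down to a correlation over aligned time series. A minimal sketch with invented numbers, assuming scipy is available (the study's actual data came from Google Trends and Baidu Index):

      from scipy.stats import pearsonr

      google_trends = [55, 60, 58, 72, 80, 77, 90, 95]   # illustrative weekly volumes
      baidu_index   = [50, 57, 55, 70, 76, 75, 88, 96]

      r, p = pearsonr(google_trends, baidu_index)
      print(f"Pearson r = {r:.3f} (p = {p:.4f})")        # strongly correlated series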
  6. Hock, R.E.: How to do field searching in Web search engines : a field trip (1998) 0.02
    0.02177759 = product of:
      0.108887956 = sum of:
        0.108887956 = sum of:
          0.03982046 = weight(_text_:data in 3601) [ClassicSimilarity], result of:
            0.03982046 = score(doc=3601,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.2794884 = fieldWeight in 3601, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0625 = fieldNorm(doc=3601)
          0.06906749 = weight(_text_:22 in 3601) [ClassicSimilarity], result of:
            0.06906749 = score(doc=3601,freq=4.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.4377287 = fieldWeight in 3601, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3601)
      0.2 = coord(1/5)
    
    Abstract
    Explains how 5 Internet search engines (AltaVista, HotBot, InfoSeek, Lycos, and Yahoo) handle field searching. Includes a chart which identifies where on a search engine's page a particular field is searched and the prefix syntax used, and gives examples. Details the individual fields that can be searched: date, title, URL, images, audio/video and other page content, links, and page depth.
    Source
    Online. 22(1998) no.3, S.18-22
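    The prefix syntax charted in the article follows the common "field:value" pattern. A hypothetical helper as a sketch; the exact prefixes vary by engine and are not reproduced from the chart:

      def field_query(terms, **fields):
          """Build a query like: title:"digital libraries" url:edu stamps"""
          parts = [f'{name}:"{value}"' if " " in value else f"{name}:{value}"
                   for name, value in fields.items()]
          return " ".join(parts + list(terms))

      print(field_query(["stamps"], title="digital libraries", url="edu"))
      # title:"digital libraries" url:edu stamps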
  7. Libraries and Google (2005) 0.02
    0.021069499 = product of:
      0.052673746 = sum of:
        0.037583813 = weight(_text_:readable in 1973) [ClassicSimilarity], result of:
          0.037583813 = score(doc=1973,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.1357629 = fieldWeight in 1973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.015625 = fieldNorm(doc=1973)
        0.015089932 = weight(_text_:bibliographic in 1973) [ClassicSimilarity], result of:
          0.015089932 = score(doc=1973,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.08602493 = fieldWeight in 1973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.015625 = fieldNorm(doc=1973)
      0.4 = coord(2/5)
    
    Footnote
    Further review in: JASIST 59(2008) no.9, S.1531-1533 (J. Satyanesan): "Libraries and Google is an interesting and enlightening compilation of 18 articles on Google and its impact on libraries. The topic is very current, debatable, and thought provoking. Google has profoundly empowered individuals and transformed access to information, and librarians are very much concerned about its popularity and visibility. In this book, the leading authorities discuss the usefulness of Google, its influence and potential menace to libraries, and its implications for libraries and scholarly communication. They offer practical suggestions to cope with the changing situation. The articles are written from different perspectives and express all shades of opinion, both hopeful and fearful. One can discern varied moods (apprehension, resignation, encouragement, and motivation) on the part of the librarians. This is an important book providing a wealth of information for the 21st-century librarian. There is a section called "Indexing, Abstracting & Website/Internet Coverage," which lists major indexing and abstracting services and other tools for bibliographic access. The format of the articles is uniform, with an introduction and key words; with the exception of two articles, all have summaries and conclusions. References and notes of varying lengths are included in each article. This book has been copublished simultaneously as Internet Reference Quarterly, 10(3/4), 2005. Although single articles have been written on Google and libraries, this is the first book-length treatment of the topic.
    ... This book is written by library professionals and aimed at librarians in particular, but it will be useful to others who may be interested in knowing what libraries are up to in the age of Google. It is intended for library science educators and students, library administrators, publishers, and university presses. It is well organized, well researched, and easily readable. Article titles are descriptive, allowing readers to find what they need by scanning the table of contents or by consulting the index. The only flaw in this book is the lack of summaries or conclusions in a few articles. The book is in paperback and has 240 pages. This book is a significant contribution and I highly recommend it."
  8. Joint, N.: ¬The one-stop shop search engine : a transformational library technology? ANTAEUS (2010) 0.02
    0.015089932 = product of:
      0.07544966 = sum of:
        0.07544966 = weight(_text_:bibliographic in 4201) [ClassicSimilarity], result of:
          0.07544966 = score(doc=4201,freq=8.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.43012467 = fieldWeight in 4201, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4201)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The purpose of this paper is to form one of a series which will give an overview of so-called "transformational" areas of digital library technology. The aim will be to assess how much real transformation these applications are bringing about, in terms of creating genuine user benefit and also changing everyday library practice. Design/methodology/approach - An overview of the present state of development of the one-stop shop library search engine, with particular reference to its relationship with the underlying bibliographic databases to which it provides a simplified single interface. Findings - The paper finds that the success of federated searching has proved valuable but limited to date in creating a one-stop shop search engine to rival Google Scholar; but the persistent value of the bibliographic databases sitting underneath a federated search system means that a harvesting search engine could well answer the need for a true one-stop search engine for academic and scholarly information. Research limitations/implications - This paper is based on the hypothesis that Google's success in providing such an apparently high degree of access to electronic journal services is not what it seems, and that it does not render library discovery tools obsolete. It argues that Google has not diminished the pre-eminent role of library bibliographic databases in mediating access to e-journal text, although this hypothesis needs further research to validate or disprove it. Practical implications - The paper affirms the value of bibliographic databases to practitioner librarians and the potential of single interface discovery tools in library practice. Originality/value - The paper uses statistics from US LIS sources to shed light on UK discovery tool issues.
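    The architectural contrast the paper draws can be pictured in a few lines: federated search fans a query out to live sources and merges at query time, while a harvesting engine searches one pre-built central index. A toy sketch with invented sources, not a description of any real system:

      def federated_search(query, sources):
          results = []
          for source in sources:             # one live round trip per source
              results.extend(source(query))  # always current, but slow to merge
          return sorted(set(results))

      def harvested_search(query, central_index):
          # single lookup against records harvested ahead of time (e.g. via OAI-PMH)
          return sorted(central_index.get(query, []))

      pubmed = lambda q: [f"pubmed:{q}-1", f"pubmed:{q}-2"]   # invented sources
      jstor  = lambda q: [f"jstor:{q}-9"]
      print(federated_search("hits", [pubmed, jstor]))

      index = {"hits": ["pubmed:hits-1", "jstor:hits-9"]}     # built by a harvester
      print(harvested_search("hits", index))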
  9. Lewandowski, D.; Sünkler, S.: What does Google recommend when you want to compare insurance offerings? (2019) 0.01
    0.014726144 = product of:
      0.07363072 = sum of:
        0.07363072 = sum of:
          0.043106914 = weight(_text_:data in 5288) [ClassicSimilarity], result of:
            0.043106914 = score(doc=5288,freq=6.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.30255508 = fieldWeight in 5288, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5288)
          0.030523809 = weight(_text_:22 in 5288) [ClassicSimilarity], result of:
            0.030523809 = score(doc=5288,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.19345059 = fieldWeight in 5288, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5288)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The purpose of this paper is to describe a new method to improve the analysis of search engine results by considering the provider level as well as the domain level. This approach is tested by conducting a study using queries on the topic of insurance comparisons. Design/methodology/approach - The authors conducted an empirical study that analyses the results of search queries aimed at comparing insurance companies. The authors used a self-developed software system that automatically queries commercial search engines and automatically extracts the content of the returned result pages for further data analysis. The data analysis was carried out using the KNIME Analytics Platform. Findings - Google's top search results are served by only a few providers that frequently appear in these results. The authors show that some providers operate several domains on the same topic and that these domains appear for the same queries in the result lists. Research limitations/implications - The authors demonstrate the feasibility of this approach and draw conclusions for further investigations from the empirical study. However, the study is a limited use case based on a limited number of search queries. Originality/value - The proposed method allows large-scale analysis of the composition of the top results from commercial search engines. It allows using valid empirical data to determine what users actually see on the search engine result pages.
    Date
    20. 1.2015 18:30:22
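    The provider-level step the paper describes amounts to collapsing result URLs to their hosts and counting repeat appearances. A minimal sketch with invented URLs (the study used a self-developed crawler and the KNIME Analytics Platform, not this code):

      from collections import Counter
      from urllib.parse import urlparse

      serp_urls = [                                  # invented result URLs
          "https://www.check24.de/versicherungen/",
          "https://www.check24.de/kfz-versicherung/",
          "https://www.verivox.de/versicherungen/",
          "https://www.check24.de/haftpflicht/",
      ]

      providers = Counter(urlparse(u).netloc.removeprefix("www.")   # Python 3.9+
                          for u in serp_urls)
      print(providers.most_common())   # [('check24.de', 3), ('verivox.de', 1)]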
  10. Fischer, T.; Neuroth, H.: SSG-FI - special subject gateways to high quality Internet resources for scientific users (2000) 0.01
    0.0132987825 = product of:
      0.06649391 = sum of:
        0.06649391 = sum of:
          0.029865343 = weight(_text_:data in 4873) [ClassicSimilarity], result of:
            0.029865343 = score(doc=4873,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.2096163 = fieldWeight in 4873, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=4873)
          0.036628567 = weight(_text_:22 in 4873) [ClassicSimilarity], result of:
            0.036628567 = score(doc=4873,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.23214069 = fieldWeight in 4873, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4873)
      0.2 = coord(1/5)
    
    Abstract
    Project SSG-FI at SUB Göttingen provides special subject gateways to international high-quality Internet resources for scientific users. Internet sites are selected by subject specialists and described using an extension of qualified Dublin Core metadata. A basic evaluation is added. These descriptions are freely available and can be searched and browsed. There are now subject gateways for three subject areas: earth sciences (GeoGuide); mathematics (MathGuide); and Anglo-American culture (split into HistoryGuide and AnglistikGuide). Together they receive about 3,300 'hard' requests per day, thus reaching over 1 million requests per year. The SSG-FI project behind these guides is open to collaboration. Institutions and private persons wishing to contribute can notify the SSG-FI team or send full data sets. Regular contributors can request registration with the project to access the database via the Internet and create and edit records.
    Date
    22. 6.2002 19:40:42
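    One way to picture the project's extended qualified Dublin Core descriptions is as a flat element-value record. A sketch only; the element names beyond the DC core set are assumptions, not the actual SSG-FI schema:

      record = {
          "dc:title": "GeoGuide",
          "dc:subject": "earth sciences",
          "dc:language": "en",
          "dcq:audience": "scientific users",     # qualified-DC refinement
          "ssgfi:evaluation": "basic",            # hypothetical project extension
      }
      for element, value in record.items():
          print(f"{element} = {value}")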
  11. Loia, V.; Pedrycz, W.; Senatore, S.; Sessa, M.I.: Web navigation support by means of proximity-driven assistant agents (2006) 0.01
    0.013144091 = product of:
      0.065720454 = sum of:
        0.065720454 = sum of:
          0.035196647 = weight(_text_:data in 5283) [ClassicSimilarity], result of:
            0.035196647 = score(doc=5283,freq=4.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.24703519 = fieldWeight in 5283, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5283)
          0.030523809 = weight(_text_:22 in 5283) [ClassicSimilarity], result of:
            0.030523809 = score(doc=5283,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.19345059 = fieldWeight in 5283, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5283)
      0.2 = coord(1/5)
    
    Abstract
    The explosive growth of the Web and the consequent exigency of the Web personalization domain have gained a key position in the direction of customization of the Web information to the needs of specific users, taking advantage of the knowledge acquired from the analysis of the user's navigational behavior (usage data) in correlation with other information collected in the Web context, namely, structure, content, and user profile data. This work presents an agent-based framework designed to help a user in achieving personalized navigation, by recommending related documents according to the user's responses in similar-pages searching mode. Our agent-based approach is grounded in the integration of different techniques and methodologies into a unique platform featuring user profiling, fuzzy multisets, proximity-oriented fuzzy clustering, and knowledge-based discovery technologies. Each of these methodologies serves to solve one facet of the general problem (discovering documents relevant to the user by searching the Web) and is treated by specialized agents that ultimately achieve the final functionality through cooperation and task distribution.
    Date
    22. 7.2006 16:59:13
  12. Morgan, E.L.: Creating user-friendly electronic information systems (1997) 0.01
    0.012071946 = product of:
      0.060359728 = sum of:
        0.060359728 = weight(_text_:bibliographic in 1829) [ClassicSimilarity], result of:
          0.060359728 = score(doc=1829,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.34409973 = fieldWeight in 1829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=1829)
      0.2 = coord(1/5)
    
    Abstract
    The effectiveness of an information system is related to its readability, browsability, searchability, and interactive assistance. Interactive assistance provides customized help for particular users in particular situations. It can be proactive or restrictive. Systems have been developed for reference work and CD-ROM-based bibliographic indexes. Prototype systems for the Internet include: Ask Alcuin, meta-search engines, and WebArcher.
  13. Jascó, P.: Northern Light (1998) 0.01
    0.012071946 = product of:
      0.060359728 = sum of:
        0.060359728 = weight(_text_:bibliographic in 3310) [ClassicSimilarity], result of:
          0.060359728 = score(doc=3310,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.34409973 = fieldWeight in 3310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=3310)
      0.2 = coord(1/5)
    
    Abstract
    Northern Light is part WWW search engine and part full-text database. The latter is called Special Collection and consists of full-text articles from 1,800 journals, newswires, and other resources. Searching, bibliographic information, and summaries are free, but prices per article range from $1 to $4, or a monthly subscription provides 50 documents from an 880-journal subset. Highlights weaknesses in the software.
  14. Internet searching and indexing : the subject approach (2000) 0.01
    0.012071946 = product of:
      0.060359728 = sum of:
        0.060359728 = weight(_text_:bibliographic in 1468) [ClassicSimilarity], result of:
          0.060359728 = score(doc=1468,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.34409973 = fieldWeight in 1468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=1468)
      0.2 = coord(1/5)
    
    Footnote
    Review in: International cataloguing and bibliographic control 30(2001) no.3, S.59 (I.C. McIlwaine)
  15. Su, L.T.: ¬A comprehensive and systematic model of user evaluation of Web search engines : II. An evaluation by undergraduates (2003) 0.01
    0.01108232 = product of:
      0.055411596 = sum of:
        0.055411596 = sum of:
          0.024887787 = weight(_text_:data in 2117) [ClassicSimilarity], result of:
            0.024887787 = score(doc=2117,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.17468026 = fieldWeight in 2117, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2117)
          0.030523809 = weight(_text_:22 in 2117) [ClassicSimilarity], result of:
            0.030523809 = score(doc=2117,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.19345059 = fieldWeight in 2117, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2117)
      0.2 = coord(1/5)
    
    Abstract
    This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with them. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non-performance (user-related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction, along with reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post-search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision (PR1), relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines, and also significant engine-by-discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and a significant engine-by-discipline interaction in user satisfaction with the search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both quantitative analysis and content analysis provide insight for system design and development, and useful feedback on strengths and weaknesses of search engines for system improvement.
    Date
    24. 1.2004 18:27:22
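    Two of the measures named above can be stated compactly under common definitions; the paper's exact operationalization of PR1 and relative recall may differ. Precision is the share of retrieved items judged relevant, and relative recall divides an engine's relevant hits by the pooled relevant hits of all engines:

      def precision(retrieved, relevant):
          return len(retrieved & relevant) / len(retrieved)

      def relative_recall(engine_relevant, pooled_relevant):
          return len(engine_relevant) / len(pooled_relevant)

      retrieved = {"d1", "d2", "d3", "d4"}       # one engine's result set
      relevant  = {"d1", "d3", "d7"}             # user's relevance judgements
      pool      = {"d1", "d3", "d7", "d9"}       # relevant hits across all engines

      print(precision(retrieved, relevant))               # 0.5
      print(relative_recall(retrieved & relevant, pool))  # 0.5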
  16. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for linkbased ranking algorithms (2006) 0.01
    0.01108232 = product of:
      0.055411596 = sum of:
        0.055411596 = sum of:
          0.024887787 = weight(_text_:data in 2565) [ClassicSimilarity], result of:
            0.024887787 = score(doc=2565,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.17468026 = fieldWeight in 2565, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2565)
          0.030523809 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
            0.030523809 = score(doc=2565,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.19345059 = fieldWeight in 2565, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2565)
      0.2 = coord(1/5)
    
    Abstract
    This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact on rank quality and on convergence speed. Even though our results suggest that PageRank can be approximated with other simpler forms of rankings that may be computed more efficiently, our focus is of more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
    Date
    16. 1.2016 10:22:28
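    The family of rankings the paper studies can be written as rank = sum over t of damping(t) * p0 * P^t, with P the row-stochastic link matrix; exponential decay recovers PageRank. A truncated numpy sketch on an invented three-page graph, with truncation length and normalizations chosen here for illustration:

      import numpy as np

      P = np.array([            # row-stochastic toy web graph: page i links to page j
          [0.0, 0.5, 0.5],
          [1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
      ])

      def functional_rank(P, damping, t_max=50):
          walk = np.full(P.shape[0], 1.0 / P.shape[0])   # p0 @ P^0, uniform start
          rank = np.zeros_like(walk)
          for t in range(t_max + 1):
              rank += damping(t) * walk
              walk = walk @ P                            # advance one step
          return rank

      a, L = 0.85, 20
      rankings = {
          "exponential (PageRank)": functional_rank(P, lambda t: (1 - a) * a**t),
          "linear":     functional_rank(P, lambda t: max(L - t, 0) * 2 / (L * (L + 1))),
          "hyperbolic": functional_rank(P, lambda t: 1 / ((t + 1) * (t + 2))),
      }
      for name, r in rankings.items():
          print(f"{name:22s}", np.round(r, 4))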
  17. Großjohann, K.: Gathering-, Harvesting-, Suchmaschinen (1996) 0.01
    0.0103601245 = product of:
      0.05180062 = sum of:
        0.05180062 = product of:
          0.10360124 = sum of:
            0.10360124 = weight(_text_:22 in 3227) [ClassicSimilarity], result of:
              0.10360124 = score(doc=3227,freq=4.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.6565931 = fieldWeight in 3227, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3227)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    7. 2.1996 22:38:41
    Pages
    22 S
  18. Höfer, W.: Detektive im Web (1999) 0.01
    0.0103601245 = product of:
      0.05180062 = sum of:
        0.05180062 = product of:
          0.10360124 = sum of:
            0.10360124 = weight(_text_:22 in 4007) [ClassicSimilarity], result of:
              0.10360124 = score(doc=4007,freq=4.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.6565931 = fieldWeight in 4007, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4007)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 8.1999 20:22:06
  19. Rensman, J.: Blick ins Getriebe (1999) 0.01
    0.0103601245 = product of:
      0.05180062 = sum of:
        0.05180062 = product of:
          0.10360124 = sum of:
            0.10360124 = weight(_text_:22 in 4009) [ClassicSimilarity], result of:
              0.10360124 = score(doc=4009,freq=4.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.6565931 = fieldWeight in 4009, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4009)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 8.1999 21:22:59
  20. Stock, M.; Stock, W.G.: Recherchieren im Internet (2004) 0.01
    0.009767618 = product of:
      0.04883809 = sum of:
        0.04883809 = product of:
          0.09767618 = sum of:
            0.09767618 = weight(_text_:22 in 4686) [ClassicSimilarity], result of:
              0.09767618 = score(doc=4686,freq=2.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.61904186 = fieldWeight in 4686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4686)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    27.11.2005 18:04:22

Languages

  • e 137
  • d 87
  • nl 2
  • f 1
  • sp 1

Types

  • a 196
  • el 23
  • m 15
  • s 3
  • x 3
  • p 2
  • r 1