Search (72 results, page 1 of 4)

  • language_ss:"e"
  • theme_ss:"Suchmaschinen"
  • year_i:[2000 TO 2010}
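The three entries above are the active filter queries: language "e" (English), theme "Suchmaschinen" (search engines), and publication year in the range 2000 to 2010, where the closing curly brace makes the upper bound exclusive. The field suffixes (_ss, _i) and the scoring output below suggest a Lucene/Solr index; as a minimal sketch under that assumption, the same filters could be passed to a Solr select handler as filter queries (the endpoint URL below is hypothetical):

    import requests  # assumption: the catalogue is served by a Solr instance exposing /select

    SOLR_SELECT = "http://localhost:8983/solr/catalogue/select"  # hypothetical endpoint, not given on this page

    params = {
        "q": "*:*",
        # the three active filters listed above, sent as repeated fq parameters
        "fq": [
            'language_ss:"e"',
            'theme_ss:"Suchmaschinen"',
            "year_i:[2000 TO 2010}",  # inclusive lower bound, exclusive upper bound
        ],
        "rows": 20,   # this page lists 20 of the 72 hits
        "wt": "json",
    }
    response = requests.get(SOLR_SELECT, params=params)
    print(response.json()["response"]["numFound"])  # 72 for the query behind this page

The relevance figure after each hit is a Lucene ClassicSimilarity "explain" breakdown; a short sketch that reproduces the first of these scores follows the result list.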
  1. Hock, R.: Search engines (2009) 0.01
    0.006036883 = product of:
      0.024147533 = sum of:
        0.014315128 = product of:
          0.042945385 = sum of:
            0.042945385 = weight(_text_:problem in 3876) [ClassicSimilarity], result of:
              0.042945385 = score(doc=3876,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.3282676 = fieldWeight in 3876, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3876)
          0.33333334 = coord(1/3)
        0.009832405 = product of:
          0.029497212 = sum of:
            0.029497212 = weight(_text_:29 in 3876) [ClassicSimilarity], result of:
              0.029497212 = score(doc=3876,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.27205724 = fieldWeight in 3876, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3876)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    This entry provides an overview of Web search engines, looking at the definition, components, leading engines, searching capabilities, and types of engines. It examines the components that make up a search engine and briefly discusses the process involved in identifying content for the engines' databases and the indexing of that content. Typical search options are reviewed and the major Web search engines are identified and described. Also identified and described are various specialty search engines, such as those covering specific content types (video and images, for example) and engines that take significantly different approaches to the search problem, such as visualization engines and metasearch engines.
    Date
    27. 8.2011 14:29:48
  2. Bilal, D.: Children's use of the Yahooligans! Web search engine : III. Cognitive and physical behaviors on fully self-generated search tasks (2002) 0.01
    0.005155518 = product of:
      0.020622073 = sum of:
        0.012270111 = product of:
          0.03681033 = sum of:
            0.03681033 = weight(_text_:problem in 5228) [ClassicSimilarity], result of:
              0.03681033 = score(doc=5228,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.28137225 = fieldWeight in 5228, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5228)
          0.33333334 = coord(1/3)
        0.008351962 = product of:
          0.025055885 = sum of:
            0.025055885 = weight(_text_:22 in 5228) [ClassicSimilarity], result of:
              0.025055885 = score(doc=5228,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.23214069 = fieldWeight in 5228, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5228)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    Bilal, in this third part of her Yahooligans! study, looks at children's performance with self-generated search tasks, as compared to previously assigned search tasks, looking for differences in success, cognitive behavior, physical behavior, and task preference. Lotus ScreenCam was used to record interactions, and post-search interviews were used to record impressions. The subjects, the same 22 seventh-grade children as in the previous studies, generated topics of interest that were mediated with the researcher into more specific topics where necessary. Fifteen usable sessions form the basis of the study. Eleven children were successful in finding information, a rate of 73% compared to 69% on the assigned research questions and 50% on the assigned fact-finding questions. Eighty-seven percent began by using one or two keyword searches. Spelling was a problem. Successful children made fewer keyword searches, and the number of search moves averaged 5.5, as compared to 2.4 on the research-oriented task and 3.49 on the factual one. Backtracking and looping were common. The self-generated task was preferred by 47% of the subjects.
  3. Loia, V.; Pedrycz, W.; Senatore, S.; Sessa, M.I.: Web navigation support by means of proximity-driven assistant agents (2006) 0.00
    0.004296265 = product of:
      0.01718506 = sum of:
        0.010225092 = product of:
          0.030675275 = sum of:
            0.030675275 = weight(_text_:problem in 5283) [ClassicSimilarity], result of:
              0.030675275 = score(doc=5283,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.23447686 = fieldWeight in 5283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5283)
          0.33333334 = coord(1/3)
        0.0069599687 = product of:
          0.020879906 = sum of:
            0.020879906 = weight(_text_:22 in 5283) [ClassicSimilarity], result of:
              0.020879906 = score(doc=5283,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.19345059 = fieldWeight in 5283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5283)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    The explosive growth of the Web and the consequent exigency of the Web personalization domain have gained a key position in the direction of customization of the Web information to the needs of specific users, taking advantage of the knowledge acquired from the analysis of the user's navigational behavior (usage data) in correlation with other information collected in the Web context, namely, structure, content, and user profile data. This work presents an agent-based framework designed to help a user in achieving personalized navigation, by recommending related documents according to the user's responses in similar-pages searching mode. Our agent-based approach is grounded in the integration of different techniques and methodologies into a unique platform featuring user profiling, fuzzy multisets, proximity-oriented fuzzy clustering, and knowledge-based discovery technologies. Each of these methodologies serves to solve one facet of the general problem (discovering documents relevant to the user by searching the Web) and is treated by specialized agents that ultimately achieve the final functionality through cooperation and task distribution.
    Date
    22. 7.2006 16:59:13
  4. Herrera-Viedma, E.; Pasi, G.: Soft approaches to information retrieval and information access on the Web : an introduction to the special topic section (2006) 0.00
    0.003437012 = product of:
      0.013748048 = sum of:
        0.008180073 = product of:
          0.02454022 = sum of:
            0.02454022 = weight(_text_:problem in 5285) [ClassicSimilarity], result of:
              0.02454022 = score(doc=5285,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.1875815 = fieldWeight in 5285, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5285)
          0.33333334 = coord(1/3)
        0.005567975 = product of:
          0.016703924 = sum of:
            0.016703924 = weight(_text_:22 in 5285) [ClassicSimilarity], result of:
              0.016703924 = score(doc=5285,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.15476047 = fieldWeight in 5285, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5285)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    The World Wide Web is a popular and interactive medium used to collect, disseminate, and access an increasingly huge amount of information, which constitutes the mainstay of the so-called information and knowledge society. Because of its spectacular growth, related to both Web resources (pages, sites, and services) and number of users, the Web is nowadays the main information repository and provides some automatic systems for locating, accessing, and retrieving information. However, an open and crucial question remains: how to provide fast and effective retrieval of the information relevant to specific users' needs. This is a very hard and complex task, since it is pervaded with subjectivity, vagueness, and uncertainty. The expression soft computing refers to techniques and methodologies that work synergistically with the aim of providing flexible information processing tolerant of imprecision, vagueness, partial truth, and approximation. So, soft computing represents a good candidate to design effective systems for information access and retrieval on the Web. One of the most representative tools of soft computing is fuzzy set theory. This special topic section collects research articles witnessing some recent advances in improving the processes of information access and retrieval on the Web by using soft computing tools, and in particular, by using fuzzy sets and/or integrating them with other soft computing tools. In this introductory article, we first review the problem of Web retrieval and the concept of soft computing technology. We then briefly introduce the articles in this section and conclude by highlighting some future research directions that could benefit from the use of soft computing technologies.
    Date
    22. 7.2006 16:59:33
  5. Perez, E.: dtSearch: the little search engine that could (2004) 0.00
    0.0028092582 = product of:
      0.022474065 = sum of:
        0.022474065 = product of:
          0.067422196 = sum of:
            0.067422196 = weight(_text_:29 in 2340) [ClassicSimilarity], result of:
              0.067422196 = score(doc=2340,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.6218451 = fieldWeight in 2340, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=2340)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Source
    Online. 29(2005) no.1, S.28-
  6. MacLeod, R.: Promoting a subject gateway : a case study from EEVL (Edinburgh Engineering Virtual Library) (2000) 0.00
    0.0024607205 = product of:
      0.019685764 = sum of:
        0.019685764 = product of:
          0.059057288 = sum of:
            0.059057288 = weight(_text_:22 in 4872) [ClassicSimilarity], result of:
              0.059057288 = score(doc=4872,freq=4.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.54716086 = fieldWeight in 4872, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4872)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    22. 6.2002 19:40:22
  7. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.00
    0.002435989 = product of:
      0.019487912 = sum of:
        0.019487912 = product of:
          0.058463734 = sum of:
            0.058463734 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.058463734 = score(doc=3445,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    25. 8.2005 17:42:22
  8. Bawden, D.: Google and the universe of knowledge (2008) 0.00
    0.002435989 = product of:
      0.019487912 = sum of:
        0.019487912 = product of:
          0.058463734 = sum of:
            0.058463734 = weight(_text_:22 in 844) [ClassicSimilarity], result of:
              0.058463734 = score(doc=844,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.5416616 = fieldWeight in 844, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=844)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    7. 6.2008 16:22:20
  9. Sherman, C.: Reference resources on the Web (2000) 0.00
    0.0021069439 = product of:
      0.01685555 = sum of:
        0.01685555 = product of:
          0.05056665 = sum of:
            0.05056665 = weight(_text_:29 in 6869) [ClassicSimilarity], result of:
              0.05056665 = score(doc=6869,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.46638384 = fieldWeight in 6869, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6869)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    29. 3.2002 17:49:46
  10. Granum, G.; Barker, P.: ¬An EASIER way to search online engineering resources (2000) 0.00
    0.0020450184 = product of:
      0.016360147 = sum of:
        0.016360147 = product of:
          0.04908044 = sum of:
            0.04908044 = weight(_text_:problem in 4876) [ClassicSimilarity], result of:
              0.04908044 = score(doc=4876,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.375163 = fieldWeight in 4876, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4876)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    EEVL consists of several distinct resources, which exist as separate databases. This article describes the approach taken to tackle a particular problem that was identified through evaluation studies, namely, that searches of the EEVL catalogue too frequently matched no records. The solution described in this paper is a cross-search facility for three of the EEVL databases.
  11. Price, A.: Five new Danish subject gateways under development (2000) 0.00
    0.0017399922 = product of:
      0.013919937 = sum of:
        0.013919937 = product of:
          0.04175981 = sum of:
            0.04175981 = weight(_text_:22 in 4878) [ClassicSimilarity], result of:
              0.04175981 = score(doc=4878,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.38690117 = fieldWeight in 4878, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4878)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    22. 6.2002 19:41:31
  12. Ding, Y.; Chowdhury, G.; Foo, S.: Organising keywords in a Web search environment : a methodology based on co-word analysis (2000) 0.00
    0.0015337638 = product of:
      0.012270111 = sum of:
        0.012270111 = product of:
          0.03681033 = sum of:
            0.03681033 = weight(_text_:problem in 105) [ClassicSimilarity], result of:
              0.03681033 = score(doc=105,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.28137225 = fieldWeight in 105, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=105)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    The rapid development of the Internet and World Wide Web has caused some critical problems for information retrieval. Researchers have made several attempts to solve these problems. Thesauri and subject heading lists, as traditional information retrieval tools, have been criticised for their limited ability to tackle these newly emerging problems. This paper proposes an information retrieval tool generated by co-word analysis, comprising keyword clusters with relationships based on the co-occurrences of keywords in the literature. Such a tool can play the role of an associative thesaurus that can provide information about the keywords in a domain that might be useful for information searching and query expansion.
  13. Gorbunov, A.L.: Relevance of Web documents : ghosts consensus method (2002) 0.00
    0.0015337638 = product of:
      0.012270111 = sum of:
        0.012270111 = product of:
          0.03681033 = sum of:
            0.03681033 = weight(_text_:problem in 1005) [ClassicSimilarity], result of:
              0.03681033 = score(doc=1005,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.28137225 = fieldWeight in 1005, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1005)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    The dominant method currently used to improve the quality of Internet search systems is often called "digital democracy." Such an approach implies the utilization of the majority opinion of Internet users to determine the most relevant documents: for example, citation index usage for sorting of search results (google.com) or an enrichment of a query with terms that are asked frequently in relation with the query's theme. "Digital democracy" is an effective instrument in many cases, but it has an unavoidable shortcoming, which is a matter of principle: the average intellectual and cultural level of Internet users is very low; everyone knows what kind of information is dominant in Internet query statistics. Therefore, when one searches the Internet by means of "digital democracy" systems, one gets answers that reflect an underlying assumption that the user's mind potential is very low, and that his cultural interests are not demanding. Thus, it is more correct to use the term "digital ochlocracy" to refer to Internet search systems with "digital democracy." Based on the well-known mathematical mechanism of linear programming, we propose a method to solve the indicated problem.
  14. Oppenheim, C.; Morris, A.; McKnight, C.: ¬The evaluation of WWW search engines (2000) 0.00
    0.0015337638 = product of:
      0.012270111 = sum of:
        0.012270111 = product of:
          0.03681033 = sum of:
            0.03681033 = weight(_text_:problem in 4546) [ClassicSimilarity], result of:
              0.03681033 = score(doc=4546,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.28137225 = fieldWeight in 4546, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4546)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    The literature of the evaluation of Internet search engines is reviewed. Although there have been many studies, there has been little consistency in the way such studies have been carried out. This problem is exacerbated by the fact that recall is virtually impossible to calculate in the fast changing Internet environment, and therefore the traditional Cranfield type of evaluation is not usually possible. A variety of alternative evaluation methods has been suggested to overcome this difficulty. The authors recommend that a standardised set of tools is developed for the evaluation of web search engines so that, in future, comparisons can be made between search engines more effectively, and that variations in performance of any given search engine over time can be tracked. The paper itself does not provide such a standard set of tools, but it investigates the issues and makes preliminary recommendations of the types of tools needed
  15. Naing, M.-M.; Lim, E.-P.; Chiang, R.H.L.: Extracting link chains of relationship instances from a Web site (2006) 0.00
    0.0015337638 = product of:
      0.012270111 = sum of:
        0.012270111 = product of:
          0.03681033 = sum of:
            0.03681033 = weight(_text_:problem in 6111) [ClassicSimilarity], result of:
              0.03681033 = score(doc=6111,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.28137225 = fieldWeight in 6111, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6111)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    Web pages from a Web site can often be associated with concepts in an ontology, and pairs of Web pages also can be associated with relationships between concepts. With such associations, the Web site can be searched, browsed, or even reorganized based on the concept and relationship labels of its Web pages. In this article, we study the link chain extraction problem that is critical to the extraction of Web pages that are related. A link chain is an ordered list of anchor elements linking two Web pages related by some semantic relationship. We propose a link chain extraction method that derives extraction rules for identifying the anchor elements forming the link chains. We applied the proposed method to two well-structured Web sites and found that its performance in terms of precision and recall is good, even with a small number of training examples.
  16. Bar-Ilan, J.: Methods for measuring search engine performance over time (2002) 0.00
    0.0014046291 = product of:
      0.011237033 = sum of:
        0.011237033 = product of:
          0.033711098 = sum of:
            0.033711098 = weight(_text_:29 in 305) [ClassicSimilarity], result of:
              0.033711098 = score(doc=305,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.31092256 = fieldWeight in 305, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=305)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    23. 3.2002 9:50:29
  17. Anderson, R.: ¬The (uncertain) future of libraries in a Google world : sounding an alarm (2005) 0.00
    0.0014046291 = product of:
      0.011237033 = sum of:
        0.011237033 = product of:
          0.033711098 = sum of:
            0.033711098 = weight(_text_:29 in 308) [ClassicSimilarity], result of:
              0.033711098 = score(doc=308,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.31092256 = fieldWeight in 308, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=308)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Pages
    S.29-36
  18. Carroll, N.: Search engine optimization (2009) 0.00
    0.0014046291 = product of:
      0.011237033 = sum of:
        0.011237033 = product of:
          0.033711098 = sum of:
            0.033711098 = weight(_text_:29 in 3874) [ClassicSimilarity], result of:
              0.033711098 = score(doc=3874,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.31092256 = fieldWeight in 3874, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3874)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    27. 8.2011 14:29:37
  19. Gardner, T.; Iannella, R.: Architecture and software solutions (2000) 0.00
    0.0013919937 = product of:
      0.01113595 = sum of:
        0.01113595 = product of:
          0.03340785 = sum of:
            0.03340785 = weight(_text_:22 in 4867) [ClassicSimilarity], result of:
              0.03340785 = score(doc=4867,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.30952093 = fieldWeight in 4867, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4867)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    22. 6.2002 19:38:24
  20. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.00
    0.0013919937 = product of:
      0.01113595 = sum of:
        0.01113595 = product of:
          0.03340785 = sum of:
            0.03340785 = weight(_text_:22 in 4869) [ClassicSimilarity], result of:
              0.03340785 = score(doc=4869,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.30952093 = fieldWeight in 4869, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4869)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    22. 6.2002 19:39:23
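Each relevance figure in the list above is a Lucene ClassicSimilarity "explain" tree: for every matching query term the engine multiplies a queryWeight (idf × queryNorm) by a fieldWeight (tf × idf × fieldNorm), scales the product by a coord factor for its clause group, sums the contributions, and scales once more by the top-level coord. As a minimal sketch using only the numbers printed in the breakdown for result 1 (doc 3876), the following reproduces its score of 0.006036883:

    import math

    # constant copied from the explain tree of result 1 (doc 3876)
    QUERY_NORM = 0.030822188

    def term_score(freq, idf, field_norm):
        # queryWeight * fieldWeight, as in the ClassicSimilarity breakdown above
        tf = math.sqrt(freq)                  # 1.4142135 for freq = 2.0
        query_weight = idf * QUERY_NORM       # idf * queryNorm
        field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
        return query_weight * field_weight

    # _text_:problem and _text_:29 both match doc 3876 with freq = 2.0 and fieldNorm = 0.0546875;
    # each sits in a clause group where 1 of 3 sub-clauses matched, hence coord(1/3)
    problem = term_score(2.0, 4.244485, 0.0546875) * (1 / 3)
    term_29 = term_score(2.0, 3.5176873, 0.0546875) * (1 / 3)

    # 2 of the 8 top-level query clauses matched, hence coord(2/8) = 0.25
    total = (problem + term_29) * (2 / 8)
    print(round(total, 9))  # ~0.006036883, the value shown for result 1

The same arithmetic, with each document's own freq, idf, fieldNorm, and coord values, accounts for every other score on the page.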

Types

  • a 63
  • m 6
  • el 4
  • s 1
  • x 1