Search (44 results, page 1 of 3)

  • theme_ss:"Retrievalalgorithmen"
  1. Joss, M.W.; Wszola, S.: The engines that can : text search and retrieval software, their strategies, and vendors (1996) 0.07
    0.06538387 = product of:
      0.13076773 = sum of:
        0.051176272 = weight(_text_:reference in 5123) [ClassicSimilarity], result of:
          0.051176272 = score(doc=5123,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.2696973 = fieldWeight in 5123, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=5123)
        0.07959145 = sum of:
          0.041675847 = weight(_text_:services in 5123) [ClassicSimilarity], result of:
            0.041675847 = score(doc=5123,freq=2.0), product of:
              0.1712379 = queryWeight, product of:
                3.6713707 = idf(docFreq=3057, maxDocs=44218)
                0.04664141 = queryNorm
              0.2433798 = fieldWeight in 5123, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6713707 = idf(docFreq=3057, maxDocs=44218)
                0.046875 = fieldNorm(doc=5123)
          0.037915602 = weight(_text_:22 in 5123) [ClassicSimilarity], result of:
            0.037915602 = score(doc=5123,freq=2.0), product of:
              0.16333027 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04664141 = queryNorm
              0.23214069 = fieldWeight in 5123, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5123)
      0.5 = coord(2/4)
    
    Abstract
    Traces the development of text searching and retrieval software designed to cope with the increasing demands made by the storage and handling of large amounts of data, from CD-ROM to multi-gigabyte storage media and online information services, with particular reference to the need to handle graphics as well as conventional ASCII text. Includes details of: Boolean searching; fuzzy searching and matching; relevance ranking; proximity searching; and improved strategies for text searching in very large databases. Concludes that the best searching tools for CD-ROM publishers are those optimized for searching and retrieval on CD-ROM: CD-ROM drives have markedly slower random seek times than hard discs, so the software most appropriate to the medium is that which can arrange the indexes and text on the CD-ROM to avoid continuous random-access seeking. Lists and reviews a selection of software packages designed to achieve the sort of results required for rapid CD-ROM searching.
    Date
    12. 9.1996 13:56:22
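    Scoring note: the score breakdown above (and in the entries that follow) is Lucene's ClassicSimilarity "explain" output. As a minimal sketch of the arithmetic, assuming only Lucene's documented ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) and the constants copied from the explain tree above, the contribution of the term 'reference' can be reproduced in Python:

    import math

    # Constants copied from the explain tree for doc 5123 above.
    query_norm = 0.04664141
    field_norm = 0.046875       # length norm of the matched field

    def idf(doc_freq, max_docs):
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def tf(freq):
        # ClassicSimilarity: tf(t in d) = sqrt(freq)
        return math.sqrt(freq)

    term_idf = idf(2055, 44218)                      # -> 4.0683694
    query_weight = term_idf * query_norm             # -> 0.18975449
    field_weight = tf(2.0) * term_idf * field_norm   # -> 0.2696973
    print(query_weight * field_weight)               # -> 0.051176272

    The entry total (0.06538387) is the sum of such per-term scores multiplied by the coordination factor shown in the tree, here coord(2/4) = 0.5.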
  2. Witschel, H.F.: Global term weights in distributed environments (2008) 0.05
    0.053798858 = product of:
      0.107597716 = sum of:
        0.088639915 = weight(_text_:reference in 2096) [ClassicSimilarity], result of:
          0.088639915 = score(doc=2096,freq=6.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.4671295 = fieldWeight in 2096, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=2096)
        0.018957801 = product of:
          0.037915602 = sum of:
            0.037915602 = weight(_text_:22 in 2096) [ClassicSimilarity], result of:
              0.037915602 = score(doc=2096,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.23214069 = fieldWeight in 2096, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2096)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper examines the estimation of global term weights (such as IDF) in information retrieval scenarios where a global view on the collection is not available. In particular, two options are compared on standard IR test collections: sampling documents, and using a reference corpus independent of the target retrieval collection. In addition, the possibility of pruning term lists based on frequency is evaluated. The results show that very good retrieval performance can be reached when just the most frequent terms of a collection - an "extended stop word list" - are known and all terms not in that list are treated equally. However, the list cannot always be fully estimated from a general-purpose reference corpus; some "domain-specific stop words" need to be added. A good solution is to mix estimates from small samples of the target retrieval collection with estimates derived from a reference corpus.
    Date
    1. 8.2008 9:44:22
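    As a hypothetical sketch of the mixing idea in the abstract above, assuming documents are represented as token sets and using an illustrative linear mixing weight lam (the paper's exact estimator is not reproduced here):

    import math

    def idf_estimate(term, docs):
        # docs: list of token sets; smoothed IDF from document frequencies
        df = sum(1 for d in docs if term in d)
        return math.log((len(docs) + 1) / (df + 1))

    def mixed_idf(term, sample_docs, reference_docs, lam=0.5):
        # linear mixture of sample-based and reference-based estimates
        return (lam * idf_estimate(term, sample_docs)
                + (1 - lam) * idf_estimate(term, reference_docs))

    sample = [{"global", "term", "weights"}, {"retrieval", "term"}]
    reference = [{"the", "term"}, {"reference", "corpus"}, {"stop", "words"}]
    print(mixed_idf("term", sample, reference))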
  3. Bauckhage, C.: Marginalizing over the PageRank damping factor (2014) 0.02
    0.02132345 = product of:
      0.0852938 = sum of:
        0.0852938 = weight(_text_:reference in 928) [ClassicSimilarity], result of:
          0.0852938 = score(doc=928,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.44949555 = fieldWeight in 928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.078125 = fieldNorm(doc=928)
      0.25 = coord(1/4)
    
    Abstract
    In this note, we show how to marginalize over the damping parameter of the PageRank equation so as to obtain a parameter-free version known as TotalRank. Our discussion is meant as a reference and intended to provide a guided tour towards an interesting result that has applications in information retrieval and classification.
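    A numerical sketch of the idea, assuming a column-stochastic link matrix and approximating the marginalization integral (TR = integral of PR(alpha) over alpha in [0, 1]) by a grid average; Bauckhage derives the result analytically, so this is only an illustration:

    import numpy as np

    def pagerank(A, alpha, iters=100):
        # A: column-stochastic link matrix; standard power iteration
        n = A.shape[0]
        p = np.full(n, 1.0 / n)
        for _ in range(iters):
            p = alpha * (A @ p) + (1.0 - alpha) / n
        return p

    def total_rank(A, steps=50):
        # approximate the integral of PR(alpha) d(alpha) by averaging
        alphas = np.linspace(0.0, 0.98, steps)   # stay below alpha = 1
        return np.mean([pagerank(A, a) for a in alphas], axis=0)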
  4. Jones, G.; Robertson, A.M.; Willett, P.: An introduction to genetic algorithms and to their use in information retrieval (1994) 0.02
    0.01705876 = product of:
      0.06823504 = sum of:
        0.06823504 = weight(_text_:reference in 7415) [ClassicSimilarity], result of:
          0.06823504 = score(doc=7415,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.35959643 = fieldWeight in 7415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0625 = fieldNorm(doc=7415)
      0.25 = coord(1/4)
    
    Abstract
    This paper provides an introduction to genetic algorithms, a new approach to the investigation of computationally intensive problems that may be insoluble using conventional, deterministic approaches. A genetic algorithm takes an initial set of possible starting solutions and then iteratively improves these solutions using operators analogous to those involved in Darwinian evolution. The approach is illustrated by reference to several problems in information retrieval.
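    A minimal, self-contained sketch of the loop the abstract describes, with a toy fitness function (counting 1-bits) standing in for a retrieval objective; population size, rates, and the selection scheme are illustrative assumptions:

    import random

    def fitness(bits):
        return sum(bits)   # toy objective: maximize the number of 1-bits

    def evolve(pop_size=20, length=16, generations=50, p_mut=0.05):
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]            # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)     # one-point crossover
                child = a[:cut] + b[cut:]
                child = [1 - g if random.random() < p_mut else g
                         for g in child]              # bit-flip mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    print(evolve())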
  5. Pfeifer, U.; Pennekamp, S.: Incremental processing of vague queries in interactive retrieval systems (1997) 0.02
    0.01705876 = product of:
      0.06823504 = sum of:
        0.06823504 = weight(_text_:reference in 735) [ClassicSimilarity], result of:
          0.06823504 = score(doc=735,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.35959643 = fieldWeight in 735, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0625 = fieldNorm(doc=735)
      0.25 = coord(1/4)
    
    Abstract
    The application of information retrieval techniques in interactive environments requires systems capable of efficiently processing vague queries. To reach reasonable response times, new data structures and algorithms have to be developed. In this paper we describe an approach that takes advantage of the conditions of interactive usage and special access paths. As a reference, we investigated text queries and compared our algorithms to the well-known 'Buckley/Lewit' algorithm. We achieved significant improvements in response times.
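    A hedged sketch of term-at-a-time top-k evaluation in the spirit of the Buckley/Lewit quit condition mentioned above, assuming nonnegative per-document term weights normalized to at most 1; the postings layout and names are illustrative:

    import heapq

    def top_k(query_terms, postings, k=10):
        # query_terms: [(term, weight)]; postings: term -> {doc: weight <= 1}
        acc = {}
        terms = sorted(query_terms, key=lambda t: -t[1])
        for i, (term, w) in enumerate(terms):
            for doc, dw in postings.get(term, {}).items():
                acc[doc] = acc.get(doc, 0.0) + w * dw
            remaining = sum(w2 for _, w2 in terms[i + 1:])  # max future gain
            top = heapq.nlargest(k + 1, acc.values())
            if len(top) > k and top[k - 1] - top[k] > remaining:
                break  # the top-k set (not the exact scores) is now fixed
        return heapq.nlargest(k, acc.items(), key=lambda kv: kv[1])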
  6. Guerrero-Bote, V.P.; Moya Anegón, F. de; Herrero Solana, V.: Document organization using Kohonen's algorithm (2002) 0.02
    0.01705876 = product of:
      0.06823504 = sum of:
        0.06823504 = weight(_text_:reference in 2564) [ClassicSimilarity], result of:
          0.06823504 = score(doc=2564,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.35959643 = fieldWeight in 2564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0625 = fieldNorm(doc=2564)
      0.25 = coord(1/4)
    
    Abstract
    The classification of documents from a bibliographic database is a task linked to processes of information retrieval based on partial matching. A method is described for vectorizing reference documents from LISA which permits their topological organization using Kohonen's algorithm. As an example, a map is generated of 202 documents from LISA, and an analysis is made of the possibilities of this type of neural network with respect to the development of information retrieval systems based on graphical browsing.
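    A compact sketch of Kohonen's self-organizing map as applied above to document vectors, assuming documents arrive as rows of a NumPy array; the grid size, learning rate, and decay schedule are illustrative:

    import numpy as np

    def train_som(docs, grid=(10, 10), epochs=20, lr0=0.5, radius0=3.0):
        h, w = grid
        rng = np.random.default_rng(0)
        weights = rng.random((h, w, docs.shape[1]))
        coords = np.dstack(np.mgrid[0:h, 0:w]).astype(float)  # unit positions
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)
            radius = max(radius0 * (1 - epoch / epochs), 0.5)
            for x in docs:
                dists = np.linalg.norm(weights - x, axis=2)
                bmu = np.unravel_index(np.argmin(dists), (h, w))  # best match
                d = np.linalg.norm(coords - np.array(bmu), axis=2)
                influence = np.exp(-(d ** 2) / (2 * radius ** 2))
                # pull the winner and its neighbourhood toward the input
                weights += lr * influence[..., None] * (x - weights)
        return weights

    # e.g. train_som(np.random.default_rng(1).random((202, 50)))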
  7. Watters, C.; Amoudi, A.: Geosearcher : location-based ranking of search engine results (2003) 0.02
    0.015077956 = product of:
      0.060311824 = sum of:
        0.060311824 = weight(_text_:reference in 5152) [ClassicSimilarity], result of:
          0.060311824 = score(doc=5152,freq=4.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.31784135 = fieldWeight in 5152, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5152)
      0.25 = coord(1/4)
    
    Abstract
    Watters and Amoudi describe GeoSearcher, a prototype ranking program that arranges search engine results along a geo-spatial dimension without the provision of geo-spatial meta-tags or the use of geo-spatial feature extraction. GeoSearcher uses URL analysis, IptoLL, Whois, and the Getty Thesaurus of Geographic Names to determine site location. It accepts the first 200 sites returned by a search engine, identifies their coordinates, calculates their distance from a reference point, and ranks them in ascending order by this value. For any retrieved site the system checks whether it has already been located in the current session, then sends the domain name to Whois to obtain a two-letter country code and an area code. If this fails, the name is stripped by one level and resent; if that also fails, the top-level domain is tested for being a country code. Any remaining unmatched names go to IptoLL. Distance is calculated using the center point of the geographic area and a provided reference location. A test run on a set of 100 URLs from a search successfully located 90 sites. Eighty-three pages could be manually found, and 68 had sufficient information to verify location determination; of these, 65 (95%) had been assigned reasonably correct geographic locations. A random set of URLs, used instead of a search result, yielded 80% success.
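    The final ranking step reduces to sorting by distance from a reference point. A sketch, assuming coordinates have already been resolved by the Whois/IptoLL pipeline and using the standard haversine great-circle formula (the paper does not specify its distance formula):

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # great-circle distance between two points on Earth, in km
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def rank_by_distance(sites, ref_lat, ref_lon):
        # sites: list of (url, lat, lon); nearest to the reference point first
        return sorted(sites,
                      key=lambda s: haversine_km(s[1], s[2], ref_lat, ref_lon))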
  8. Van der Veer Martens, B.; Fleet, C. van: Opening the black box of "relevance work" : a domain analysis (2012) 0.01
    0.012794068 = product of:
      0.051176272 = sum of:
        0.051176272 = weight(_text_:reference in 247) [ClassicSimilarity], result of:
          0.051176272 = score(doc=247,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.2696973 = fieldWeight in 247, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=247)
      0.25 = coord(1/4)
    
    Abstract
    In response to Hjørland's recent call for a reconceptualization of the foundations of relevance, we suggest that the sociocognitive aspects of intermediation by information agencies, such as archives and libraries, are a necessary and unexplored part of the infrastructure of the subject knowledge domains central to his recommended "view of relevance informed by a social paradigm" (2010, p. 217). From a comparative analysis of documents from 39 graduate-level introductory courses in archives, reference, and strategic/competitive intelligence taught in 13 American Library Association-accredited library and information science (LIS) programs, we identify four defining sociocognitive dimensions of "relevance work" in information agencies within Hjørland's proposed framework for relevance: tasks, time, systems, and assessors. This study is intended to supply sociocognitive content from within the relevance work domain to support further domain analytic research, and to emphasize the importance of intermediary relevance work for all subject knowledge domains.
  9. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.01
    0.012638534 = product of:
      0.050554138 = sum of:
        0.050554138 = product of:
          0.101108275 = sum of:
            0.101108275 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.101108275 = score(doc=402,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  10. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.01
    0.011058717 = product of:
      0.04423487 = sum of:
        0.04423487 = product of:
          0.08846974 = sum of:
            0.08846974 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.08846974 = score(doc=2134,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 3.2001 13:32:22
  11. Back, J.: An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.01
    0.011058717 = product of:
      0.04423487 = sum of:
        0.04423487 = product of:
          0.08846974 = sum of:
            0.08846974 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.08846974 = score(doc=3445,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    25. 8.2005 17:42:22
  12. Jacso, P.: Testing the calculation of a realistic h-index in Google Scholar, Scopus, and Web of Science for F. W. Lancaster (2008) 0.01
    0.010661725 = product of:
      0.0426469 = sum of:
        0.0426469 = weight(_text_:reference in 5586) [ClassicSimilarity], result of:
          0.0426469 = score(doc=5586,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22474778 = fieldWeight in 5586, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5586)
      0.25 = coord(1/4)
    
    Abstract
    This paper focuses on the practical limitations in the content and software of the databases that are used to calculate the h-index for assessing the publishing productivity and impact of researchers. To celebrate F. W. Lancaster's biological age of seventy-five, and "scientific age" of forty-five, this paper discusses the related features of Google Scholar, Scopus, and Web of Science (WoS), and demonstrates in the latter how a much more realistic and fair h-index can be computed for F. W. Lancaster than the one produced automatically. The 1945-2007 edition of WoS has, in my estimate, over a hundred million "orphan references" that have no counterpart master records to be attached to, and "stray references" that cite papers which do have master records but cannot be identified by the matching algorithm because of errors of omission and commission in the references of the citing works. Browsing and searching the cited reference index can therefore bring up hundreds of additional cited references to the works of an accomplished author that are ignored in the automatic process of calculating the h-index. The partially manual process doubled the h-index value for F. W. Lancaster from 13 to 26, a much more realistic value for an information scientist and professor of his stature.
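    For reference, the automatic computation the paper audits is simple once the citation counts themselves are trusted: h is the largest value such that at least h publications have at least h citations each. A sketch with illustrative counts:

    def h_index(citations):
        # h = largest h with at least h papers cited at least h times
        counts = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([48, 33, 30, 12, 7, 5, 3, 1]))  # -> 5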
  13. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.01
    0.009478901 = product of:
      0.037915602 = sum of:
        0.037915602 = product of:
          0.075831205 = sum of:
            0.075831205 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.075831205 = score(doc=58,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 6.2015 22:12:44
  14. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.01
    0.009478901 = product of:
      0.037915602 = sum of:
        0.037915602 = product of:
          0.075831205 = sum of:
            0.075831205 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.075831205 = score(doc=2051,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 6.2015 22:12:56
  15. Henzinger, M.R.: Link analysis in Web information retrieval (2000) 0.01
    0.00852938 = product of:
      0.03411752 = sum of:
        0.03411752 = weight(_text_:reference in 801) [ClassicSimilarity], result of:
          0.03411752 = score(doc=801,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.17979822 = fieldWeight in 801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=801)
      0.25 = coord(1/4)
    
    Content
    The goal of information retrieval is to find all documents relevant for a user query in a collection of documents. Decades of research in information retrieval were successful in developing and refining techniques that are solely word-based (see e.g., [2]). With the advent of the web, new sources of information became available, among them the hyperlinks between documents and records of user behavior. To be precise, hypertexts (i.e., collections of documents connected by hyperlinks) have existed and have been studied for a long time. What was new was the large number of hyperlinks created by independent individuals. Hyperlinks provide a valuable source of information for web information retrieval, as we will show in this article. This area of information retrieval is commonly called link analysis. Why would one expect hyperlinks to be useful? A hyperlink is a reference to a web page B that is contained in a web page A. When the hyperlink is clicked on in a web browser, the browser displays page B. This functionality alone is not helpful for web information retrieval. However, the way hyperlinks are typically used by authors of web pages can give them valuable information content. Typically, authors create links because they think they will be useful for the readers of the pages. Thus, links are usually either navigational aids that, for example, bring the reader back to the homepage of the site, or links that point to pages whose content augments the content of the current page. The second kind of link tends to point to high-quality pages that might be on the same topic as the page containing the link.
  16. Henzinger, M.R.: Hyperlink analysis for the Web (2001) 0.01
    0.00852938 = product of:
      0.03411752 = sum of:
        0.03411752 = weight(_text_:reference in 8) [ClassicSimilarity], result of:
          0.03411752 = score(doc=8,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.17979822 = fieldWeight in 8, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=8)
      0.25 = coord(1/4)
    
    Content
    Information retrieval is a computer science subfield whose goal is to find all documents relevant to a user query in a given collection of documents. As such, information retrieval should really be called document retrieval. Before the advent of the Web, IR systems were typically installed in libraries for use mostly by reference librarians. The retrieval algorithm for these systems was usually based exclusively on analysis of the words in the document. The Web changed all this. Now each Web user has access to various search engines whose retrieval algorithms often use not only the words in the documents but also information like the hyperlink structure of the Web or markup language tags. How are hyperlinks useful? The hyperlink functionality alone - that is, the hyperlink to Web page B that is contained in Web page A - is not directly useful in information retrieval. However, the way Web page authors use hyperlinks can give them valuable information content. Authors usually create hyperlinks they think will be useful to readers. Some may be navigational aids that, for example, take the reader back to the site's home page; others provide access to documents that augment the content of the current page. The latter tend to point to high-quality pages that might be on the same topic as the page containing the hyperlink. Web information retrieval systems can exploit this information to refine searches for relevant documents. Hyperlink analysis significantly improves the relevance of the search results, so much so that all major Web search engines claim to use some type of hyperlink analysis. However, the search engines do not disclose details about the type of hyperlink analysis they perform, mostly to avoid manipulation of search results by Web-positioning companies. In this article, I discuss how hyperlink analysis can be applied to ranking algorithms, and survey other ways Web search engines can use this analysis.
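    A minimal sketch of one canonical hyperlink-analysis algorithm of the kind surveyed here, Kleinberg's HITS, assuming a dense 0/1 adjacency matrix with A[i, j] = 1 when page i links to page j (the search engines' actual, undisclosed methods will differ):

    import numpy as np

    def hits(A, iters=50):
        # mutually reinforcing hub and authority scores over the link graph;
        # assumes the graph has at least one edge
        n = A.shape[0]
        hubs = np.ones(n)
        auths = np.ones(n)
        for _ in range(iters):
            auths = A.T @ hubs                  # linked-to by good hubs
            auths /= np.linalg.norm(auths)
            hubs = A @ auths                    # links to good authorities
            hubs /= np.linalg.norm(hubs)
        return hubs, auths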
  17. Chang, R.: Keyword searching and indexing (1993) 0.01
    0.006945974 = product of:
      0.027783897 = sum of:
        0.027783897 = product of:
          0.055567794 = sum of:
            0.055567794 = weight(_text_:services in 7223) [ClassicSimilarity], result of:
              0.055567794 = score(doc=7223,freq=2.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.3245064 = fieldWeight in 7223, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7223)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Technical services quarterly. 10(1993) no.4, S.75-86
  18. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.01
    0.006319267 = product of:
      0.025277069 = sum of:
        0.025277069 = product of:
          0.050554138 = sum of:
            0.050554138 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.050554138 = score(doc=5108,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20. 1.2007 18:30:22
  19. Faloutsos, C.: Signature files (1992) 0.01
    0.006319267 = product of:
      0.025277069 = sum of:
        0.025277069 = product of:
          0.050554138 = sum of:
            0.050554138 = weight(_text_:22 in 3499) [ClassicSimilarity], result of:
              0.050554138 = score(doc=3499,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.30952093 = fieldWeight in 3499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3499)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    7. 5.1999 15:22:48
  20. Losada, D.E.; Barreiro, A.: Embedding term similarity and inverse document frequency into a logical model of information retrieval (2003) 0.01
    0.006319267 = product of:
      0.025277069 = sum of:
        0.025277069 = product of:
          0.050554138 = sum of:
            0.050554138 = weight(_text_:22 in 1422) [ClassicSimilarity], result of:
              0.050554138 = score(doc=1422,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.30952093 = fieldWeight in 1422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1422)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2003 19:27:23

Languages

  • e 39
  • d 5

Types

  • a 41
  • m 2
  • el 1
  • r 1