Search (162 results, page 1 of 9)

  Active filters:
  • language_ss:"e"
  • type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.35
    0.34722906 = product of:
      0.6944581 = sum of:
        0.09920831 = product of:
          0.29762492 = sum of:
            0.29762492 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.29762492 = score(doc=1826,freq=2.0), product of:
                0.3177388 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03747799 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.29762492 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.29762492 = score(doc=1826,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.29762492 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.29762492 = score(doc=1826,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.5 = coord(3/6)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
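    The explain tree above can be re-derived directly from Lucene's ClassicSimilarity formula: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, and partial matches are scaled by coord(). A minimal Python sketch that recomputes the 0.34722906 shown for record 1, using only the numbers from the explain output above:

    from math import sqrt, isclose

    # Values copied from the explain output of record 1 (doc=1826).
    idf, query_norm = 8.478011, 0.03747799
    freq, field_norm = 2.0, 0.078125

    tf = sqrt(freq)                            # 1.4142135 = tf(freq=2.0)
    query_weight = idf * query_norm            # 0.3177388
    field_weight = tf * idf * field_norm       # 0.93669677
    term_score = query_weight * field_weight   # 0.29762492 per matching term

    # The "_text_:3a" term sits in an inner sum scaled by coord(1/3);
    # the two "_text_:2f" terms contribute their weight directly.
    inner = term_score * (1 / 3)               # 0.09920831
    summed = inner + term_score + term_score   # 0.6944581  = "sum of"
    final = summed * (3 / 6)                   # 0.34722906 after coord(3/6)

    print(round(final, 8))
    assert isclose(final, 0.34722906, rel_tol=1e-5)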
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.28
    0.27778324 = product of:
      0.5555665 = sum of:
        0.07936664 = product of:
          0.23809992 = sum of:
            0.23809992 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.23809992 = score(doc=230,freq=2.0), product of:
                0.3177388 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03747799 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.23809992 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.23809992 = score(doc=230,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.23809992 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.23809992 = score(doc=230,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.5 = coord(3/6)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for link-based ranking algorithms (2006) 0.04
    0.043203976 = product of:
      0.12961192 = sum of:
        0.121149 = weight(_text_:ranking in 2565) [ClassicSimilarity], result of:
          0.121149 = score(doc=2565,freq=8.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5976189 = fieldWeight in 2565, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2565)
        0.008462917 = product of:
          0.025388751 = sum of:
            0.025388751 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
              0.025388751 = score(doc=2565,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.19345059 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact on rank quality and on convergence speed. Even though our results suggest that PageRank can be approximated with other simpler forms of rankings that may be computed more efficiently, our focus is of a more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
    Date
    16. 1.2016 10:22:28
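    The abstract above describes "functional rankings": importance is propagated over paths and weighted by a damping function of the path length, with PageRank corresponding to exponential decay damping(t) = (1-alpha)*alpha^t. A minimal sketch of such a ranking, truncated after a fixed path length; the four-node graph and the truncation depth are illustrative assumptions, not taken from the paper:

    import numpy as np

    def functional_rank(adj, damping, t_max=50):
        """Rank = sum_t damping(t) * v P^t, with P the row-normalized link matrix."""
        adj = np.asarray(adj, dtype=float)
        n = adj.shape[0]
        out = adj.sum(axis=1, keepdims=True)
        P = np.divide(adj, out, out=np.zeros_like(adj), where=out > 0)
        v = np.full(n, 1.0 / n)                 # uniform start vector
        rank, vt = np.zeros(n), v.copy()
        for t in range(t_max + 1):
            rank += damping(t) * vt             # contribution of paths of length t
            vt = vt @ P
        return rank

    alpha = 0.85
    adj = [[0, 1, 1, 0],
           [0, 0, 1, 0],
           [1, 0, 0, 1],
           [0, 0, 1, 0]]

    pagerank_like = functional_rank(adj, lambda t: (1 - alpha) * alpha ** t)  # exponential decay
    hyperbolic = functional_rank(adj, lambda t: 1.0 / (t + 1) ** 2)           # hyperbolic decay
    print(pagerank_like.round(3), hyperbolic.round(3))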
  4. Lewandowski, D.: How can library materials be ranked in the OPAC? (2009) 0.03
    0.03348382 = product of:
      0.2009029 = sum of:
        0.2009029 = weight(_text_:ranking in 2810) [ClassicSimilarity], result of:
          0.2009029 = score(doc=2810,freq=22.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.9910388 = fieldWeight in 2810, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2810)
      0.16666667 = coord(1/6)
    
    Abstract
    Some Online Public Access Catalogues offer a ranking component. However, ranking there is merely text-based and is doomed to fail due to limited text in bibliographic data. The main assumption for the talk is that we are in a situation where the appropriate ranking factors for OPACs should be defined, while the implementation is no major problem. We must define what we want, and not so much focus on the technical work. Some deep thinking is necessary on the "perfect results set" and how we can achieve it through ranking. The talk presents a set of potential ranking factors and clustering possibilities for further discussion. A look at commercial Web search engines could provide us with ideas of how ranking can be improved with additional factors. Search engines are way beyond pure text-based ranking and apply ranking factors in groups like popularity, freshness, personalisation, etc. The talk describes the main factors used in search engines and how derivatives of these could be used for libraries' purposes. The goal of ranking is to provide the user with the best-suited results on top of the results list. How can this goal be achieved with the library catalogue and also concerning the library's different collections and databases? The assumption is that ranking of such materials is a complex problem and is as yet nowhere near solved. Libraries should focus on ranking to improve user experience.
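    The talk's central point - combine the text-based score with additional factor groups such as popularity and freshness - can be pictured as a weighted sum. A minimal sketch; the factor names, weights and sample record are illustrative assumptions, and choosing the weights is exactly the open question raised above:

    from datetime import date

    WEIGHTS = {"text": 0.5, "popularity": 0.3, "freshness": 0.2}   # assumed weights

    def freshness(pub_year, today=None):
        today = today or date.today()
        return 1.0 / (1.0 + max(today.year - pub_year, 0))   # newer records score closer to 1

    def rank_score(record):
        factors = {
            "text": record["text_score"],        # e.g. a tf-idf/BM25 score scaled to 0..1
            "popularity": record["loans_norm"],  # e.g. circulation counts scaled to 0..1
            "freshness": freshness(record["year"]),
        }
        return sum(WEIGHTS[k] * v for k, v in factors.items())

    record = {"text_score": 0.72, "loans_norm": 0.4, "year": 2007}
    print(round(rank_score(record), 3))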
  5. Page, L.; Brin, S.; Motwani, R.; Winograd, T.: The PageRank citation ranking : Bringing order to the Web (1999) 0.03
    0.028268103 = product of:
      0.16960861 = sum of:
        0.16960861 = weight(_text_:ranking in 496) [ClassicSimilarity], result of:
          0.16960861 = score(doc=496,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.8366664 = fieldWeight in 496, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.109375 = fieldNorm(doc=496)
      0.16666667 = coord(1/6)
    
  6. Hummingbird : Neuer Suchalgorithmus bei Google (2013) 0.03
    0.026476588 = product of:
      0.15885952 = sum of:
        0.15885952 = weight(_text_:suchmaschine in 2520) [ClassicSimilarity], result of:
          0.15885952 = score(doc=2520,freq=8.0), product of:
            0.21191008 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.03747799 = queryNorm
            0.7496553 = fieldWeight in 2520, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.046875 = fieldNorm(doc=2520)
      0.16666667 = coord(1/6)
    
    Abstract
    With "Hummingbird", Google has developed and already rolled out a new search algorithm. According to Google, it is one of the biggest changes ever made to the search engine, affecting around 90 percent of all search queries. At a small event marking the search engine's 15th birthday, Google invited guests into the garage in which the company was founded. There, Google revealed one of the biggest changes to the search engine so far: without users noticing, Google had replaced its search algorithm about a month earlier. The new algorithm, code-named "Hummingbird", is meant to let Google better understand search queries and the relationships between things. This should enable the search engine to handle more complex queries, which users pose more and more often - partly because more and more users query Google on their smartphones by voice input. In the past, Google merely tried to match the keywords of a query against web pages. For some time now, however, Google has been working on understanding queries better in order to show better search results.
  7. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.02
    0.023012474 = product of:
      0.06903742 = sum of:
        0.0605745 = weight(_text_:ranking in 2564) [ClassicSimilarity], result of:
          0.0605745 = score(doc=2564,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 2564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2564)
        0.008462917 = product of:
          0.025388751 = sum of:
            0.025388751 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.025388751 = score(doc=2564,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t**k*alpha**t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
    Date
    16. 1.2016 10:22:28
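    The abstract above treats PageRank as a function of the damping factor alpha and works with the Power Method. A minimal power-iteration sketch that can be evaluated for several values of alpha; the three-node example graph is an illustrative assumption:

    import numpy as np

    def pagerank(adj, alpha=0.85, tol=1e-12, max_iter=1000):
        """Power Method for the PageRank vector of a small link graph."""
        adj = np.asarray(adj, dtype=float)
        n = adj.shape[0]
        out = adj.sum(axis=1, keepdims=True)
        P = np.divide(adj, out, out=np.zeros_like(adj), where=out > 0)
        P[out.ravel() == 0] = 1.0 / n           # dangling nodes link everywhere
        x = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            x_new = alpha * (x @ P) + (1 - alpha) / n
            if np.abs(x_new - x).sum() < tol:
                break
            x = x_new
        return x

    adj = [[0, 1, 1], [1, 0, 0], [0, 1, 0]]
    for a in (0.5, 0.85, 0.99):                 # how the ranking shifts as alpha grows
        print(a, pagerank(adj, alpha=a).round(4))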
  8. Robertson, S.: The state of information retrieval : a researcher's view 0.02
    0.016153201 = product of:
      0.0969192 = sum of:
        0.0969192 = weight(_text_:ranking in 1944) [ClassicSimilarity], result of:
          0.0969192 = score(doc=1944,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.47809508 = fieldWeight in 1944, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0625 = fieldNorm(doc=1944)
      0.16666667 = coord(1/6)
    
    Abstract
    For the last ten years Stephen Robertson has been a researcher at the Microsoft Research Laboratory. He previously spent twenty years at City University, where he started the Centre for Interactive Systems Research and still retains a part-time professorship. His work on probabilistic theory underpins the algorithms behind every serious search engine today. In his talk, he gave a non-technical overview of some current concerns of core IR research, in particular on the use of different kinds of evidence in searching and ranking.
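    Robertson's probabilistic work is best known in practice through the Okapi BM25 weighting scheme. A minimal sketch of BM25 scoring; the toy corpus is an illustrative assumption and k1=1.2, b=0.75 are common default parameters:

    from math import log

    def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
        """Sum of BM25 weights of the query terms in one tokenized document."""
        N = len(corpus)
        avgdl = sum(len(d) for d in corpus) / N
        score = 0.0
        for term in query_terms:
            df = sum(1 for d in corpus if term in d)
            if df == 0:
                continue
            idf = log(1 + (N - df + 0.5) / (df + 0.5))
            tf = doc.count(term)
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
        return score

    corpus = [["probabilistic", "retrieval", "model"],
              ["vector", "space", "retrieval"],
              ["search", "engine", "evaluation"]]
    print(round(bm25_score(["probabilistic", "retrieval"], corpus[0], corpus), 3))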
  9. Janée, G.; Frew, J.; Hill, L.L.: Issues in georeferenced digital libraries (2004) 0.01
    0.014134051 = product of:
      0.084804304 = sum of:
        0.084804304 = weight(_text_:ranking in 1165) [ClassicSimilarity], result of:
          0.084804304 = score(doc=1165,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 1165, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1165)
      0.16666667 = coord(1/6)
    
    Abstract
    Based on a decade's experience with the Alexandria Digital Library Project, seven issues are presented that arise in creating georeferenced digital libraries, and that appear to be intrinsic to the problem of creating any library-like information system that operates on georeferenced and geospatial resources. The first and foremost issue is providing discovery of georeferenced resources. Related to discovery are the issues of gazetteer integration and specialized ranking of search results. Strong data typing and scalability are implementation issues. Providing spatial context is a critical user interface issue. Finally, sophisticated resource access mechanisms are necessary to operate on geospatial resources.
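    One of the issues named above is specialized ranking of search results for georeferenced resources. A common approach is to rank by how much of the query region a resource's bounding box covers; a minimal sketch under that assumption, with illustrative coordinates:

    def area(box):
        x1, y1, x2, y2 = box
        return max(x2 - x1, 0) * max(y2 - y1, 0)

    def overlap_ratio(query, resource):
        """Share of the query region covered by the resource's bounding box."""
        ix1, iy1 = max(query[0], resource[0]), max(query[1], resource[1])
        ix2, iy2 = min(query[2], resource[2]), min(query[3], resource[3])
        return area((ix1, iy1, ix2, iy2)) / area(query) if area(query) else 0.0

    query_box = (-120.0, 34.0, -119.0, 35.0)    # (west, south, east, north)
    resources = {"map_a": (-121.0, 33.0, -119.5, 34.5),
                 "map_b": (-119.2, 34.2, -118.0, 36.0)}
    ranked = sorted(resources, key=lambda r: overlap_ratio(query_box, resources[r]), reverse=True)
    print([(r, round(overlap_ratio(query_box, resources[r]), 2)) for r in ranked])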
  10. Hoffmann, P.; Médini, L.; Ghodous, P.: Using context to improve semantic interoperability (2006) 0.01
    0.014134051 = product of:
      0.084804304 = sum of:
        0.084804304 = weight(_text_:ranking in 4434) [ClassicSimilarity], result of:
          0.084804304 = score(doc=4434,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 4434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4434)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper presents an approach to enhance interoperability between heterogeneous ontologies. It consists of adapting the ranking of concepts to the final users and their work context. The computations are based on an upper domain ontology, a task hierarchy and a user profile. As prerequisites, OWL ontologies have to be given, and an articulation ontology has to be built.
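    The idea of adapting concept ranking to the user's profile and current task can be pictured as scoring each concept by its overlap with profile terms and task terms. A minimal sketch; the term lists, weights and scoring rule are illustrative assumptions, not the paper's actual computation over its upper domain ontology:

    def concept_score(concept_terms, profile_terms, task_terms, w_profile=0.4, w_task=0.6):
        """Rank a concept higher the more it overlaps the user profile and the current task."""
        c = set(concept_terms)
        p_overlap = len(c & set(profile_terms)) / max(len(c), 1)
        t_overlap = len(c & set(task_terms)) / max(len(c), 1)
        return w_profile * p_overlap + w_task * t_overlap

    concepts = {"Tolerance": ["tolerance", "dimension", "fit"],
                "Material": ["material", "alloy", "steel"]}
    profile = ["mechanical", "engineer", "steel"]
    task = ["tolerance", "fit", "assembly"]
    for name, terms in sorted(concepts.items(),
                              key=lambda kv: concept_score(kv[1], profile, task),
                              reverse=True):
        print(name, round(concept_score(terms, profile, task), 2))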
  11. Carrière, J.; Kazman, R.: WebQuery : searching and visualizing the Web through connectivity (1996) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 2676) [ClassicSimilarity], result of:
          0.0726894 = score(doc=2676,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 2676, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2676)
      0.16666667 = coord(1/6)
    
    Abstract
    Finding information located somewhere on the WWW is an error-prone and frustrating task. The WebQuery system offers a powerful new method for searching the Web based on connectivity and content. We do this by examining links among the nodes returned in a keyword-based query. We then rank the nodes, giving the highest rank to the most highly connected nodes. By doing so, we are finding 'hot spots' on the Web that contain information germane to a user's query. WebQuery not only ranks and filters the results of a Web query, it also extends the result set beyond what the search engine retrieves, by finding 'interesting' sites that are highly connected to those sites returned by the original query. Even with WebQuery filtering and ranking query results, the result sets can be enormous. So, we need to visualize the returned information. We explore several techniques for visualizing this information - including cone trees, 2D graphs, 3D graphs, lists, and bullseyes - and discuss the criteria for using each of the techniques
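    The core step described above - rank the nodes of the result set by how highly connected they are - reduces to sorting pages by their degree in the link graph. A minimal sketch; the page names and links are illustrative assumptions:

    from collections import defaultdict

    links = {                                   # directed links among pages of a result set
        "a.html": ["b.html", "c.html"],
        "b.html": ["c.html"],
        "c.html": ["a.html"],
        "d.html": ["c.html"],
    }

    degree = defaultdict(int)                   # in-degree + out-degree per node
    for src, targets in links.items():
        degree[src] += len(targets)
        for tgt in targets:
            degree[tgt] += 1

    hot_spots = sorted(degree, key=degree.get, reverse=True)
    print([(page, degree[page]) for page in hot_spots])   # most highly connected first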
  12. Mann, T.: The changing nature of the catalog and its integration with other discovery tools. Final report. March 17, 2006. Prepared for the Library of Congress by Karen Calhoun : A critical review (2006) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 5012) [ClassicSimilarity], result of:
          0.0726894 = score(doc=5012,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 5012, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=5012)
      0.16666667 = coord(1/6)
    
    Abstract
    According to the Calhoun report, library operations that are not digital, that do not result in resources that are remotely accessible, that involve professional human judgement or expertise, or that require conceptual categorization and standardization rather than relevance ranking of keywords, do not fit into its proposed "leadership" strategy. This strategy itself, however, is based on an inappropriate business model - and a misrepresentation of that business model to begin with. The Calhoun report draws unjustified conclusions about the digital age, inflates wishful thinking, fails to make critical distinctions, and disregards (as well as mischaracterizes) an alternative "niche" strategy for research libraries, to promote scholarship (rather than increase "market position"). Its recommendations to eliminate Library of Congress Subject Headings, and to use "fast turnaround" time as the "gold standard" in cataloging, are particularly unjustified, and would have serious negative consequences for the capacity of research libraries to promote scholarly research.
  13. Whitney, C.; Schiff, L.: The Melvyl Recommender Project : developing library recommendation services (2006) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 1173) [ClassicSimilarity], result of:
          0.0726894 = score(doc=1173,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 1173, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1173)
      0.16666667 = coord(1/6)
    
    Abstract
    Popular commercial on-line services such as Google, e-Bay, Amazon, and Netflix have evolved quickly over the last decade to help people find what they want, developing information retrieval strategies such as usefully ranked results, spelling correction, and recommender systems. Online library catalogs (OPACs), in contrast, have changed little and are notoriously difficult for patrons to use (University of California Libraries, 2005). Over the past year (June 2005 to the present), the Melvyl Recommender Project (California Digital Library, 2005) has been exploring methods and feasibility of closing the gap between features that library patrons want and have come to expect from information retrieval systems and what libraries are currently equipped to deliver. The project team conducted exploratory work in five topic areas: relevance ranking, auto-correction, use of a text-based discovery system, user interface strategies, and recommending. This article focuses specifically on the recommending portion of the project and potential extensions to that work.
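    The recommending portion described above can be approximated with item-to-item co-occurrence counts over usage data. A minimal sketch; the sample sessions are illustrative assumptions, not Melvyl circulation data:

    from collections import Counter
    from itertools import combinations

    sessions = [                         # items used together in one session
        ["ISBN-1", "ISBN-2", "ISBN-3"],
        ["ISBN-2", "ISBN-3"],
        ["ISBN-1", "ISBN-3"],
    ]

    co = Counter()
    for items in sessions:
        for a, b in combinations(sorted(set(items)), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1

    def recommend(item, top_n=2):
        scores = Counter({b: n for (a, b), n in co.items() if a == item})
        return scores.most_common(top_n)

    print(recommend("ISBN-1"))           # items that most often co-occur with ISBN-1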
  14. Schomburg, S.; Prante, J.: Search Engine Federation in Libraries - Suchmaschinenföderation in Bibliotheken (2009) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 2809) [ClassicSimilarity], result of:
          0.0726894 = score(doc=2809,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 2809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2809)
      0.16666667 = coord(1/6)
    
    Abstract
    The hbz (Academic Library Center, Cologne) has a strong focus on search engine applications: beyond the projected integration of the respective technologies into the new release of the Digital Library portal solution (DigiBib6), the vascoda background services also apply and take advantage of search engine technology. Experience since 2003 has shown that building and updating search engine indexes consumes a vast amount of resources. The use of search engine federations, however, promises major improvements: the total amount of data records held in linked indexes can be almost unlimited while still allowing a joint output of all hits retrieved. A federation also comes with excellent response times, and retrieved hits can refer or link back into the original system's layout. Nonetheless, the major challenge these days lies in the different search engine technologies, e.g. Lucene and FAST, the variations in ranking, and the implementation or non-implementation of so-called drill-downs. The lecture is designed to give a brief insight into the hbz search engine workshop, with an introduction to the current state of the project.
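    The obstacle named above - different engines (e.g. Lucene and FAST) rank on incomparable score scales - is usually addressed by normalizing scores before merging the hit lists. A minimal sketch using min-max normalization; the engine names, scores and the choice of normalization are illustrative assumptions:

    def normalize(hits):
        """Min-max scale one engine's scores into 0..1."""
        scores = [s for _, s in hits]
        lo, hi = min(scores), max(scores)
        return [(doc, 1.0 if hi == lo else (s - lo) / (hi - lo)) for doc, s in hits]

    def federate(result_lists):
        merged = {}
        for hits in result_lists:
            for doc, s in normalize(hits):
                merged[doc] = max(merged.get(doc, 0.0), s)   # keep best normalized score
        return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

    lucene_hits = [("doc1", 7.2), ("doc2", 3.1), ("doc3", 1.0)]
    fast_hits = [("doc2", 0.92), ("doc4", 0.55)]
    print(federate([lucene_hits, fast_hits]))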
  15. Mao, J.; Xu, W.; Yang, Y.; Wang, J.; Yuille, A.L.: Explain images with multimodal recurrent neural networks (2014) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 1557) [ClassicSimilarity], result of:
          0.0726894 = score(doc=1557,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 1557, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1557)
      0.16666667 = coord(1/6)
    
    Abstract
    In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12 [8], Flickr 8K [28], and Flickr 30K [13]). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.
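    The data flow described above - the distribution over the next word is a function of the previous words and the image - can be shown with a toy, untrained stand-in for the m-RNN. A minimal sketch with random weights; the vocabulary, dimensions and greedy decoding loop are illustrative assumptions and the output is meaningless without training:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["<s>", "a", "dog", "on", "grass", "</s>"]
    V, D, H = len(vocab), 8, 16                 # vocab size, feature dim, hidden dim

    # Untrained random parameters: this only illustrates the data flow.
    W_embed = rng.normal(size=(V, D))
    W_word, W_img, W_hid = rng.normal(size=(D, H)), rng.normal(size=(D, H)), rng.normal(size=(H, H))
    W_out = rng.normal(size=(H, V))

    def next_word_distribution(prev_word_id, hidden, image_vec):
        hidden = np.tanh(W_embed[prev_word_id] @ W_word + image_vec @ W_img + hidden @ W_hid)
        logits = hidden @ W_out
        probs = np.exp(logits - logits.max())
        return probs / probs.sum(), hidden

    image_vec = rng.normal(size=D)              # stand-in for a CNN image feature
    word_id, hidden, sentence = 0, np.zeros(H), []
    for _ in range(6):                          # greedy decoding
        probs, hidden = next_word_distribution(word_id, hidden, image_vec)
        word_id = int(probs.argmax())
        if vocab[word_id] == "</s>":
            break
        sentence.append(vocab[word_id])
    print(sentence)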
  16. Karpathy, A.; Fei-Fei, L.: Deep visual-semantic alignments for generating image descriptions (2015) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 1868) [ClassicSimilarity], result of:
          0.0726894 = score(doc=1868,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 1868, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1868)
      0.16666667 = coord(1/6)
    
    Abstract
    We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate the effectiveness of our alignment model with ranking experiments on Flickr8K, Flickr30K and COCO datasets, where we substantially improve on the state of the art. We then show that the sentences created by our generative model outperform retrieval baselines on the three aforementioned datasets and a new dataset of region-level annotations.
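    The ranking experiments mentioned above score image-sentence pairs in a shared embedding space. A minimal sketch that ranks candidate sentences for one image by cosine similarity of already-computed embeddings; the random vectors are stand-ins for the learned CNN and RNN embeddings:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(1)
    image_embedding = rng.normal(size=128)          # stand-in for a CNN image/region embedding
    sentence_embeddings = {                         # stand-ins for RNN sentence embeddings
        "a dog runs on the grass": rng.normal(size=128),
        "a plate of food on a table": rng.normal(size=128),
        "two people ride bicycles": rng.normal(size=128),
    }

    ranked = sorted(sentence_embeddings,
                    key=lambda s: cosine(image_embedding, sentence_embeddings[s]),
                    reverse=True)
    for s in ranked:
        print(round(cosine(image_embedding, sentence_embeddings[s]), 3), s)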
  17. Manguinhas, H.; Freire, N.; Machado, J.; Borbinha, J.: Supporting multilingual bibliographic resource discovery with Functional Requirements for Bibliographic Records (2012) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 133) [ClassicSimilarity], result of:
          0.0605745 = score(doc=133,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=133)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper describes an experiment exploring the hypothesis that innovative application of the Functional Requirements for Bibliographic Records (FRBR) principles can complement traditional bibliographic resource discovery systems in order to improve the user experience. A specialized service was implemented that, when given a plain list of results from a regular online catalogue, was able to process, enrich and present that list in a more relevant way for the user. This service pre-processes the records of a traditional online catalogue in order to build a semantic structure following the FRBR model. The service also explores web search features that have been revolutionizing the way users conceptualize resource discovery, such as relevance ranking and metasearching. This work was developed in the context of the TELPlus project. We processed nearly one hundred thousand bibliographic and authority records, in multiple languages, and originating from twelve European national libraries. This paper describes the architecture of the service and the main challenges faced, especially concerning the extraction and linking of the relevant FRBR entities from the bibliographic metadata produced by the libraries. The service was evaluated by end users, who filled out a questionnaire after using a traditional online catalogue and the new service, both with the same bibliographic collection. The analysis of the results supports the hypothesis that FRBR can be implemented for resource discovery in a non-intrusive way, reusing the data of any existing traditional bibliographic system.
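    Building the FRBR structure from flat records essentially means collapsing records that represent the same work into one work set. A minimal sketch that groups records by a normalized author/title key; the records and the key choice are illustrative assumptions, and the project's actual extraction rules are far richer:

    from collections import defaultdict
    import re

    def work_key(record):
        """Normalized (author, title) key used to collapse records into one FRBR work set."""
        norm = lambda s: re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()
        return (norm(record["author"]), norm(record["title"]))

    records = [
        {"id": 1, "author": "Cervantes, Miguel de", "title": "Don Quixote", "lang": "eng"},
        {"id": 2, "author": "Cervantes, Miguel de", "title": "Don Quixote!", "lang": "fre"},
        {"id": 3, "author": "Eco, Umberto", "title": "Il nome della rosa", "lang": "ita"},
    ]

    works = defaultdict(list)
    for rec in records:
        works[work_key(rec)].append(rec["id"])

    for key, ids in works.items():
        print(key, "->", ids)   # records grouped under one work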
  18. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 2861) [ClassicSimilarity], result of:
          0.0605745 = score(doc=2861,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 2861, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2861)
      0.16666667 = coord(1/6)
    
    Abstract
    Today's conventional search engines hardly provide the essential content relevant to the user's search query. This is because the context and semantics of the user's request are not analyzed to the full extent. Hence the need for semantic web search arises. Semantic web search (SWS) is an emerging area of web search which combines Natural Language Processing and Artificial Intelligence. The objective of the work presented here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not just a mere keyword search; it is one layer above what Google or any other search engine retrieves by analyzing just the keywords. Here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained will be accurate enough to satisfy the request made by the user, and the level of accuracy is enhanced since the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the relevant links. For ranking, an algorithm has been applied which fetches more apt results for the user query.
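    The keyword expansion mentioned above enriches the query with related concepts from the domain ontology before re-ranking. A minimal sketch; the tiny university-domain mapping is an illustrative assumption, not SIEU's knowledge base:

    # Toy university-domain "ontology": each term maps to related concepts.
    ontology = {
        "lecturer": ["professor", "faculty", "teaching staff"],
        "course": ["module", "class", "curriculum"],
        "exam": ["assessment", "test"],
    }

    def expand_query(query):
        terms = query.lower().split()
        expanded = list(terms)
        for term in terms:
            expanded.extend(ontology.get(term, []))
        return expanded

    print(expand_query("course exam schedule"))
    # ['course', 'exam', 'schedule', 'module', 'class', 'curriculum', 'assessment', 'test']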
  19. Austin, D.: How Google finds your needle in the Web's haystack : as we'll see, the trick is to ask the web itself to rank the importance of pages... (2006) 0.01
    0.009994283 = product of:
      0.0599657 = sum of:
        0.0599657 = weight(_text_:ranking in 93) [ClassicSimilarity], result of:
          0.0599657 = score(doc=93,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29580626 = fieldWeight in 93, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.02734375 = fieldNorm(doc=93)
      0.16666667 = coord(1/6)
    
    Abstract
    Imagine a library containing 25 billion documents but with no centralized organization and no librarians. In addition, anyone may add a document at any time without telling anyone. You may feel sure that one of the documents contained in the collection has a piece of information that is vitally important to you, and, being impatient like most of us, you'd like to find it in a matter of seconds. How would you go about doing it? Posed in this way, the problem seems impossible. Yet this description is not too different from the World Wide Web, a huge, highly-disorganized collection of documents in many different formats. Of course, we're all familiar with search engines (perhaps you found this article using one) so we know that there is a solution. This article will describe Google's PageRank algorithm and how it returns pages from the web's collection of 25 billion documents that match search criteria so well that "google" has become a widely used verb. Most search engines, including Google, continually run an army of computer programs that retrieve pages from the web, index the words in each document, and store this information in an efficient format. Each time a user asks for a web search using a search phrase, such as "search engine," the search engine determines all the pages on the web that contain the words in the search phrase. (Perhaps additional information such as the distance between the words "search" and "engine" will be noted as well.) Here is the problem: Google now claims to index 25 billion pages. Roughly 95% of the text in web pages is composed from a mere 10,000 words. This means that, for most searches, there will be a huge number of pages containing the words in the search phrase. What is needed is a means of ranking the importance of the pages that fit the search criteria so that the pages can be sorted with the most important pages at the top of the list. One way to determine the importance of pages is to use a human-generated ranking. For instance, you may have seen pages that consist mainly of a large number of links to other resources in a particular area of interest. Assuming the person maintaining this page is reliable, the pages referenced are likely to be useful. Of course, the list may quickly fall out of date, and the person maintaining the list may miss some important pages, either unintentionally or as a result of an unstated bias. Google's PageRank algorithm assesses the importance of web pages without human evaluation of the content. In fact, Google feels that the value of its service is largely in its ability to provide unbiased results to search queries; Google claims, "the heart of our software is PageRank." As we'll see, the trick is to ask the web itself to rank the importance of pages.
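    The first step described above - index the words of each document so that all pages containing the query terms can be found quickly - is an inverted index. A minimal sketch with illustrative documents:

    from collections import defaultdict

    docs = {
        "page1": "search engine ranking with pagerank",
        "page2": "library catalog search",
        "page3": "pagerank ranks pages by links",
    }

    index = defaultdict(set)                    # term -> set of page ids
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)

    def candidates(query):
        """Pages containing every word of the query phrase."""
        sets = [index.get(term, set()) for term in query.lower().split()]
        return set.intersection(*sets) if sets else set()

    print(candidates("search engine"))          # {'page1'}
    print(candidates("pagerank"))               # {'page1', 'page3'}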
  20. bbu/c't: Ask Jeeves mit verbesserten Suchfunktionen (2005) 0.01
    0.008825529 = product of:
      0.052953172 = sum of:
        0.052953172 = weight(_text_:suchmaschine in 3453) [ClassicSimilarity], result of:
          0.052953172 = score(doc=3453,freq=2.0), product of:
            0.21191008 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.03747799 = queryNorm
            0.2498851 = fieldWeight in 3453, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.03125 = fieldNorm(doc=3453)
      0.16666667 = coord(1/6)
    
    Abstract
    With search functions that are not entirely new but have been reworked, Ask Jeeves - part of the corporate empire of US media mogul Barry Diller - is extending the feature set of its search engine. The result-refinement function Focus presents the searcher with a list in the upper right of the screen that is meant to break the search topic down thematically. A second new feature promises precise answers to search entries formulated as questions. The entry "Lady Diana", for example, yields a list with the items Princess Di, Princess Dianas Life, Princess Diana's Wedding. What is interesting is that this list is not a single monolithic block of keywords but is split into three categories: "Narrow Your Search", "Expand Your Search" and "Related Names". While the examples just mentioned come from the first category, "Expand Your Search" contains entries such as Royal Family, Princess Di Ring, Princess Di Prince Charles History or Prince William Harry, but also Who Is Louis De Funes? "Related Names" points to entries such as Diana Spencer, Prince Harry or Imran Khan. The search function is thus meant to support thematic narrowing or broadening as well as continuing the search with a related topic. For the question "who invented the telephone" the searcher receives as the first entry the answer "The telephone was invented by Alexander Graham Bell", marked in red as a "Web Answer". What is remarkable here is that the question is answered not merely with a matching web page but with a fully formulated answer quoted directly from the suggested page. The question "who is the mother of Albert Einstein" at least produces an entry under "Narrow Your Search" with "Albert Einstein Family tree". One further change is probably still ahead for Ask Jeeves: at a press conference in San Francisco, Chief Executive Barry Diller remarked that the company is considering renaming Ask Jeeves, probably dropping one of the two words. For the search entry "How will Ask Jeeves be called in the future", however, there is as yet no "Web Answer". (26.05.2005 15:30)

Types

  • a 71
  • i 5
  • p 2
  • r 2
  • s 2
  • m 1
  • n 1
  • x 1