Search (4 results, page 1 of 1)

  • classification_ss:"025.04 / DDC22ger"
  • language_ss:"e"
  1. Lavrenko, V.: A generative theory of relevance (2009) 0.01
    0.0052794483 = product of:
      0.021117793 = sum of:
        0.021117793 = weight(_text_:data in 3306) [ClassicSimilarity], result of:
          0.021117793 = score(doc=3306,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.17468026 = fieldWeight in 3306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3306)
      0.25 = coord(1/4)
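
    The indented tree above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. The following is a minimal Python sketch only - the variable names are mine, and the queryNorm value is copied from the tree rather than recomputed from the index - showing how the score for this hit is assembled from the reported figures:

        import math

        # Figures for the term "data" in doc 3306, copied from the explain tree above.
        freq       = 2.0          # termFreq of "data" in the matched field
        doc_freq   = 5088         # docFreq: documents containing "data"
        max_docs   = 44218        # maxDocs: documents in the index
        query_norm = 0.03823278   # queryNorm, as reported
        field_norm = 0.0390625    # fieldNorm (length norm) for this field
        coord      = 1 / 4        # coord(1/4): 1 of 4 query clauses matched

        tf  = math.sqrt(freq)                           # 1.4142135
        idf = 1 + math.log(max_docs / (doc_freq + 1))   # ~3.1620505

        query_weight = idf * query_norm                 # ~0.120894  (queryWeight)
        field_weight = tf * idf * field_norm            # ~0.1746803 (fieldWeight)
        score = coord * query_weight * field_weight     # ~0.0052794, as shown for hit 1
        print(score)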
    
    Abstract
    A modern information retrieval system must be able to find, organize and present very different manifestations of information - such as text, pictures, videos or database records - any of which may be relevant to the user. However, the concept of relevance, while seemingly intuitive, is hard to define and even harder to model formally. Lavrenko does not attempt to put forward a new definition of relevance, nor does he argue why any particular definition might be theoretically superior or more complete. Instead, he takes a widely accepted, albeit somewhat conservative, definition, makes several assumptions, and from them develops a new probabilistic model that explicitly captures that notion of relevance. With this book he makes two major contributions to the field of information retrieval: first, a new way to look at topical relevance that complements the two dominant models - the classical probabilistic model and the language-modeling approach - and explicitly combines documents, queries, and relevance in a single formalism; second, a new method for modeling exchangeable sequences of discrete random variables that makes no structural assumptions about the data and can also handle rare events. The book is thus of major interest to researchers and graduate students in information retrieval who specialize in relevance modeling, ranking algorithms, and language modeling.
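
    For orientation only - this sketches the classic relevance-model estimate from Lavrenko's earlier work with Croft, not the book's full formalism: given a query Q = q_1 ... q_k and a set M of smoothed document language models, a word distribution for the relevance class R can be estimated without relevance judgements as

        \[
        P(w \mid R) \;\approx\; \sum_{D \in M} P(w \mid D)\, P(D \mid Q),
        \qquad
        P(D \mid Q) \;\propto\; P(D) \prod_{i=1}^{k} P(q_i \mid D),
        \]

    after which documents are ranked by how close their own language models are to P(. | R), e.g. by cross entropy.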
  2. Concepts in Context : Proceedings of the Cologne Conference on Interoperability and Semantics in Knowledge Organization July 19th - 20th, 2010 (2011) 0.00
    0.0032375087 = product of:
      0.012950035 = sum of:
        0.012950035 = product of:
          0.02590007 = sum of:
            0.02590007 = weight(_text_:22 in 628) [ClassicSimilarity], result of:
              0.02590007 = score(doc=628,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.19345059 = fieldWeight in 628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=628)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22.02.2013 11:34:18
  3. Die Macht der Suchmaschinen (2007) 0.00
    0.0031676688 = product of:
      0.012670675 = sum of:
        0.012670675 = weight(_text_:data in 1813) [ClassicSimilarity], result of:
          0.012670675 = score(doc=1813,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.10480815 = fieldWeight in 1813, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1813)
      0.25 = coord(1/4)
    
    Content
    MARCEL MACHILL / MARKUS BEILER / MARTIN ZENKER: Suchmaschinenforschung. Überblick und Systematisierung eines interdisziplinären Forschungsfeldes
    TEIL 1: SUCHMASCHINENREGULIERUNG UND -ÖKONOMIE
    URS GASSER / JAMES THURMAN: Themen und Herausforderungen der Regulierung von Suchmaschinen
    NORBERT SCHNEIDER: Die Notwendigkeit der Suchmaschinenregulierung aus Sicht eines Regulierers
    WOLFGANG SCHULZ / THORSTEN HELD: Der Index auf dem Index? Selbstzensur und Zensur bei Suchmaschinen
    BORIS ROTENBERG: Towards Personalised Search: EU Data Protection Law and its Implications for Media Pluralism
    ELIZABETH VAN COUVERING: The Economy of Navigation: Search Engines, Search Optimisation and Search Results
    THEO RÖHLE: Machtkonzepte in der Suchmaschinenforschung
    TEIL 2: SUCHMASCHINEN UND JOURNALISMUS
    VINZENZ WYSS / GUIDO KEEL: Google als Trojanisches Pferd? Konsequenzen der Internet-Recherche von Journalisten für die journalistische Qualität
    NIC NEWMAN: Search Strategies and Activities of BBC News Interactive
    JÖRG SADROZINSKI: Suchmaschinen und öffentlich-rechtlicher Onlinejournalismus am Beispiel tagesschau.de
    HELMUT MARTIN-JUNG: Suchmaschinen und Qualitätsjournalismus
    PHILIP GRAF DÖNHOFF / CHRISTIAN BARTELS: Online-Recherche bei NETZEITUNG.DE
    SUSAN KEITH: Searching for News Headlines: Connections between Unresolved Hyperlinking Issues and a New Battle over Copyright Online
    AXEL BUNDENTHAL: Suchmaschinen als Herausforderung für Archive und Dokumentationsbereiche am Beispiel des ZDF
    BENJAMIN PETERS: The Search Engine Democracy: Metaphors and Muhammad
  4. Research and advanced technology for digital libraries : 10th European conference, ECDL 2006, Alicante, Spain, September 17 - 22, 2006 ; proceedings (2006) 0.00
    0.002590007 = product of:
      0.010360028 = sum of:
        0.010360028 = product of:
          0.020720055 = sum of:
            0.020720055 = weight(_text_:22 in 2428) [ClassicSimilarity], result of:
              0.020720055 = score(doc=2428,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.15476047 = fieldWeight in 2428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2428)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
