Search (17 results, page 1 of 1)

  • author_ss:"Mayr, P."
  1. Mayr, P.; Petras, V.: Cross-concordances : terminology mapping and its effectiveness for information retrieval (2008) 0.02
    0.015834149 = product of:
      0.03958537 = sum of:
        0.027959513 = weight(_text_:system in 2323) [ClassicSimilarity], result of:
          0.027959513 = score(doc=2323,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 2323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2323)
        0.011625858 = product of:
          0.034877572 = sum of:
            0.034877572 = weight(_text_:29 in 2323) [ClassicSimilarity], result of:
              0.034877572 = score(doc=2323,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 2323, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2323)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
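    The score breakdown above is Lucene's ClassicSimilarity "explain" output: tf = sqrt(freq), idf = 1 + ln((maxDocs + 1) / (docFreq + 1)), a per-term weight of queryWeight x fieldWeight, and coord factors that down-weight documents matching only part of the query. A minimal Python sketch, assuming exactly these ClassicSimilarity formulas (the helper names are ours, not Lucene's), reproduces the 0.015834149 above:

      import math

      MAX_DOCS   = 44218       # maxDocs from the explain tree above
      QUERY_NORM = 0.04251826  # queryNorm from the explain tree above

      def idf(doc_freq):
          # ClassicSimilarity: idf = 1 + ln((maxDocs + 1) / (docFreq + 1))
          return 1.0 + math.log((MAX_DOCS + 1) / (doc_freq + 1))

      def term_weight(freq, doc_freq, field_norm):
          # weight = queryWeight * fieldWeight for one query term
          query_weight = idf(doc_freq) * QUERY_NORM
          field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
          return query_weight * field_weight

      w_system = term_weight(2.0, 5152, 0.046875)  # ~0.027959513
      w_29     = term_weight(2.0, 3565, 0.046875)  # ~0.034877572

      # coord(matched/total) scales by the fraction of query clauses matched
      score = (w_system + w_29 * (1 / 3)) * (2 / 5)
      print(score)                                 # ~0.015834149

    The same arithmetic, with the coord fractions shown in each tree, accounts for every other score in this list.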
    
    Abstract
    The German Federal Ministry for Education and Research funded a major terminology mapping initiative, which concluded in 2007. The initiative's task was to organize, create and manage 'cross-concordances' between controlled vocabularies (thesauri, classification systems, subject heading lists), centred on the social sciences but quickly extending to other subject areas. 64 crosswalks with more than 500,000 relations were established. In the final phase of the project, a major evaluation effort was conducted to test and measure the effectiveness of the vocabulary mappings in an information-system environment. The paper reports on the cross-concordance work and the evaluation results.
    Date
    26.12.2011 13:33:29
  2. Mayr, P.; Walter, A.-K.: Abdeckung und Aktualität des Suchdienstes Google Scholar (2006) 0.01
    0.0107641 = product of:
      0.0538205 = sum of:
        0.0538205 = weight(_text_:index in 5131) [ClassicSimilarity], result of:
          0.0538205 = score(doc=5131,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.28967714 = fieldWeight in 5131, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=5131)
      0.2 = coord(1/5)
    
    Abstract
    This article is devoted to Google's new search service, Google Scholar. The search engine, which is intended to search exclusively scholarly documents, is described with its most important functions and then subjected to an empirical test. The study is based on three journal lists: journals from Thomson Scientific, Open Access journals from the DOAJ directory, and social-science journals indexed in the SOLIS database. Google Scholar's coverage of these journals was checked by querying the journal titles. The study shows deficits in the coverage and currency of the Google Scholar index. It also makes clear who the most important data suppliers for the new search service are and which scholarly information sources are represented in the index. Google Scholar's strengths are its simplicity, its search speed and, not least, the fact that it is free of charge. Despite visible potential (e.g. citation analysis), however, Google Scholar cannot currently replace searching in subject databases, owing to its insufficient subject coverage and lack of transparency.
  3. Mayr, P.; Schaer, P.; Mutschke, P.: A science model driven retrieval prototype (2011) 0.01
    0.009683615 = product of:
      0.04841807 = sum of:
        0.04841807 = weight(_text_:context in 649) [ClassicSimilarity], result of:
          0.04841807 = score(doc=649,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=649)
      0.2 = coord(1/5)
    
    Source
    Concepts in context: Proceedings of the Cologne Conference on Interoperability and Semantics in Knowledge Organization July 19th - 20th, 2010. Eds.: F. Boteram, W. Gödert u. J. Hubrich
  4. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: The OpenCitations Data Model (2020) 0.01
    0.009683615 = product of:
      0.04841807 = sum of:
        0.04841807 = weight(_text_:context in 38) [ClassicSimilarity], result of:
          0.04841807 = score(doc=38,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 38, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
      0.2 = coord(1/5)
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or context application. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
  5. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.01
    0.008970084 = product of:
      0.044850416 = sum of:
        0.044850416 = weight(_text_:index in 3752) [ClassicSimilarity], result of:
          0.044850416 = score(doc=3752,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.24139762 = fieldWeight in 3752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3752)
      0.2 = coord(1/5)
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
  6. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.01
    0.008970084 = product of:
      0.044850416 = sum of:
        0.044850416 = weight(_text_:index in 2580) [ClassicSimilarity], result of:
          0.044850416 = score(doc=2580,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.24139762 = fieldWeight in 2580, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2580)
      0.2 = coord(1/5)
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the deep web. In addition, we bring a new concept into the discussion, the academic invisible web (AIW). We define the academic invisible web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the invisible web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the invisible web. Findings: Bergman's size estimate of the invisible web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the academic invisible web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the academic invisible web.
  7. Mayr, P.; Mutschke, P.; Petras, V.: Reducing semantic complexity in distributed digital libraries : Treatment of term vagueness and document re-ranking (2008) 0.01
    0.008069678 = product of:
      0.040348392 = sum of:
        0.040348392 = weight(_text_:context in 1909) [ClassicSimilarity], result of:
          0.040348392 = score(doc=1909,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 1909, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1909)
      0.2 = coord(1/5)
    
    Footnote
    Contribution to a special issue "Digital libraries and the semantic web: context, applications and research".
  8. Mayr, P.; Zapilko, B.; Sure, Y.: Ein Mehr-Thesauri-Szenario auf Basis von SKOS und Crosskonkordanzen (2010) 0.01
    0.0055919024 = product of:
      0.027959513 = sum of:
        0.027959513 = weight(_text_:system in 3392) [ClassicSimilarity], result of:
          0.027959513 = score(doc=3392,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 3392, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3392)
      0.2 = coord(1/5)
    
    Abstract
    In August 2009 the W3C published SKOS ("Simple Knowledge Organization System") as a new standard for web-based controlled vocabularies. SKOS serves as a data model for offering controlled vocabularies on the web and for making them technically and semantically interoperable. In the long run, the heterogeneous landscape of indexing vocabularies can be unified via SKOS and, above all, the contents of classical databases (the domain of specialist information) can be made accessible to Semantic Web applications, for example as Linked Open Data (LOD), and linked more closely with one another. Vocabularies in SKOS format can take on a relevant function here by serving as a standardized bridge vocabulary and establishing semantic links between indexed, published data. The following case study sketches a scenario with three thematically related thesauri, which are converted into SKOS format and connected at the content level via cross-concordances from the KoMoHe project. The SKOS mapping properties provide standardized relations for this purpose that correspond to those of the cross-concordances. The thesauri involved in the case study are (a) TheSoz (Thesaurus Sozialwissenschaften, GESIS), (b) STW (Standard-Thesaurus Wirtschaft, ZBW) and (c) the IBLK-Thesaurus (SWP).
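    A minimal rdflib sketch of the mapping idea described above, assuming hypothetical concept URIs (the real TheSoz and STW identifiers differ); skos:exactMatch stands in for an equivalence relation taken from a cross-concordance:

      from rdflib import Graph, URIRef
      from rdflib.namespace import SKOS

      g = Graph()

      # Hypothetical concept URIs, for illustration only
      thesoz_migration = URIRef("http://example.org/thesoz/migration")
      stw_migration    = URIRef("http://example.org/stw/migration")

      # An equivalence relation from a cross-concordance, expressed as a
      # SKOS mapping property between concepts of two thesauri
      g.add((thesoz_migration, SKOS.exactMatch, stw_migration))

      print(g.serialize(format="turtle"))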
  9. Mayr, P.: Google Scholar als akademische Suchmaschine (2009) 0.00
    0.003727935 = product of:
      0.018639674 = sum of:
        0.018639674 = weight(_text_:system in 3023) [ClassicSimilarity], result of:
          0.018639674 = score(doc=3023,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.13919188 = fieldWeight in 3023, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=3023)
      0.2 = coord(1/5)
    
    Abstract
    The three search systems mentioned above (Google, GS and WoS) differ fundamentally in several respects and are therefore well suited for introducing the basic theme of this article. First, these search systems cover different search spaces, and they do so in very specific ways. While Google indexes freely accessible documents on the internet that are addressable via hyperlinks, the two academic search systems are considerably more selective in what they index. Alongside freely accessible electronic publication types on the internet, Google Scholar mainly covers scholarly documents obtained directly from academic publishers. The WoS, which is based on the various bibliographic databases and citation indexes of the former "Institute for Scientific Information" (ISI), selects content through a qualitative approach, in contrast to the purely automatic brute-force approach of the internet search engine. The WoS databases exclusively cover international journals that undergo controlled peer review. In total, about 12,000 journals are evaluated and made available through the database. As already mentioned, besides the delimitation of search spaces and document types, the accessibility and relevance of documents are of decisive importance to the user. The newer technological developments in web information retrieval (IR), as implemented by Google or GS, automatically exploit in particular freely accessible documents with all their text and link information. These methods succeed above all because they present result lists ranked by relevance, are simple and fast to search, and link directly to the full texts. The qualitative methods of traditional information providers (e.g. WoS), by contrast, show weaknesses on exactly these points (ranking, simplicity and full-text access), but are convincing above all through their stringency, in this case the selective inclusion of quality-checked documents in the system and the subject indexing of the documents (see Mayr and Petras, 2008).
  10. Mayr, P.; Petras, V.; Walter, A.-K.: Results from a German terminology mapping effort : intra- and interdisciplinary cross-concordances between controlled vocabularies (2007) 0.00
    0.003261943 = product of:
      0.016309716 = sum of:
        0.016309716 = weight(_text_:system in 542) [ClassicSimilarity], result of:
          0.016309716 = score(doc=542,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.1217929 = fieldWeight in 542, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=542)
      0.2 = coord(1/5)
    
    Abstract
    In the final phase of the project, a major evaluation effort is under way to test and measure the effectiveness of the vocabulary mappings in an information system environment. Actual user queries are tested in a distributed search environment, where several bibliographic databases with different controlled vocabularies are searched at the same time. Three query variations are compared to each other: a free-text search without focusing on using the controlled vocabulary or terminology mapping; a controlled vocabulary search, where terms from one vocabulary (a 'home' vocabulary thought to be familiar to the user of a particular database) are used to search all databases; and finally, a search, where controlled vocabulary terms are translated into the terms of the respective controlled vocabulary of the database. For evaluation purposes, types of cross-concordances are distinguished between intradisciplinary vocabularies (vocabularies within the social sciences) and interdisciplinary vocabularies (social sciences to other disciplines as well as other combinations). Simultaneously, an extensive quantitative analysis is conducted aimed at finding patterns in terminology mappings that can explain trends in the effectiveness of terminology mappings, particularly looking at overlapping terms, types of determined relations (equivalence, hierarchy etc.), size of participating vocabularies, etc. This project is the largest terminology mapping effort in Germany. The number and variety of controlled vocabularies targeted provide an optimal basis for insights and further research opportunities. To our knowledge, terminology mapping efforts have rarely been evaluated with stringent qualitative and quantitative measures. This research should contribute in this area. For the NKOS workshop, we plan to present an overview of the project and participating vocabularies, an introduction to the heterogeneity service and its application as well as some of the results and findings of the evaluation, which will be concluded in August.
  11. Mayr, P.: Die virtuelle Steinsuppe : kooperatives Verwalten von elektronischen Ressourcen mit Digilink (2007) 0.00
    0.0031002287 = product of:
      0.015501143 = sum of:
        0.015501143 = product of:
          0.04650343 = sum of:
            0.04650343 = weight(_text_:29 in 567) [ClassicSimilarity], result of:
              0.04650343 = score(doc=567,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.31092256 = fieldWeight in 567, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=567)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Wa(h)re Information: 29. Österreichischer Bibliothekartag Bregenz, 19.-23.9.2006. Hrsg.: Harald Weigel
  12. Mayr, P.; Tosques, F.: Webometrische Analysen mit Hilfe der Google Web APIs (2005) 0.00
    0.0027127003 = product of:
      0.013563501 = sum of:
        0.013563501 = product of:
          0.0406905 = sum of:
            0.0406905 = weight(_text_:29 in 3189) [ClassicSimilarity], result of:
              0.0406905 = score(doc=3189,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.27205724 = fieldWeight in 3189, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3189)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    12. 2.2005 18:29:36
  13. Daniel, F.; Maier, C.; Mayr, P.; Wirtz, H.-C.: Die Kunden dort bedienen, wo sie sind : DigiAuskunft besteht Bewährungsprobe / Seit Anfang 2006 in Betrieb (2006) 0.00
    0.0026882975 = product of:
      0.013441487 = sum of:
        0.013441487 = product of:
          0.04032446 = sum of:
            0.04032446 = weight(_text_:22 in 5991) [ClassicSimilarity], result of:
              0.04032446 = score(doc=5991,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.2708308 = fieldWeight in 5991, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5991)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    8. 7.2006 21:06:22
  14. Mayr, P.; Petras, V.: Building a Terminology Network for Search : the KoMoHe project (2008) 0.00
    0.0026882975 = product of:
      0.013441487 = sum of:
        0.013441487 = product of:
          0.04032446 = sum of:
            0.04032446 = weight(_text_:22 in 2618) [ClassicSimilarity], result of:
              0.04032446 = score(doc=2618,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.2708308 = fieldWeight in 2618, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2618)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  15. Mayr, P.: Bradfordizing als Re-Ranking-Ansatz in Literaturinformationssystemen (2011) 0.00
    0.0023251716 = product of:
      0.011625858 = sum of:
        0.011625858 = product of:
          0.034877572 = sum of:
            0.034877572 = weight(_text_:29 in 4292) [ClassicSimilarity], result of:
              0.034877572 = score(doc=4292,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 4292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4292)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    9. 2.2011 17:47:29
  16. Reichert, S.; Mayr, P.: Untersuchung von Relevanzeigenschaften in einem kontrollierten Eyetracking-Experiment (2012) 0.00
    0.0023042548 = product of:
      0.011521274 = sum of:
        0.011521274 = product of:
          0.03456382 = sum of:
            0.03456382 = weight(_text_:22 in 328) [ClassicSimilarity], result of:
              0.03456382 = score(doc=328,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23214069 = fieldWeight in 328, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=328)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    22. 7.2012 19:25:54
  17. Lauser, B.; Johannsen, G.; Caracciolo, C.; Hage, W.R. van; Keizer, J.; Mayr, P.: Comparing human and automatic thesaurus mapping approaches in the agricultural domain (2008) 0.00
    0.0019202124 = product of:
      0.009601062 = sum of:
        0.009601062 = product of:
          0.028803186 = sum of:
            0.028803186 = weight(_text_:22 in 2627) [ClassicSimilarity], result of:
              0.028803186 = score(doc=2627,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19345059 = fieldWeight in 2627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2627)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas