Search (12 results, page 1 of 1)

  • × author_ss:"Mayr, P."
  • × language_ss:"e"
  1. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.00
    0.0026555075 = product of:
      0.029210582 = sum of:
        0.029210582 = product of:
          0.058421165 = sum of:
            0.058421165 = weight(_text_:web in 3752) [ClassicSimilarity], result of:
              0.058421165 = score(doc=3752,freq=20.0), product of:
                0.10247317 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.031399675 = queryNorm
                0.5701118 = fieldWeight in 3752, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3752)
          0.5 = coord(1/2)
      0.09090909 = coord(1/11)
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
    Content
    Relates to: Bergman, M.K.: The Deep Web: surfacing hidden value. In: Journal of Electronic Publishing. 7(2001) no.1, pp.xxx-xxx. [See: http://www.press.umich.edu/jep/07-01/bergman.html].
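    The indented tree above each result is Lucene "explain" output from ClassicSimilarity (TF-IDF). As a sketch, the leaf score for weight(_text_:web in 3752) can be reproduced from the constants shown in the tree; the function name below is ours, the formulas are Lucene's classic ones:

    ```python
    import math

    # Sketch of Lucene ClassicSimilarity (TF-IDF) scoring, reproducing the
    # explain tree for weight(_text_:web in doc 3752) above. Constants are
    # copied from that tree; the classic formulas are:
    #   tf          = sqrt(termFreq)
    #   idf         = 1 + ln(maxDocs / (docFreq + 1))
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm
    #   score       = queryWeight * fieldWeight

    def classic_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))
        query_weight = idf * query_norm
        field_weight = tf * idf * field_norm
        return query_weight * field_weight

    score = classic_score(freq=20.0, doc_freq=4597, max_docs=44218,
                          query_norm=0.031399675, field_norm=0.0390625)
    print(score)  # ≈ 0.058421165, as in the explain output
    ```

    The same function reproduces every other weight(_text_:...) leaf in this listing, since all of them use ClassicSimilarity with the same queryNorm.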
  2. Lauser, B.; Johannsen, G.; Caracciolo, C.; Hage, W.R. van; Keizer, J.; Mayr, P.: Comparing human and automatic thesaurus mapping approaches in the agricultural domain (2008) 0.00
    0.0026463594 = product of:
      0.014554976 = sum of:
        0.009237197 = product of:
          0.018474394 = sum of:
            0.018474394 = weight(_text_:web in 2627) [ClassicSimilarity], result of:
              0.018474394 = score(doc=2627,freq=2.0), product of:
                0.10247317 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.031399675 = queryNorm
                0.18028519 = fieldWeight in 2627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2627)
          0.5 = coord(1/2)
        0.005317778 = product of:
          0.021271111 = sum of:
            0.021271111 = weight(_text_:22 in 2627) [ClassicSimilarity], result of:
              0.021271111 = score(doc=2627,freq=2.0), product of:
                0.10995631 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031399675 = queryNorm
                0.19345059 = fieldWeight in 2627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2627)
          0.25 = coord(1/4)
      0.18181819 = coord(2/11)
    
    Abstract
    Knowledge organization systems (KOS), like thesauri and other controlled vocabularies, are used to provide subject access to information systems across the web. Due to the heterogeneity of these systems, mapping between vocabularies becomes crucial for retrieving relevant information. However, mapping thesauri is a laborious task, and thus considerable effort is being put into automating the mapping process. This paper examines two mapping approaches involving the agricultural thesaurus AGROVOC, one machine-created and one human-created. We address the basic question "What are the pros and cons of human and automatic mapping and how can they complement each other?" By pointing out the difficulties in specific cases or groups of cases and grouping the sample into simple and difficult types of mappings, we show the limitations of current automatic methods and come up with some basic recommendations on which approach to use when.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
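    The explain tree for this result also shows how Lucene's BooleanQuery combines two matching clauses: each clause score is scaled by a coord(matching/total) factor at its level, the clause scores are summed, and the sum is scaled by the top-level coord. A minimal sketch, with all constants copied from the tree above (the query evidently has 11 top-level clauses, of which 2 matched this document):

    ```python
    # Reproduce the combined score 0.0026463594 for result 2 from its
    # two clause scores, using Lucene's coord(overlap, maxOverlap) factor.

    def coord(overlap, max_overlap):
        return overlap / max_overlap

    web_clause = 0.018474394 * coord(1, 2)  # weight(_text_:web) -> 0.009237197
    t22_clause = 0.021271111 * coord(1, 4)  # weight(_text_:22)  -> 0.005317778
    total = (web_clause + t22_clause) * coord(2, 11)
    print(total)  # ≈ 0.0026463594, the score shown next to the title
    ```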
  3. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.00
    0.0025192357 = product of:
      0.02771159 = sum of:
        0.02771159 = product of:
          0.05542318 = sum of:
            0.05542318 = weight(_text_:web in 2580) [ClassicSimilarity], result of:
              0.05542318 = score(doc=2580,freq=18.0), product of:
                0.10247317 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.031399675 = queryNorm
                0.5408555 = fieldWeight in 2580, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2580)
          0.5 = coord(1/2)
      0.09090909 = coord(1/11)
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the deep web. In addition, we bring a new concept into the discussion, the academic invisible web (AIW). We define the academic invisible web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the invisible web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the invisible web. Findings: Bergman's size estimate of the invisible web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the academic invisible web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the academic invisible web.
  4. Carevic, Z.; Krichel, T.; Mayr, P.: Assessing a human mediated current awareness service (2015) 0.00
    0.0022460679 = product of:
      0.024706746 = sum of:
        0.024706746 = product of:
          0.09882698 = sum of:
            0.09882698 = weight(_text_:z in 2992) [ClassicSimilarity], result of:
              0.09882698 = score(doc=2992,freq=2.0), product of:
                0.1675899 = queryWeight, product of:
                  5.337313 = idf(docFreq=577, maxDocs=44218)
                  0.031399675 = queryNorm
                0.58969533 = fieldWeight in 2992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.337313 = idf(docFreq=577, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2992)
          0.25 = coord(1/4)
      0.09090909 = coord(1/11)
    
  5. Mayr, P.; Mutschke, P.; Petras, V.; Schaer, P.; Sure, Y.: Applying science models for search (2010) 0.00
    0.0017527736 = product of:
      0.01928051 = sum of:
        0.01928051 = weight(_text_:und in 4663) [ClassicSimilarity], result of:
          0.01928051 = score(doc=4663,freq=4.0), product of:
            0.069593206 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.031399675 = queryNorm
            0.27704588 = fieldWeight in 4663, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4663)
      0.09090909 = coord(1/11)
    
    Source
    Information und Wissen: global, sozial und frei? Proceedings of the 12th International Symposium for Information Science (ISI 2011), Hildesheim, 9-11 March 2011. Ed. by J. Griesbaum, T. Mandl and C. Womser-Hacker
  6. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: ¬The OpenCitations Data Model (2020) 0.00
    0.0017453778 = product of:
      0.019199155 = sum of:
        0.019199155 = product of:
          0.03839831 = sum of:
            0.03839831 = weight(_text_:web in 38) [ClassicSimilarity], result of:
              0.03839831 = score(doc=38,freq=6.0), product of:
                0.10247317 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.031399675 = queryNorm
                0.37471575 = fieldWeight in 38, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=38)
          0.5 = coord(1/2)
      0.09090909 = coord(1/11)
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or application context. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
    Content
    Published in: The Semantic Web - ISWC 2020, 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part II. See: DOI: 10.1007/978-3-030-62466-8_28.
  7. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on million short (2016) 0.00
    0.0014544814 = product of:
      0.015999295 = sum of:
        0.015999295 = product of:
          0.03199859 = sum of:
            0.03199859 = weight(_text_:web in 3144) [ClassicSimilarity], result of:
              0.03199859 = score(doc=3144,freq=6.0), product of:
                0.10247317 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.031399675 = queryNorm
                0.3122631 = fieldWeight in 3144, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3144)
          0.5 = coord(1/2)
      0.09090909 = coord(1/11)
    
    Abstract
    Users of web search engines are known to focus mostly on the top-ranked results of the search engine result page. While many studies support this well-known information-seeking pattern, only a few studies concentrate on the question of what users miss by neglecting the lower-ranked results. To learn more about the relevance distributions in the so-called long tail, we conducted a relevance assessment study with the Million Short long-tail web search engine. While we see a clear difference in content between the head and the tail of the search engine result list, we find no statistically significant differences in the binary relevance judgments and only weakly significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for the diversification of web search engine result lists, but more evaluation is needed to clearly describe the differences.
  8. Mayr, P.; Mutschke, P.; Petras, V.: Reducing semantic complexity in distributed digital libraries : Treatment of term vagueness and document re-ranking (2008) 0.00
    0.001187579 = product of:
      0.013063369 = sum of:
        0.013063369 = product of:
          0.026126739 = sum of:
            0.026126739 = weight(_text_:web in 1909) [ClassicSimilarity], result of:
              0.026126739 = score(doc=1909,freq=4.0), product of:
                0.10247317 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.031399675 = queryNorm
                0.25496176 = fieldWeight in 1909, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1909)
          0.5 = coord(1/2)
      0.09090909 = coord(1/11)
    
    Footnote
    Contribution to a special issue "Digital libraries and the semantic web: context, applications and research".
    Theme
    Semantic Web
  9. Mayr, P.; Schaer, P.; Mutschke, P.: ¬A science model driven retrieval prototype (2011) 0.00
    9.295487E-4 = product of:
      0.010225035 = sum of:
        0.010225035 = weight(_text_:und in 649) [ClassicSimilarity], result of:
          0.010225035 = score(doc=649,freq=2.0), product of:
            0.069593206 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.031399675 = queryNorm
            0.14692576 = fieldWeight in 649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=649)
      0.09090909 = coord(1/11)
    
    Series
    Bibliotheca Academica - Reihe Informations- und Bibliothekswissenschaften; Bd. 1
  10. Mayr, P.; Petras, V.: Building a Terminology Network for Search : the KoMoHe project (2008) 0.00
    6.768081E-4 = product of:
      0.0074448893 = sum of:
        0.0074448893 = product of:
          0.029779557 = sum of:
            0.029779557 = weight(_text_:22 in 2618) [ClassicSimilarity], result of:
              0.029779557 = score(doc=2618,freq=2.0), product of:
                0.10995631 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031399675 = queryNorm
                0.2708308 = fieldWeight in 2618, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2618)
          0.25 = coord(1/4)
      0.09090909 = coord(1/11)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  11. Mayr, P.; Petras, V.; Walter, A.-K.: Results from a German terminology mapping effort : intra- and interdisciplinary cross-concordances between controlled vocabularies (2007) 0.00
    5.8782165E-4 = product of:
      0.006466038 = sum of:
        0.006466038 = product of:
          0.012932076 = sum of:
            0.012932076 = weight(_text_:web in 542) [ClassicSimilarity], result of:
              0.012932076 = score(doc=542,freq=2.0), product of:
                0.10247317 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.031399675 = queryNorm
                0.12619963 = fieldWeight in 542, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=542)
          0.5 = coord(1/2)
      0.09090909 = coord(1/11)
    
    Abstract
    In 2004, the German Federal Ministry for Education and Research funded a major terminology mapping initiative at the GESIS Social Science Information Centre in Bonn (GESIS-IZ), which will find its conclusion this year. The task of this terminology mapping initiative was to organize, create and manage 'cross-concordances' between major controlled vocabularies (thesauri, classification systems, subject heading lists) centred around the social sciences but quickly extending to other subject areas. Cross-concordances are intellectually (manually) created crosswalks that determine equivalence, hierarchy, and association relations between terms from two controlled vocabularies. Most vocabularies have been related bilaterally, that is, there is a cross-concordance relating terms from vocabulary A to vocabulary B as well as a cross-concordance relating terms from vocabulary B to vocabulary A (bilateral relations are not necessarily symmetrical). By August 2007, 24 controlled vocabularies from 11 disciplines will have been connected, with vocabulary sizes ranging from 2,000 to 17,000 terms per vocabulary. To date, more than 260,000 relations have been generated. A database including all vocabularies and cross-concordances was built, and a 'heterogeneity service' was developed: a web service that makes the cross-concordances available to other applications. Many cross-concordances are already implemented and utilized for the German Social Science Information Portal Sowiport (www.sowiport.de), which searches bibliographical and other information resources (incl. 13 databases with 10 different vocabularies and ca. 2.5 million references).
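    The bilateral cross-concordances described in the abstract can be pictured as a simple data structure: directed, typed term mappings from one vocabulary to another. A minimal sketch, with all vocabulary and term names purely illustrative (not taken from the KoMoHe project):

    ```python
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Mapping:
        source_term: str
        target_term: str
        relation: str  # "equivalence", "hierarchy", or "association"

    @dataclass
    class CrossConcordance:
        """Directed crosswalk from one controlled vocabulary to another."""
        source_vocab: str
        target_vocab: str
        mappings: list = field(default_factory=list)

        def add(self, source_term, target_term, relation):
            self.mappings.append(Mapping(source_term, target_term, relation))

        def lookup(self, source_term):
            """All mappings for a source term (used e.g. for query translation)."""
            return [m for m in self.mappings if m.source_term == source_term]

    # Bilateral relation: A->B and B->A are two separate concordances and
    # are not necessarily symmetrical.
    a_to_b = CrossConcordance("VocabularyA", "VocabularyB")
    a_to_b.add("labour market", "employment", "equivalence")
    a_to_b.add("labour market", "economy", "hierarchy")
    print([m.target_term for m in a_to_b.lookup("labour market")])
    # → ['employment', 'economy']
    ```

    A 'heterogeneity service' in the abstract's sense would expose such lookups as a web service, so that a search portal can translate query terms between the vocabularies of its underlying databases.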
  12. Mayr, P.; Petras, V.: Cross-concordances : terminology mapping and its effectiveness for information retrieval (2008) 0.00
    5.853872E-4 = product of:
      0.0064392593 = sum of:
        0.0064392593 = product of:
          0.025757037 = sum of:
            0.025757037 = weight(_text_:29 in 2323) [ClassicSimilarity], result of:
              0.025757037 = score(doc=2323,freq=2.0), product of:
                0.11045424 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.031399675 = queryNorm
                0.23319192 = fieldWeight in 2323, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2323)
          0.25 = coord(1/4)
      0.09090909 = coord(1/11)
    
    Date
    26.12.2011 13:33:29