Search (22 results, page 1 of 2)

  • author_ss:"Mayr, P."
  1. Mayr, P.; Petras, V.: Building a Terminology Network for Search : the KoMoHe project (2008) 0.04
    0.041292455 = product of:
      0.14452359 = sum of:
        0.02144774 = weight(_text_:library in 2618) [ClassicSimilarity], result of:
          0.02144774 = score(doc=2618,freq=2.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.20335563 = fieldWeight in 2618, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2618)
        0.12307586 = sum of:
          0.08503368 = weight(_text_:applications in 2618) [ClassicSimilarity], result of:
            0.08503368 = score(doc=2618,freq=4.0), product of:
              0.17659263 = queryWeight, product of:
                4.4025097 = idf(docFreq=1471, maxDocs=44218)
                0.04011181 = queryNorm
              0.4815245 = fieldWeight in 2618, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.4025097 = idf(docFreq=1471, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2618)
          0.038042177 = weight(_text_:22 in 2618) [ClassicSimilarity], result of:
            0.038042177 = score(doc=2618,freq=2.0), product of:
              0.14046472 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04011181 = queryNorm
              0.2708308 = fieldWeight in 2618, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2618)
      0.2857143 = coord(2/7)
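The explain tree above can be reproduced outside the engine. The sketch below is a reading of Lucene's ClassicSimilarity (TF-IDF) formulas, with the constants (docFreq, maxDocs, queryNorm, fieldNorm) taken directly from the explain output rather than computed from the index:

```python
import math

def idf(doc_freq, max_docs):
    # Inverse document frequency with ClassicSimilarity's +1 smoothing.
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # Term frequency dampened by a square root.
    return math.sqrt(freq)

def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    # weight = queryWeight * fieldWeight
    #        = (idf * queryNorm) * (tf * idf * fieldNorm)
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

# Values from the 'library' clause of result 1 (doc 2618):
w = term_weight(freq=2.0, doc_freq=8668, max_docs=44218,
                query_norm=0.04011181, field_norm=0.0546875)
print(w)  # ≈ 0.02144774, matching the explain tree
```

The final score of the entry is then the sum of the matched clause weights scaled by coord(2/7), i.e. 0.14452359 * 2/7 ≈ 0.041292455.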
    
    Abstract
    The paper reports on results of the GESIS-IZ project "Competence Center Modeling and Treatment of Semantic Heterogeneity" (KoMoHe). KoMoHe supervised a terminology mapping effort in which 'cross-concordances' between major controlled vocabularies were organized, created and managed. In this paper we describe the establishment and implementation of cross-concordances for search in a digital library (DL).
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  2. Lauser, B.; Johannsen, G.; Caracciolo, C.; Hage, W.R. van; Keizer, J.; Mayr, P.: Comparing human and automatic thesaurus mapping approaches in the agricultural domain (2008) 0.04
    0.03547405 = product of:
      0.12415917 = sum of:
        0.036247853 = weight(_text_:systems in 2627) [ClassicSimilarity], result of:
          0.036247853 = score(doc=2627,freq=6.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.29405114 = fieldWeight in 2627, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2627)
        0.08791132 = sum of:
          0.060738344 = weight(_text_:applications in 2627) [ClassicSimilarity], result of:
            0.060738344 = score(doc=2627,freq=4.0), product of:
              0.17659263 = queryWeight, product of:
                4.4025097 = idf(docFreq=1471, maxDocs=44218)
                0.04011181 = queryNorm
              0.34394607 = fieldWeight in 2627, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.4025097 = idf(docFreq=1471, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2627)
          0.027172983 = weight(_text_:22 in 2627) [ClassicSimilarity], result of:
            0.027172983 = score(doc=2627,freq=2.0), product of:
              0.14046472 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04011181 = queryNorm
              0.19345059 = fieldWeight in 2627, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2627)
      0.2857143 = coord(2/7)
    
    Abstract
    Knowledge organization systems (KOS), like thesauri and other controlled vocabularies, are used to provide subject access to information systems across the web. Due to the heterogeneity of these systems, mapping between vocabularies becomes crucial for retrieving relevant information. However, mapping thesauri is a laborious task, and thus considerable effort is being made to automate the mapping process. This paper examines two mapping approaches involving the agricultural thesaurus AGROVOC, one machine-created and one human-created. We address the basic question "What are the pros and cons of human and automatic mapping, and how can they complement each other?" By pointing out the difficulties in specific cases or groups of cases and grouping the sample into simple and difficult types of mappings, we show the limitations of current automatic methods and arrive at some basic recommendations on which approach to use when.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  3. Mayr, P.; Petras, V.: Cross-concordances : terminology mapping and its effectiveness for information retrieval (2008) 0.03
    0.025692347 = product of:
      0.05994881 = sum of:
        0.02511325 = weight(_text_:systems in 2323) [ClassicSimilarity], result of:
          0.02511325 = score(doc=2323,freq=2.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.2037246 = fieldWeight in 2323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=2323)
        0.016451785 = product of:
          0.03290357 = sum of:
            0.03290357 = weight(_text_:29 in 2323) [ClassicSimilarity], result of:
              0.03290357 = score(doc=2323,freq=2.0), product of:
                0.14110081 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04011181 = queryNorm
                0.23319192 = fieldWeight in 2323, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2323)
          0.5 = coord(1/2)
        0.018383777 = weight(_text_:library in 2323) [ClassicSimilarity], result of:
          0.018383777 = score(doc=2323,freq=2.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.17430481 = fieldWeight in 2323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=2323)
      0.42857143 = coord(3/7)
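The outer structure of each tree follows the same pattern: the matched clause weights are summed and scaled by the coordination factor, where coord(m/n) = m/n rewards documents matching more of the n query clauses. A minimal sketch using the values of this entry (doc 2323):

```python
# Clause weights from the explain tree above: systems, 29, library.
clause_weights = [0.02511325, 0.016451785, 0.018383777]

matched, total = 3, 7  # three of seven query clauses matched
score = sum(clause_weights) * (matched / total)
print(score)  # ≈ 0.025692347, the entry's final score
```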
    
    Abstract
    The German Federal Ministry for Education and Research funded a major terminology mapping initiative, which found its conclusion in 2007. The task of this terminology mapping initiative was to organize, create and manage 'cross-concordances' between controlled vocabularies (thesauri, classification systems, subject heading lists) centred around the social sciences but quickly extending to other subject areas. 64 crosswalks with more than 500,000 relations were established. In the final phase of the project, a major evaluation effort to test and measure the effectiveness of the vocabulary mappings in an information system environment was conducted. The paper reports on the cross-concordance work and evaluation results.
    Content
    Paper presented at: World Library and Information Congress: 74th IFLA General Conference and Council, 10-14 August 2008, Québec, Canada.
    Date
    26.12.2011 13:33:29
  4. Mayr, P.; Petras, V.; Walter, A.-K.: Results from a German terminology mapping effort : intra- and interdisciplinary cross-concordances between controlled vocabularies (2007) 0.02
    0.020578183 = product of:
      0.04801576 = sum of:
        0.025373496 = weight(_text_:systems in 542) [ClassicSimilarity], result of:
          0.025373496 = score(doc=542,freq=6.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.20583579 = fieldWeight in 542, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.02734375 = fieldNorm(doc=542)
        0.00761029 = product of:
          0.01522058 = sum of:
            0.01522058 = weight(_text_:science in 542) [ClassicSimilarity], result of:
              0.01522058 = score(doc=542,freq=4.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.1440534 = fieldWeight in 542, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=542)
          0.5 = coord(1/2)
        0.015031973 = product of:
          0.030063946 = sum of:
            0.030063946 = weight(_text_:applications in 542) [ClassicSimilarity], result of:
              0.030063946 = score(doc=542,freq=2.0), product of:
                0.17659263 = queryWeight, product of:
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.04011181 = queryNorm
                0.17024462 = fieldWeight in 542, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=542)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    In 2004, the German Federal Ministry for Education and Research funded a major terminology mapping initiative at the GESIS Social Science Information Centre in Bonn (GESIS-IZ), which will find its conclusion this year. The task of this terminology mapping initiative was to organize, create and manage 'cross-concordances' between major controlled vocabularies (thesauri, classification systems, subject heading lists) centred around the social sciences but quickly extending to other subject areas. Cross-concordances are intellectually (manually) created crosswalks that determine equivalence, hierarchy, and association relations between terms from two controlled vocabularies. Most vocabularies have been related bilaterally, that is, there is a cross-concordance relating terms from vocabulary A to vocabulary B as well as a cross-concordance relating terms from vocabulary B to vocabulary A (bilateral relations are not necessarily symmetrical). By August 2007, 24 controlled vocabularies from 11 disciplines will have been connected, with vocabulary sizes ranging from 2,000 to 17,000 terms per vocabulary. To date, more than 260,000 relations have been generated. A database including all vocabularies and cross-concordances was built, and a 'heterogeneity service' was developed - a web service that makes the cross-concordances available to other applications. Many cross-concordances are already implemented and utilized for the German Social Science Information Portal Sowiport (www.sowiport.de), which searches bibliographical and other information resources (incl. 13 databases with 10 different vocabularies and ca. 2.5 million references).
    Content
    Presentation given at "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop, Workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007".
  5. Mayr, P.; Mutschke, P.; Petras, V.: Reducing semantic complexity in distributed digital libraries : Treatment of term vagueness and document re-ranking (2008) 0.02
    0.019063551 = product of:
      0.044481616 = sum of:
        0.0076875538 = product of:
          0.0153751075 = sum of:
            0.0153751075 = weight(_text_:science in 1909) [ClassicSimilarity], result of:
              0.0153751075 = score(doc=1909,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.1455159 = fieldWeight in 1909, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1909)
          0.5 = coord(1/2)
        0.015319815 = weight(_text_:library in 1909) [ClassicSimilarity], result of:
          0.015319815 = score(doc=1909,freq=2.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.14525402 = fieldWeight in 1909, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1909)
        0.021474248 = product of:
          0.042948496 = sum of:
            0.042948496 = weight(_text_:applications in 1909) [ClassicSimilarity], result of:
              0.042948496 = score(doc=1909,freq=2.0), product of:
                0.17659263 = queryWeight, product of:
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.04011181 = queryNorm
                0.2432066 = fieldWeight in 1909, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1909)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    Purpose - The general science portal "vascoda" merges structured, high-quality information collections from more than 40 providers on the basis of search engine technology (FAST) and a concept which treats semantic heterogeneity between different controlled vocabularies. First experiences with the portal show some weaknesses of this approach which appear in most metadata-driven Digital Libraries (DLs) or subject-specific portals. The purpose of the paper is to propose models to reduce the semantic complexity in heterogeneous DLs. The aim is to introduce value-added services (treatment of term vagueness and document re-ranking) that gain a certain quality in DLs if they are combined with heterogeneity components established in the project "Competence Center Modeling and Treatment of Semantic Heterogeneity". Design/methodology/approach - Two methods, which are derived from scientometrics and network analysis, will be implemented with the objective of re-ranking result sets by the following structural properties: the ranking of the results by core journals (so-called Bradfordizing) and ranking by centrality of authors in co-authorship networks. Findings - The methods, which will be implemented, focus on the query and on the result side of a search and are designed to positively influence each other. Conceptually, they will improve the search quality and guarantee that the most relevant documents in result sets will be ranked higher. Originality/value - The central impact of the paper focuses on the integration of three structural value-adding methods, which aim at reducing the semantic complexity represented in distributed DLs at several stages in the information retrieval process: query construction, search and ranking and re-ranking.
    Footnote
    Contribution to a special issue "Digital libraries and the semantic web: context, applications and research".
    Source
    Library review. 57(2008) no.3, S.213-224
  6. Mayr, P.; Schaer, P.; Mutschke, P.: ¬A science model driven retrieval prototype (2011) 0.01
    0.011740438 = product of:
      0.04109153 = sum of:
        0.02511325 = weight(_text_:systems in 649) [ClassicSimilarity], result of:
          0.02511325 = score(doc=649,freq=2.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.2037246 = fieldWeight in 649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=649)
        0.015978282 = product of:
          0.031956565 = sum of:
            0.031956565 = weight(_text_:science in 649) [ClassicSimilarity], result of:
              0.031956565 = score(doc=649,freq=6.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.30244917 = fieldWeight in 649, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=649)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    This paper is about a better understanding of the structure and dynamics of science and the usage of these insights for compensating the typical problems that arise in metadata-driven Digital Libraries. Three science model driven retrieval services are presented: co-word analysis based query expansion, re-ranking via Bradfordizing and author centrality. The services are evaluated with relevance assessments, from which two important implications emerge: (1) precision values of the retrieval services are the same or better than the tf-idf retrieval baseline and (2) each service retrieved a disjoint set of documents. The different services each favor quite different - but still relevant - documents than pure term-frequency based rankings. The proposed models and derived retrieval services therefore open up new viewpoints on the scientific knowledge space and provide an alternative framework to structure scholarly information systems.
  7. Mayr, P.; Umstätter, W.: ¬Eine bibliometrische Zeitschriftenanalyse mit Jol Scientrometrics und NfD bzw. IWP (2008) 0.01
    0.009202948 = product of:
      0.032210317 = sum of:
        0.010762575 = product of:
          0.02152515 = sum of:
            0.02152515 = weight(_text_:science in 2302) [ClassicSimilarity], result of:
              0.02152515 = score(doc=2302,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.20372227 = fieldWeight in 2302, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2302)
          0.5 = coord(1/2)
        0.02144774 = weight(_text_:library in 2302) [ClassicSimilarity], result of:
          0.02144774 = score(doc=2302,freq=2.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.20335563 = fieldWeight in 2302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2302)
      0.2857143 = coord(2/7)
    
    Abstract
    The study analyzes 3,889 records indexed in the database Library and Information Science Abstracts (LISA) for the research field of informetrics in the period 1976-2004, documenting the growth of this field. Using a Bradford distribution (power law), the study identifies the core journals in this field and confirms, on the basis of this LISA dataset, that the launch of a new journal, the "Journal of Informetrics" (JoI), in 2007 came at about the right time. In relation to this, the development of the journal Scientometrics is examined, as well as that of "Nachrichten für Dokumentation" (NfD) and its successor "Information - Wissenschaft und Praxis" (IWP).
  8. Momeni, F.; Mayr, P.: Analyzing the research output presented at European Networked Knowledge Organization Systems workshops (2000-2015) (2016) 0.01
    0.005178265 = product of:
      0.036247853 = sum of:
        0.036247853 = weight(_text_:systems in 3106) [ClassicSimilarity], result of:
          0.036247853 = score(doc=3106,freq=6.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.29405114 = fieldWeight in 3106, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3106)
      0.14285715 = coord(1/7)
    
    Abstract
    In this paper we analyze a major part of the research output of the Networked Knowledge Organization Systems (NKOS) community in the period 2000 to 2015 from a network analytical perspective. We focus on the paper output presented at the European NKOS workshops in the last 15 years. Our open dataset, the "NKOS bibliography", includes 14 workshop agendas (ECDL 2000-2010, TPDL 2011-2015) and 4 special issues on NKOS (2001, 2004, 2006 and 2015), which cover 171 papers with 218 distinct authors in total. A focus of the analysis is the visualization of co-authorship networks in this interdisciplinary field. We used standard network analytic measures like degree and betweenness centrality to describe the co-authorship distribution in our NKOS dataset. We can see in our dataset that 15% (with degree=0) of authors had no co-authorship with others and 53% of them had a maximum of 3 cooperations with other authors. 32% had at least 4 co-authors for all of their papers. The NKOS co-author network in the "NKOS bibliography" is a typical co-authorship network with one relatively large component, many smaller components and many isolated co-authorships or triples.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  9. Mayr, P.; Walter, A.-K.: Mapping Knowledge Organization Systems (2008) 0.01
    0.0050736424 = product of:
      0.035515495 = sum of:
        0.035515495 = weight(_text_:systems in 1676) [ClassicSimilarity], result of:
          0.035515495 = score(doc=1676,freq=4.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.28811008 = fieldWeight in 1676, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=1676)
      0.14285715 = coord(1/7)
    
    Abstract
    The networking of information systems and databases in the field of scientific subject information has so far largely neglected the aspect of compatibility and concordance between controlled vocabularies (semantic heterogeneity). Yet precisely for subject access to collections indexed with heterogeneous vocabularies, the semantic cross-links (mappings / cross-concordances) between the underlying Knowledge Organization Systems (KOS) of the databases play a decisive role for the user. The paper presents use cases and examples of cross-concordances (CK) in the project "Kompetenznetzwerk Modellbildung und Heterogenitätsbehandlung" (KoMoHe) as well as the network of terminology crosswalks created to date. The cross-concordances created at the IZ are to be made available in the future via a terminology service implemented as a web service, which is presented in the paper by way of example.
  10. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.00
    0.0037906717 = product of:
      0.0265347 = sum of:
        0.0265347 = weight(_text_:library in 2580) [ClassicSimilarity], result of:
          0.0265347 = score(doc=2580,freq=6.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.25158736 = fieldWeight in 2580, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2580)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the deep web. In addition, we bring a new concept into the discussion, the academic invisible web (AIW). We define the academic invisible web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the invisible web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the invisible web. Findings: Bergman's size estimate of the invisible web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the academic invisible web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the academic invisible web.
    Source
    Library hi tech. 24(2006) no.4, S.529-539
  11. Mayr, P.: ¬Die virtuelle Steinsuppe : kooperatives Verwalten von elektronischen Ressourcen mit Digilink (2007) 0.00
    0.0031336735 = product of:
      0.021935713 = sum of:
        0.021935713 = product of:
          0.043871425 = sum of:
            0.043871425 = weight(_text_:29 in 567) [ClassicSimilarity], result of:
              0.043871425 = score(doc=567,freq=2.0), product of:
                0.14110081 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04011181 = queryNorm
                0.31092256 = fieldWeight in 567, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=567)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Wa(h)re Information: 29. Österreichischer Bibliothekartag Bregenz, 19.-23.9.2006. Hrsg.: Harald Weigel
  12. Carevic, Z.; Krichel, T.; Mayr, P.: Assessing a human mediated current awareness service (2015) 0.00
    0.0031062409 = product of:
      0.021743685 = sum of:
        0.021743685 = product of:
          0.04348737 = sum of:
            0.04348737 = weight(_text_:science in 2992) [ClassicSimilarity], result of:
              0.04348737 = score(doc=2992,freq=4.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.41158113 = fieldWeight in 2992, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2992)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl u. C. Wolff
  13. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.00
    0.0030950701 = product of:
      0.02166549 = sum of:
        0.02166549 = weight(_text_:library in 3752) [ClassicSimilarity], result of:
          0.02166549 = score(doc=3752,freq=4.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.2054202 = fieldWeight in 3752, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3752)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
  14. Mayr, P.; Tosques, F.: Webometrische Analysen mit Hilfe der Google Web APIs (2005) 0.00
    0.002741964 = product of:
      0.019193748 = sum of:
        0.019193748 = product of:
          0.038387496 = sum of:
            0.038387496 = weight(_text_:29 in 3189) [ClassicSimilarity], result of:
              0.038387496 = score(doc=3189,freq=2.0), product of:
                0.14110081 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04011181 = queryNorm
                0.27205724 = fieldWeight in 3189, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3189)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    12. 2.2005 18:29:36
  15. Daniel, F.; Maier, C.; Mayr, P.; Wirtz, H.-C.: ¬Die Kunden dort bedienen, wo sie sind : DigiAuskunft besteht Bewährungsprobe / Seit Anfang 2006 in Betrieb (2006) 0.00
    0.0027172985 = product of:
      0.019021088 = sum of:
        0.019021088 = product of:
          0.038042177 = sum of:
            0.038042177 = weight(_text_:22 in 5991) [ClassicSimilarity], result of:
              0.038042177 = score(doc=5991,freq=2.0), product of:
                0.14046472 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04011181 = queryNorm
                0.2708308 = fieldWeight in 5991, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5991)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    8. 7.2006 21:06:22
  16. Mayr, P.; Mutschke, P.; Petras, V.; Schaer, P.; Sure, Y.: Applying science models for search (2010) 0.00
    0.0024849928 = product of:
      0.017394949 = sum of:
        0.017394949 = product of:
          0.034789898 = sum of:
            0.034789898 = weight(_text_:science in 4663) [ClassicSimilarity], result of:
              0.034789898 = score(doc=4663,freq=4.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.3292649 = fieldWeight in 4663, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4663)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    The paper proposes three different kinds of science models as value-added services that are integrated into the retrieval process to enhance retrieval quality. The paper discusses the approaches Search Term Recommendation, Bradfordizing and Author Centrality on a general level and addresses implementation issues of the models within a real-life retrieval environment.
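Bradfordizing re-ranks a result set by journal productivity: journals are sorted by how many hits they contribute, partitioned into Bradford zones of roughly equal document counts, and core-zone documents are promoted. This is a minimal sketch of the idea, not the authors' implementation:

```python
from collections import Counter

def bradfordize(docs):
    # docs: list of (doc_id, journal) pairs from a result set
    freq = Counter(journal for _, journal in docs)
    # Journals in descending order of productivity (hits contributed)
    ranked = [j for j, _ in freq.most_common()]
    # Partition journals into three Bradford zones (core, middle,
    # periphery) holding roughly equal numbers of documents
    third = len(docs) / 3
    zone, acc, zones = 0, 0, {}
    for j in ranked:
        zones[j] = zone
        acc += freq[j]
        if acc >= (zone + 1) * third and zone < 2:
            zone += 1
    # Re-rank: core-zone documents first, productive journals first
    return sorted(docs, key=lambda d: (zones[d[1]], -freq[d[1]]))
```

For example, documents from a journal contributing four hits are ranked ahead of those from journals contributing one hit each.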
  17. Mayr, P.: Bradfordizing als Re-Ranking-Ansatz in Literaturinformationssystemen (2011) 0.00
    0.0023502551 = product of:
      0.016451785 = sum of:
        0.016451785 = product of:
          0.03290357 = sum of:
            0.03290357 = weight(_text_:29 in 4292) [ClassicSimilarity], result of:
              0.03290357 = score(doc=4292,freq=2.0), product of:
                0.14110081 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04011181 = queryNorm
                0.23319192 = fieldWeight in 4292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4292)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    9. 2.2011 17:47:29
  18. Reichert, S.; Mayr, P.: Untersuchung von Relevanzeigenschaften in einem kontrollierten Eyetracking-Experiment (2012) 0.00
    0.0023291127 = product of:
      0.016303789 = sum of:
        0.016303789 = product of:
          0.032607578 = sum of:
            0.032607578 = weight(_text_:22 in 328) [ClassicSimilarity], result of:
              0.032607578 = score(doc=328,freq=2.0), product of:
                0.14046472 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04011181 = queryNorm
                0.23214069 = fieldWeight in 328, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=328)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 7.2012 19:25:54
  19. Mutschke, P.; Mayr, P.: Science models for search : a study on combining scholarly information retrieval and scientometrics (2015) 0.00
    0.002196444 = product of:
      0.0153751075 = sum of:
        0.0153751075 = product of:
          0.030750215 = sum of:
            0.030750215 = weight(_text_:science in 1695) [ClassicSimilarity], result of:
              0.030750215 = score(doc=1695,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.2910318 = fieldWeight in 1695, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1695)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
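The score breakdowns shown with each entry follow Lucene's classic TF-IDF explain format, where tf = sqrt(freq), fieldWeight = tf x idf x fieldNorm, queryWeight = idf x queryNorm, and the term score is queryWeight x fieldWeight. A minimal sketch reproducing the numbers from entry 19's tree:

```python
import math

# Values taken from the explain tree of entry 19 above
freq, idf = 2.0, 2.6341193
field_norm, query_norm = 0.078125, 0.04011181

tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
field_weight = tf * idf * field_norm  # 0.2910318 = fieldWeight
query_weight = idf * query_norm       # 0.10565929 = queryWeight
score = query_weight * field_weight   # 0.030750215 = weight(_text_:science)
```

Each factor in the printed trees can be checked the same way against its parents.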
    
  20. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: ¬The OpenCitations Data Model (2020) 0.00
    0.0013178664 = product of:
      0.009225064 = sum of:
        0.009225064 = product of:
          0.018450128 = sum of:
            0.018450128 = weight(_text_:science in 38) [ClassicSimilarity], result of:
              0.018450128 = score(doc=38,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.17461908 = fieldWeight in 38, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=38)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or context application. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
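In OCDM a citation is itself a first-class entity linking the citing and the cited bibliographic resource, building on the SPAR CiTO ontology. A minimal sketch emitting one such citation as N-Triples without external libraries (the example URIs are hypothetical):

```python
CITO = "http://purl.org/spar/cito/"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def citation_triples(citation_uri, citing_uri, cited_uri):
    # An OCDM-style citation is a first-class entity with explicit
    # links to the citing and the cited bibliographic resource.
    return [
        (citation_uri, RDF + "type", CITO + "Citation"),
        (citation_uri, CITO + "hasCitingEntity", citing_uri),
        (citation_uri, CITO + "hasCitedEntity", cited_uri),
    ]

def to_ntriples(triples):
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in triples)
```

Modelling the citation as its own resource, rather than a bare link, is what lets OCDM attach metadata such as citation type or timespan to the citation itself.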