Search (10 results, page 1 of 1)

  • × author_ss:"Mayr, P."
  1. Hobert, A.; Jahn, N.; Mayr, P.; Schmidt, B.; Taubert, N.: Open access uptake in Germany 2010-2018 : adoption in a diverse research landscape (2021) 0.05
    0.05353949 = product of:
      0.10707898 = sum of:
        0.08344315 = weight(_text_:open in 250) [ClassicSimilarity], result of:
          0.08344315 = score(doc=250,freq=8.0), product of:
            0.20964009 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046553567 = queryNorm
            0.39803052 = fieldWeight in 250, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.03125 = fieldNorm(doc=250)
        0.023635827 = product of:
          0.047271654 = sum of:
            0.047271654 = weight(_text_:access in 250) [ClassicSimilarity], result of:
              0.047271654 = score(doc=250,freq=8.0), product of:
                0.15778996 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.046553567 = queryNorm
                0.29958594 = fieldWeight in 250, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.03125 = fieldNorm(doc=250)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
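The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output; the same formulas account for every other score breakdown in this result list. As a minimal sketch, assuming the textbook definitions idf = 1 + ln(maxDocs / (docFreq + 1)), tf = sqrt(freq), queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, the numbers for this first hit can be reproduced as follows (all input values are copied from the tree; the coord factors 1/2 and 2/4 discount query clauses that did not match):

```python
import math

# queryNorm copied from the explain tree for doc 250 (hit no. 1).
QUERY_NORM = 0.046553567

def term_score(freq, doc_freq, max_docs, field_norm, query_norm=QUERY_NORM):
    """Lucene ClassicSimilarity: queryWeight * fieldWeight for one term."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
    tf = math.sqrt(freq)                               # tf(freq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

open_part   = term_score(freq=8.0, doc_freq=1330, max_docs=44218, field_norm=0.03125)
access_part = term_score(freq=8.0, doc_freq=4053, max_docs=44218, field_norm=0.03125)

# coord(1/2) on the nested "access" clause, coord(2/4) on the outer sum.
score = (open_part + access_part * 0.5) * 0.5
print(round(open_part, 8), round(access_part, 8), round(score, 8))
# ~0.08344315, ~0.04727165, ~0.05353949 - matching the explain output above.
```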
    
    Abstract
    This is a bibliometric study of the development of open access availability of scholarly journal articles from Germany that were published in the period 2010-2018 and are indexed in the Web of Science. Particular attention is paid to the question of whether, and to what extent, the open access profiles of universities and non-university research institutions in Germany differ from one another.
    Content
    This study investigates the development of open access (OA) to journal articles from authors affiliated with German universities and non-university research institutions in the period 2010-2018. Beyond determining the overall share of openly available articles, a systematic classification of distinct categories of OA publishing allowed us to identify different patterns of adoption of OA. Taking into account the particularities of the German research landscape, variations in terms of productivity, OA uptake and approaches to OA are examined at the meso-level and possible explanations are discussed. The development of the OA uptake is analysed for the different research sectors in Germany (universities, non-university research institutes of the Helmholtz Association, Fraunhofer Society, Max Planck Society, Leibniz Association, and government research agencies). Combining several data sources (incl. Web of Science, Unpaywall, an authority file of standardised German affiliation information, the ISSN-Gold-OA 3.0 list, and OpenDOAR), the study confirms the growth of the OA share mirroring the international trend reported in related studies. We found that 45% of all considered articles during the observed period were openly available at the time of analysis. Our findings show that subject-specific repositories are the most prevalent type of OA. However, the percentages for publication in fully OA journals and OA via institutional repositories show similarly steep increases. Enabling data-driven decision-making regarding the implementation of OA in Germany at the institutional level, the results of this study furthermore can serve as a baseline to assess the impact recent transformative agreements with major publishers will likely have on scholarly communication.
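The classification into distinct OA categories described above (fully OA journals, institutional and subject-specific repositories, etc.) can be illustrated with a minimal sketch. The record fields below (is_oa, journal_is_oa, host_type) merely imitate Unpaywall-style metadata and are assumptions for illustration; the study's actual pipeline additionally draws on Web of Science, the ISSN-Gold-OA 3.0 list and OpenDOAR.

```python
def classify_oa(record):
    """Rough OA category for one article record (illustrative only)."""
    if not record.get("is_oa"):
        return "closed"
    if record.get("journal_is_oa"):
        return "gold"            # published in a fully OA journal
    if record.get("host_type") == "repository":
        return "green"           # openly available via a repository
    return "hybrid/other"        # OA copy hosted by a subscription publisher

articles = [
    {"is_oa": True,  "journal_is_oa": True,  "host_type": "publisher"},
    {"is_oa": True,  "journal_is_oa": False, "host_type": "repository"},
    {"is_oa": False},
]
share_oa = sum(a.get("is_oa", False) for a in articles) / len(articles)
print([classify_oa(a) for a in articles], f"OA share: {share_oa:.0%}")
```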
  2. Mayr, P.; Walter, A.-K.: Abdeckung und Aktualität des Suchdienstes Google Scholar (2006) 0.02
    0.015645592 = product of:
      0.062582366 = sum of:
        0.062582366 = weight(_text_:open in 5131) [ClassicSimilarity], result of:
          0.062582366 = score(doc=5131,freq=2.0), product of:
            0.20964009 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046553567 = queryNorm
            0.2985229 = fieldWeight in 5131, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046875 = fieldNorm(doc=5131)
      0.25 = coord(1/4)
    
    Abstract
    This article is devoted to Google's new search service Google Scholar. The search engine, which is intended to cover exclusively scholarly documents, is described with its most important functions and then subjected to an empirical test. The study is based on three journal lists: journals from Thomson Scientific, open access journals from the DOAJ directory, and social science journals indexed in the SOLIS database. The coverage of these journals by Google Scholar was checked by querying the journal titles. The study reveals deficits in the coverage and currency of the Google Scholar index. It also shows who the most important data suppliers for the new search service are and which scholarly information sources are represented in the index. The strengths of Google Scholar are its simplicity, its search speed and, not least, the fact that it is free of charge. Despite visible potential (e.g. citation analysis), however, Google Scholar cannot currently replace searching in subject databases, owing to its insufficient subject coverage and lack of transparency.
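A coverage and currency test of the kind described above (checking journal lists against a search index) can be sketched generically. The data below is invented for illustration and no Google Scholar API is assumed; the index dictionary simply stands in for whatever title lookups return.

```python
# Journal lists to test (e.g. Thomson Scientific, DOAJ, SOLIS) vs. the
# titles and newest publication years actually found in the search index.
journal_lists = {
    "DOAJ sample": ["Journal A", "Journal B", "Journal C"],
}
index = {"Journal A": 2006, "Journal C": 2003}   # title -> newest indexed year

for name, titles in journal_lists.items():
    covered = [t for t in titles if t in index]
    coverage = len(covered) / len(titles)
    # "Currency": how recent the newest indexed article per covered journal is.
    newest = {t: index[t] for t in covered}
    print(f"{name}: coverage {coverage:.0%}, newest indexed years {newest}")
```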
  3. Mayr, P.; Zapilko, B.; Sure, Y.: Ein Mehr-Thesauri-Szenario auf Basis von SKOS und Crosskonkordanzen (2010) 0.02
    0.015645592 = product of:
      0.062582366 = sum of:
        0.062582366 = weight(_text_:open in 3392) [ClassicSimilarity], result of:
          0.062582366 = score(doc=3392,freq=2.0), product of:
            0.20964009 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046553567 = queryNorm
            0.2985229 = fieldWeight in 3392, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046875 = fieldNorm(doc=3392)
      0.25 = coord(1/4)
    
    Abstract
    In August 2009, SKOS ("Simple Knowledge Organization System") was published by the W3C as a new standard for web-based controlled vocabularies. SKOS serves as a data model for offering controlled vocabularies on the web and for making them technically and semantically interoperable. In the longer term, the heterogeneous landscape of indexing vocabularies can be unified via SKOS and, above all, the contents of classical databases (the domain of specialised subject information) can be made accessible to Semantic Web applications, for example as Linked Open Data (LOD), and linked more closely with one another. Vocabularies in SKOS format can play an important role here by acting as a standardised bridge vocabulary and establishing semantic links between indexed, published data. The following case study sketches a scenario with three thematically related thesauri that are converted to SKOS format and connected at the content level via crosswalks (Crosskonkordanzen) from the KoMoHe project. The SKOS mapping properties provide standardised relations for this purpose that correspond to those of the crosswalks. The thesauri involved in the case study are a) TheSoz (Thesaurus Sozialwissenschaften, GESIS), b) STW (Standard-Thesaurus Wirtschaft, ZBW) and c) the IBLK-Thesaurus (SWP).
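The SKOS mapping properties mentioned above can be illustrated with a short rdflib sketch. Only the SKOS vocabulary itself is real; the namespaces, concept URIs and labels below are invented placeholders and do not reflect the actual TheSoz or STW identifiers.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

# Hypothetical namespaces standing in for two of the thesauri named above.
THESOZ = Namespace("http://example.org/thesoz/")
STW = Namespace("http://example.org/stw/")

g = Graph()
g.bind("skos", SKOS)

# One concept per vocabulary (labels are illustrative).
g.add((THESOZ.c123, RDF.type, SKOS.Concept))
g.add((THESOZ.c123, SKOS.prefLabel, Literal("Arbeitsmarkt", lang="de")))
g.add((STW.c456, RDF.type, SKOS.Concept))
g.add((STW.c456, SKOS.prefLabel, Literal("Labour market", lang="en")))

# A crosswalk "equivalence" relation expressed with a SKOS mapping property.
g.add((THESOZ.c123, SKOS.exactMatch, STW.c456))

print(g.serialize(format="turtle"))
```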
  4. Mayr, P.; Schaer, P.; Mutschke, P.: A science model driven retrieval prototype (2011) 0.02
    0.015645592 = product of:
      0.062582366 = sum of:
        0.062582366 = weight(_text_:open in 649) [ClassicSimilarity], result of:
          0.062582366 = score(doc=649,freq=2.0), product of:
            0.20964009 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046553567 = queryNorm
            0.2985229 = fieldWeight in 649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046875 = fieldNorm(doc=649)
      0.25 = coord(1/4)
    
    Abstract
    This paper is about a better understanding of the structure and dynamics of science and the use of these insights to compensate for the typical problems that arise in metadata-driven digital libraries. Three science-model-driven retrieval services are presented: query expansion based on co-word analysis, re-ranking via Bradfordizing, and author centrality. The services are evaluated with relevance assessments, from which two important implications emerge: (1) the precision values of the retrieval services are the same as or better than the tf-idf retrieval baseline, and (2) each service retrieved a disjoint set of documents. Each service thus favors quite different, but still relevant, documents than pure term-frequency-based rankings do. The proposed models and derived retrieval services therefore open up new viewpoints on the scientific knowledge space and provide an alternative framework for structuring scholarly information systems.
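Bradfordizing, one of the three services named above, re-ranks a result set so that articles from the most productive ("core") journals come first, instead of relying on term weights alone. A minimal sketch under that reading follows; the toy result list is an assumption, not the prototype's actual data structure.

```python
from collections import Counter

# A toy result list: (doc_id, journal) pairs already retrieved by some query.
results = [("d1", "J. Core"), ("d2", "J. Niche"), ("d3", "J. Core"),
           ("d4", "J. Core"), ("d5", "J. Mid"), ("d6", "J. Mid")]

# Journal productivity within the result set drives the new ranking.
productivity = Counter(journal for _, journal in results)
bradfordized = sorted(results, key=lambda r: productivity[r[1]], reverse=True)
print(bradfordized)   # documents from the most productive journals come first
```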
  5. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: The OpenCitations Data Model (2020) 0.02
    0.015645592 = product of:
      0.062582366 = sum of:
        0.062582366 = weight(_text_:open in 38) [ClassicSimilarity], result of:
          0.062582366 = score(doc=38,freq=2.0), product of:
            0.20964009 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046553567 = queryNorm
            0.2985229 = fieldWeight in 38, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
      0.25 = coord(1/4)
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or application context. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
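As a rough illustration of how a citation can be expressed as a machine-readable statement in the spirit of OCDM, the sketch below states one cito:cites link between two resources. The DOIs are placeholders, and the modelling is deliberately simplified compared to the full OCDM, which also describes the citation itself as an entity.

```python
from rdflib import Graph, Namespace, URIRef

CITO = Namespace("http://purl.org/spar/cito/")   # SPAR citation ontology

g = Graph()
g.bind("cito", CITO)

citing = URIRef("https://doi.org/10.0000/placeholder.citing")
cited  = URIRef("https://doi.org/10.0000/placeholder.cited")

# One citation as a directed link between two bibliographic resources.
g.add((citing, CITO.cites, cited))
print(g.serialize(format="turtle"))
```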
  6. Lauser, B.; Johannsen, G.; Caracciolo, C.; Hage, W.R. van; Keizer, J.; Mayr, P.: Comparing human and automatic thesaurus mapping approaches in the agricultural domain (2008) 0.02
    0.015270403 = product of:
      0.06108161 = sum of:
        0.06108161 = sum of:
          0.029544784 = weight(_text_:access in 2627) [ClassicSimilarity], result of:
            0.029544784 = score(doc=2627,freq=2.0), product of:
              0.15778996 = queryWeight, product of:
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.046553567 = queryNorm
              0.18724121 = fieldWeight in 2627, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2627)
          0.03153683 = weight(_text_:22 in 2627) [ClassicSimilarity], result of:
            0.03153683 = score(doc=2627,freq=2.0), product of:
              0.16302267 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046553567 = queryNorm
              0.19345059 = fieldWeight in 2627, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2627)
      0.25 = coord(1/4)
    
    Abstract
    Knowledge organization systems (KOS), like thesauri and other controlled vocabularies, are used to provide subject access to information systems across the web. Due to the heterogeneity of these systems, mapping between vocabularies becomes crucial for retrieving relevant information. However, mapping thesauri is a laborious task, and thus considerable effort is being put into automating the mapping process. This paper examines two mapping approaches involving the agricultural thesaurus AGROVOC, one machine-created and one human-created. We address the basic question "What are the pros and cons of human and automatic mapping, and how can they complement each other?" By pointing out the difficulties in specific cases or groups of cases and grouping the sample into simple and difficult types of mappings, we show the limitations of current automatic methods and come up with some basic recommendations on which approach to use when. A lexical-matching baseline of the automatic kind compared here is sketched after this record.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
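A machine-created mapping of the kind compared in the study above typically starts from simple label matching between two vocabularies. The sketch below shows such a lexical baseline on invented terms; the real AGROVOC mapping is far more involved, which is exactly where human mappers add value.

```python
def normalise(label):
    return label.strip().lower()

agrovoc = ["Maize", "Soil fertility", "Irrigation"]
other_kos = ["maize", "soil fertility", "drip irrigation"]

# Automatic baseline: exact match on normalised labels.
auto_mapping = {a: b for a in agrovoc for b in other_kos
                if normalise(a) == normalise(b)}
unmapped = [a for a in agrovoc if a not in auto_mapping]
print(auto_mapping)   # easy, "simple" cases
print(unmapped)       # difficult cases left for human mapping
```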
  7. Momeni, F.; Mayr, P.: Analyzing the research output presented at European Networked Knowledge Organization Systems workshops (2000-2015) (2016) 0.01
    0.013037993 = product of:
      0.05215197 = sum of:
        0.05215197 = weight(_text_:open in 3106) [ClassicSimilarity], result of:
          0.05215197 = score(doc=3106,freq=2.0), product of:
            0.20964009 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.046553567 = queryNorm
            0.24876907 = fieldWeight in 3106, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3106)
      0.25 = coord(1/4)
    
    Abstract
    In this paper we analyze a major part of the research output of the Networked Knowledge Organization Systems (NKOS) community in the period 2000 to 2015 from a network analytical perspective. We focus on the paper output presented at the European NKOS workshops in the last 15 years. Our open dataset, the "NKOS bibliography", includes 14 workshop agendas (ECDL 2000-2010, TPDL 2011-2015) and 4 special issues on NKOS (2001, 2004, 2006 and 2015) which cover 171 papers with 218 distinct authors in total. A focus of the analysis is the visualization of co-authorship networks in this interdisciplinary field. We used standard network analytic measures like degree and betweenness centrality to describe the co-authorship distribution in our NKOS dataset. We can see in our dataset that 15% (with degree=0) of authors had no co-authorship with others and 53% of them had a maximum of 3 cooperations with other authors. 32% had at least 4 co-authors for all of their papers. The NKOS co-author network in the "NKOS bibliography" is a typical co-authorship network with one relatively large component, many smaller components and many isolated co-authorships or triples.
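The network measures used above (degree and betweenness centrality, isolated authors with degree=0) can be computed on any co-authorship edge list with networkx. The tiny graph below is invented and only stands in for the NKOS bibliography data.

```python
import networkx as nx
from itertools import combinations

# Invented papers, each given as its author list.
papers = [["A", "B"], ["A", "C"], ["B", "C", "D"], ["E"]]

G = nx.Graph()
for authors in papers:
    G.add_nodes_from(authors)
    G.add_edges_from(combinations(authors, 2))   # co-authorship edges

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
isolated = [n for n, d in G.degree() if d == 0]  # authors without co-authors

print(degree, betweenness, isolated, sep="\n")
```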
  8. Daniel, F.; Maier, C.; Mayr, P.; Wirtz, H.-C.: Die Kunden dort bedienen, wo sie sind : DigiAuskunft besteht Bewährungsprobe / Seit Anfang 2006 in Betrieb (2006) 0.01
    0.005518945 = product of:
      0.02207578 = sum of:
        0.02207578 = product of:
          0.04415156 = sum of:
            0.04415156 = weight(_text_:22 in 5991) [ClassicSimilarity], result of:
              0.04415156 = score(doc=5991,freq=2.0), product of:
                0.16302267 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046553567 = queryNorm
                0.2708308 = fieldWeight in 5991, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5991)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    8. 7.2006 21:06:22
  9. Mayr, P.; Petras, V.: Building a Terminology Network for Search : the KoMoHe project (2008) 0.01
    0.005518945 = product of:
      0.02207578 = sum of:
        0.02207578 = product of:
          0.04415156 = sum of:
            0.04415156 = weight(_text_:22 in 2618) [ClassicSimilarity], result of:
              0.04415156 = score(doc=2618,freq=2.0), product of:
                0.16302267 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046553567 = queryNorm
                0.2708308 = fieldWeight in 2618, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2618)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  10. Reichert, S.; Mayr, P.: Untersuchung von Relevanzeigenschaften in einem kontrollierten Eyetracking-Experiment (2012) 0.00
    0.0047305245 = product of:
      0.018922098 = sum of:
        0.018922098 = product of:
          0.037844196 = sum of:
            0.037844196 = weight(_text_:22 in 328) [ClassicSimilarity], result of:
              0.037844196 = score(doc=328,freq=2.0), product of:
                0.16302267 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046553567 = queryNorm
                0.23214069 = fieldWeight in 328, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=328)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2012 19:25:54