Search (12 results, page 1 of 1)

  • author_ss:"Mayr, P."
  • year_i:[2010 TO 2020}
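  Both filters above use Solr/Lucene query syntax; year_i:[2010 TO 2020} is a half-open range (2010 inclusive, 2020 exclusive), and the indented score breakdowns below each hit are Lucene "explain" output as emitted in debug mode. A minimal sketch of how such a filtered query might be issued; the endpoint URL and core name are assumptions, not the actual service:

    import requests

    params = {
        "q": "*:*",
        # The two facet filters shown above; [2010 TO 2020} reads as
        # "2010 inclusive to 2020 exclusive" in Solr range syntax.
        "fq": ['author_ss:"Mayr, P."', "year_i:[2010 TO 2020}"],
        "debugQuery": "on",   # produces the per-hit score explanations
        "fl": "id,score",
        "wt": "json",
        "rows": 12,
    }
    resp = requests.get("http://localhost:8983/solr/catalog/select", params=params)
    for doc in resp.json()["response"]["docs"]:
        print(doc.get("id"), doc.get("score"))
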
  1. Reichert, S.; Mayr, P.: Untersuchung von Relevanzeigenschaften in einem kontrollierten Eyetracking-Experiment (2012) 0.02
    0.017528716 = product of:
      0.035057433 = sum of:
        0.014565565 = weight(_text_:information in 328) [ClassicSimilarity], result of:
          0.014565565 = score(doc=328,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 328, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=328)
        0.02049187 = product of:
          0.04098374 = sum of:
            0.04098374 = weight(_text_:22 in 328) [ClassicSimilarity], result of:
              0.04098374 = score(doc=328,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23214069 = fieldWeight in 328, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=328)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article describes an eyetracking experiment that investigated when, and on the basis of which information, relevance decisions are made during topical document assessment, and which factors influence those decisions. After a brief introduction, relevant studies are surveyed in which eyetracking was used as a method for examining interaction with result lists (information seeking behavior). User behavior is influenced above all by different task types, by the information displayed, and by the rank of a result. Eyetracking studies also allow users to be grouped into distinct classes of assessment and reading types. This information can be exploited as implicit feedback to personalize search and to improve the relevance of search results without any active effort by the user. In an exploratory eyetracking experiment with 12 students of the Hochschule Darmstadt, two typical assessment types are identified on the basis of total assessment duration, number of fixations, number of metadata elements visited, and scan-path length. The Abstract metadata field is reliably identified in the experiment as the most important document property for assigning relevance.
    Date
    22.7.2012 19:25:54
    Source
    Information - Wissenschaft und Praxis. 63(2012) H.3, S.145-156
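  The indented breakdown under result 1 is Lucene ClassicSimilarity (tf-idf) explain output. A short recomputation of that tree, assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)); the constants are copied from the explanation above:

    import math

    query_norm = 0.050415643
    field_norm = 0.046875   # fieldNorm(doc=328)

    def idf(doc_freq, max_docs=44218):
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq):
        i = idf(doc_freq)
        query_weight = i * query_norm                      # idf * queryNorm
        field_weight = math.sqrt(freq) * i * field_norm    # tf * idf * fieldNorm
        return query_weight * field_weight

    w_information = term_score(freq=4.0, doc_freq=20772)   # ~0.0145656
    w_22 = term_score(freq=2.0, doc_freq=3622) * 0.5       # coord(1/2) on the inner clause
    print((w_information + w_22) * 0.5)                    # coord(2/4) -> ~0.017528716
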
  2. Mayr, P.; Schaer, P.; Mutschke, P.: ¬A science model driven retrieval prototype (2011) 0.02
    0.015770756 = product of:
      0.03154151 = sum of:
        0.01029941 = weight(_text_:information in 649) [ClassicSimilarity], result of:
          0.01029941 = score(doc=649,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=649)
        0.021242103 = product of:
          0.042484205 = sum of:
            0.042484205 = weight(_text_:organization in 649) [ClassicSimilarity], result of:
              0.042484205 = score(doc=649,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23635197 = fieldWeight in 649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.046875 = fieldNorm(doc=649)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper is about a better understanding of the structure and dynamics of science and the use of these insights to compensate for the typical problems that arise in metadata-driven digital libraries. Three science-model-driven retrieval services are presented: co-word-analysis-based query expansion, re-ranking via Bradfordizing, and author centrality. The services are evaluated with relevance assessments, from which two important implications emerge: (1) the precision values of the retrieval services are the same as or better than the tf-idf retrieval baseline, and (2) each service retrieves a disjoint set of documents. Each service favors quite different - but still relevant - documents than pure term-frequency-based rankings. The proposed models and the derived retrieval services therefore open up new viewpoints on the scientific knowledge space and provide an alternative framework for structuring scholarly information systems.
    Source
    Concepts in context: Proceedings of the Cologne Conference on Interoperability and Semantics in Knowledge Organization July 19th - 20th, 2010. Eds.: F. Boteram, W. Gödert u. J. Hubrich
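  Result 2's abstract names Bradfordizing as one of the three re-ranking services. A minimal sketch of the idea under its usual definition (re-rank a hit list so that documents from the most productive "core" journals come first); the field names are hypothetical, not the prototype's actual data model:

    from collections import Counter

    def bradfordize(docs):
        # Journal productivity = number of hits per journal in this result set.
        productivity = Counter(d["journal"] for d in docs)
        # Most productive journals first; original rank breaks ties (sort is stable).
        return sorted(docs, key=lambda d: -productivity[d["journal"]])

    hits = [
        {"title": "A", "journal": "Scientometrics"},
        {"title": "B", "journal": "JASIST"},
        {"title": "C", "journal": "Scientometrics"},
    ]
    print([d["title"] for d in bradfordize(hits)])   # ['A', 'C', 'B']
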
  3. Momeni, F.; Mayr, P.: Analyzing the research output presented at European Networked Knowledge Organization Systems workshops (2000-2015) (2016) 0.01
    0.0076650837 = product of:
      0.030660335 = sum of:
        0.030660335 = product of:
          0.06132067 = sum of:
            0.06132067 = weight(_text_:organization in 3106) [ClassicSimilarity], result of:
              0.06132067 = score(doc=3106,freq=6.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.34114468 = fieldWeight in 3106, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3106)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In this paper we analyze a major part of the research output of the Networked Knowledge Organization Systems (NKOS) community in the period 2000 to 2015 from a network-analytical perspective. We focus on the paper output presented at the European NKOS workshops in the last 15 years. Our open dataset, the "NKOS bibliography", includes 14 workshop agendas (ECDL 2000-2010, TPDL 2011-2015) and 4 special issues on NKOS (2001, 2004, 2006 and 2015), which cover 171 papers with 218 distinct authors in total. A focus of the analysis is the visualization of co-authorship networks in this interdisciplinary field. We used standard network-analytic measures like degree and betweenness centrality to describe the co-authorship distribution in our NKOS dataset. In our dataset, 15% of authors (those with degree=0) had no co-authorship with others, 53% had a maximum of 3 co-author relations, and 32% had at least 4 co-authors across all of their papers. The NKOS co-author network in the "NKOS bibliography" is a typical co-authorship network with one relatively large component, many smaller components, and many isolated co-authorships or triples.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
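  The network measures named in result 3 (degree and betweenness centrality) can be reproduced on any co-authorship graph; a toy sketch with networkx, using invented author pairs rather than the actual "NKOS bibliography" data:

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([   # one edge per co-authoring pair (toy data)
        ("Mayr", "Mutschke"), ("Mayr", "Schaer"),
        ("Mutschke", "Schaer"), ("Momeni", "Mayr"),
    ])
    degree = dict(G.degree())                    # raw number of co-authors
    betweenness = nx.betweenness_centrality(G)   # brokerage between subgroups
    print(degree)
    print(betweenness)
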
  4. Mayr, P.; Scharnhorst, A.: Scientometrics and information retrieval - weak-links revitalized (2015) 0.01
    0.0072827823 = product of:
      0.02913113 = sum of:
        0.02913113 = weight(_text_:information in 1688) [ClassicSimilarity], result of:
          0.02913113 = score(doc=1688,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.3291521 = fieldWeight in 1688, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1688)
      0.25 = coord(1/4)
    
    Footnote
    Editorial for a special issue "Combining bibliometrics and information retrieval"
  5. Mutschke, P.; Mayr, P.: Science models for search : a study on combining scholarly information retrieval and scientometrics (2015) 0.01
    0.006068985 = product of:
      0.02427594 = sum of:
        0.02427594 = weight(_text_:information in 1695) [ClassicSimilarity], result of:
          0.02427594 = score(doc=1695,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.27429342 = fieldWeight in 1695, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1695)
      0.25 = coord(1/4)
    
    Footnote
    Contribution to a special issue "Combining bibliometrics and information retrieval"
  6. Carevic, Z.; Krichel, T.; Mayr, P.: Assessing a human mediated current awareness service (2015) 0.01
    0.006068985 = product of:
      0.02427594 = sum of:
        0.02427594 = weight(_text_:information in 2992) [ClassicSimilarity], result of:
          0.02427594 = score(doc=2992,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.27429342 = fieldWeight in 2992, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=2992)
      0.25 = coord(1/4)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl u. C. Wolff
  7. Mayr, P.; Zapilko, B.; Sure, Y.: ¬Ein Mehr-Thesauri-Szenario auf Basis von SKOS und Crosskonkordanzen (2010) 0.01
    0.0053105257 = product of:
      0.021242103 = sum of:
        0.021242103 = product of:
          0.042484205 = sum of:
            0.042484205 = weight(_text_:organization in 3392) [ClassicSimilarity], result of:
              0.042484205 = score(doc=3392,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23635197 = fieldWeight in 3392, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3392)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In August 2009, SKOS ("Simple Knowledge Organization System") was published by the W3C as a new standard for web-based controlled vocabularies. SKOS serves as a data model for offering controlled vocabularies on the web and for making them technically and semantically interoperable. In the long term, the heterogeneous landscape of indexing vocabularies can be unified via SKOS, and above all the contents of classical databases (the specialist-information domain) can be made accessible to Semantic Web applications, for example as Linked Open Data (LOD), and more strongly interlinked with one another. Vocabularies in SKOS format can take on a relevant function here by serving as a standardized bridge vocabulary and establishing semantic links between indexed, published data. The following case study sketches a scenario with three thematically related thesauri that are converted into SKOS format and connected via cross-concordances from the KoMoHe project. The SKOS mapping properties provide standardized relations for this purpose that correspond to those of the cross-concordances. The thesauri involved in the case study are a) TheSoz (Thesaurus Sozialwissenschaften, GESIS), b) STW (Standard-Thesaurus Wirtschaft, ZBW) and c) IBLK-Thesaurus (SWP).
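  The mapping scenario in result 7 rests on the SKOS mapping properties (skos:exactMatch, skos:closeMatch, skos:broadMatch, skos:narrowMatch). A hedged sketch with rdflib of how one KoMoHe-style equivalence between two thesaurus concepts could be expressed; the concept URIs are invented placeholders, not real TheSoz/STW identifiers:

    from rdflib import Graph, URIRef
    from rdflib.namespace import SKOS

    g = Graph()
    g.bind("skos", SKOS)

    thesoz_concept = URIRef("http://example.org/thesoz/arbeitsmarkt")
    stw_concept = URIRef("http://example.org/stw/labour-market")

    # An equivalence cross-concordance rendered as skos:exactMatch; broader or
    # narrower concordances would map to skos:broadMatch / skos:narrowMatch.
    g.add((thesoz_concept, SKOS.exactMatch, stw_concept))
    print(g.serialize(format="turtle"))
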
  8. Mayr, P.: Information Retrieval-Mehrwertdienste für Digitale Bibliotheken : Crosskonkordanzen und Bradfordizing (2010) 0.00
    0.0044597755 = product of:
      0.017839102 = sum of:
        0.017839102 = weight(_text_:information in 4910) [ClassicSimilarity], result of:
          0.017839102 = score(doc=4910,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.20156369 = fieldWeight in 4910, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4910)
      0.25 = coord(1/4)
    
    RSWK
    Dokumentationssprache / Heterogenität / Information Retrieval / Ranking / Evaluation
  9. Mayr, P.; Mutschke, P.; Petras, V.; Schaer, P.; Sure, Y.: Applying science models for search (2010) 0.00
    0.0034331365 = product of:
      0.013732546 = sum of:
        0.013732546 = weight(_text_:information in 4663) [ClassicSimilarity], result of:
          0.013732546 = score(doc=4663,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1551638 = fieldWeight in 4663, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4663)
      0.25 = coord(1/4)
    
    Source
    Information und Wissen: global, sozial und frei? Proceedings des 12. Internationalen Symposiums für Informationswissenschaft (ISI 2011) ; Hildesheim, 9. - 11. März 2011. Hrsg.: J. Griesbaum, T. Mandl u. C. Womser-Hacker
  10. Mayr, P.; Mutschke, P.; Schaer, P.; Sure, Y.: Mehrwertdienste für das Information Retrieval (2013) 0.00
    0.0030039945 = product of:
      0.012015978 = sum of:
        0.012015978 = weight(_text_:information in 935) [ClassicSimilarity], result of:
          0.012015978 = score(doc=935,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.13576832 = fieldWeight in 935, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=935)
      0.25 = coord(1/4)
    
  11. Mayr, P.: Bradfordizing als Re-Ranking-Ansatz in Literaturinformationssystemen (2011) 0.00
    0.0025748524 = product of:
      0.01029941 = sum of:
        0.01029941 = weight(_text_:information in 4292) [ClassicSimilarity], result of:
          0.01029941 = score(doc=4292,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 4292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4292)
      0.25 = coord(1/4)
    
    Source
    Information - Wissenschaft und Praxis. 62(2011) H.1, S.3-10
  12. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on million short (2016) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 3144) [ClassicSimilarity], result of:
          0.008582841 = score(doc=3144,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 3144, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3144)
      0.25 = coord(1/4)
    
    Abstract
    Users of web search engines are known to focus mostly on the top-ranked results of the search engine result page. While many studies support this well-known information-seeking pattern, only few studies concentrate on the question of what users are missing by neglecting lower-ranked results. To learn more about the relevance distributions in the so-called long tail, we conducted a relevance assessment study with the Million Short long-tail web search engine. While we see a clear difference in content between the head and the tail of the search engine result list, we find no statistically significant differences in the binary relevance judgments and only weakly significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for the diversification of web search engine result lists, but more evaluation is needed to clearly describe the differences.
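  Result 12 reports no statistically significant difference for binary relevance and only weakly significant differences for graded relevance. A sketch of how such a head-versus-tail comparison might be run; the judgment data below are toy values, not the study's, and the choice of tests is an assumption:

    from scipy.stats import chi2_contingency, mannwhitneyu

    # Binary judgments: relevant / not-relevant counts for head vs. tail.
    table = [[34, 16],   # head of the result list
             [29, 21]]   # long tail
    _, p_binary, _, _ = chi2_contingency(table)

    # Graded judgments (0-3) per assessed result.
    head = [3, 2, 3, 1, 2, 3, 2, 2]
    tail = [2, 1, 2, 2, 1, 3, 1, 2]
    _, p_graded = mannwhitneyu(head, tail, alternative="two-sided")
    print(p_binary, p_graded)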