Search (7 results, page 1 of 1)

  • × author_ss:"Stock, W.G."
  • × type_ss:"a"
  • × year_i:[2000 TO 2010}
  1. Garfield, E.; Stock, W.G.: Citation Consciousness : Interview with Eugene Garfield, chairman emeritus of ISI; Philadelphia (2002) 0.02
    0.01767555 = product of:
      0.0353511 = sum of:
        0.0353511 = product of:
          0.0707022 = sum of:
            0.0707022 = weight(_text_:22 in 613) [ClassicSimilarity], result of:
              0.0707022 = score(doc=613,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.38690117 = fieldWeight in 613, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=613)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Password. 2002, H.6, S.22-25
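The score explanations above are Lucene "explain" trees for ClassicSimilarity (TF-IDF). As a minimal sketch, the arithmetic for result 1 can be reproduced directly from the values printed in the tree; the helper function below is illustrative and is not part of Lucene's API:

```python
import math

def classic_similarity_term_score(freq, idf, query_norm, field_norm, coord=1.0):
    """Recompute one term's contribution as in Lucene's ClassicSimilarity:
    score = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm) * coord."""
    query_weight = idf * query_norm        # queryWeight = idf * queryNorm
    tf = math.sqrt(freq)                   # tf(freq) = sqrt(termFreq)
    field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight * coord

# Values copied from the explanation of result 1 (weight(_text_:22 in 613)):
score = classic_similarity_term_score(
    freq=2.0,
    idf=3.5018296,           # idf(docFreq=3622, maxDocs=44218)
    query_norm=0.052184064,  # queryNorm
    field_norm=0.078125,     # fieldNorm(doc=613)
    coord=0.5 * 0.5,         # the two nested coord(1/2) factors
)
print(round(score, 8))  # close to the reported 0.01767555
```

Multiplying the intermediate products back together reproduces each line of the tree: queryWeight = 0.1827397, fieldWeight = 0.38690117, and their product halved twice gives the document score.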
  2. Stock, W.G.: On relevance distributions (2006) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 5116) [ClassicSimilarity], result of:
              0.043561947 = score(doc=5116,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 5116, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5116)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    There are at least three possible ways that documents can be distributed by relevance: informetric (power law), inverse logistic, and dichotomous. The type of distribution has implications for the construction of relevance ranking algorithms for search engines, for automated (blind) relevance feedback, for user behavior when using Web search engines, for combining the outputs of search engines for metasearch, for topic detection and tracking, and for the methodology of evaluating information retrieval systems.
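The three distribution types named in the abstract can be sketched as simple rank functions. The parameterizations below are assumptions for illustration only, not Stock's estimated forms:

```python
import math

# r(x): share of relevant documents at rank x (all parameters are invented).

def power_law(x, c=1.0, a=1.0):
    """Informetric (power-law) distribution: relevance falls as c / x^a."""
    return c / (x ** a)

def inverse_logistic(x, k=0.5, x0=10.0):
    """A decreasing logistic curve: near 1 for top ranks, dropping around rank x0."""
    return 1.0 / (1.0 + math.exp(k * (x - x0)))

def dichotomous(x, cutoff=10):
    """Dichotomous: a document is simply relevant (1) or not (0)."""
    return 1.0 if x <= cutoff else 0.0

# All three curves are non-increasing in rank, but decay very differently:
# the power law drops sharply at once, the inverse logistic stays flat
# before falling, and the dichotomous curve is a step function.
curve = [power_law(x) for x in range(1, 21)]
```

A ranking algorithm tuned for a power-law corpus (most relevance mass in the first few ranks) behaves quite differently from one facing a dichotomous corpus, which is the practical point of the abstract.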
  3. Schmidt, S.; Stock, W.G.: Collective indexing of emotions in images : a study in emotional information retrieval (2009) 0.01
    0.009625921 = product of:
      0.019251842 = sum of:
        0.019251842 = product of:
          0.038503684 = sum of:
            0.038503684 = weight(_text_:systems in 2792) [ClassicSimilarity], result of:
              0.038503684 = score(doc=2792,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.24009174 = fieldWeight in 2792, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2792)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Some documents provoke emotions in people viewing them. Will it be possible to describe emotions consistently and use this information in retrieval systems? We tested collective (statistically aggregated) emotion indexing using images as examples. According to results from psychology, the basic emotions are anger, disgust, fear, happiness, and sadness. This study follows an approach developed by Lee and Neal (2007) for music emotion retrieval and applies scroll bars for tagging basic emotions and their intensities. A sample of 763 persons tagged the emotions evoked by images (retrieved from www.Flickr.com) using scroll bars and (linguistic) tags. Using SPSS, we performed descriptive statistics and correlation analysis. For more than half of the images, the test persons had clear emotion favorites. There are prototypical images for given emotions. The document-specific consistency of tagging using a scroll bar is, for some images, very high. Most of the (most commonly used) linguistic tags are on the basic level (in the sense of Rosch's basic level theory). The distributions of the linguistic tags in our examples follow an inverse power law. Hence, it seems possible to apply collective image emotion tagging to image information systems and to present a new search option for basic emotions. This article is one of the first steps in the research area of emotional information retrieval (EmIR).
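The "collective (statistically aggregated) emotion indexing" described above can be sketched as a simple aggregation over scroll-bar ratings. The ratings below are invented for illustration; only the five basic emotions are taken from the abstract:

```python
from statistics import mean, stdev

# Hypothetical scroll-bar intensities (0-100) given by five taggers for one
# image; the emotion names follow the study, the numbers are invented.
ratings = {
    "anger":     [5, 0, 10, 0, 5],
    "disgust":   [0, 5, 0, 0, 10],
    "fear":      [10, 15, 5, 10, 0],
    "happiness": [90, 85, 95, 80, 88],
    "sadness":   [0, 5, 0, 10, 5],
}

# Collective indexing: statistically aggregate per emotion (here: the mean).
collective = {emotion: mean(values) for emotion, values in ratings.items()}

# The image's "emotion favorite" is the emotion with the highest mean intensity.
favorite = max(collective, key=collective.get)
print(favorite)  # happiness

# A low standard deviation per emotion indicates high document-specific
# tagging consistency, one of the quantities the study examines.
consistency = {emotion: stdev(values) for emotion, values in ratings.items()}
```

An image information system could then index this image under its favorite emotion and offer "search by basic emotion" as a retrieval option, which is the application the abstract proposes.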
  4. Stock, W.G.; Weber, S.: Facets of informetrics : Preface (2006) 0.01
    0.0094314385 = product of:
      0.018862877 = sum of:
        0.018862877 = product of:
          0.037725754 = sum of:
            0.037725754 = weight(_text_:systems in 76) [ClassicSimilarity], result of:
              0.037725754 = score(doc=76,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2352409 = fieldWeight in 76, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=76)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    According to Jean M. Tague-Sutcliffe, "informetrics" is "the study of the quantitative aspects of information in any form, not just records or bibliographies, and in any social group, not just scientists" (Tague-Sutcliffe, 1992, 1). Leo Egghe also defines "informetrics" in a very broad sense: "(W)e will use the term 'informetrics' as the broad term comprising all '-metrics' studies related to information science, including bibliometrics (bibliographies, libraries, ...), scientometrics (science policy, citation analysis, research evaluation, ...), webometrics (metrics of the web, the Internet or other social networks such as citation or collaboration networks), ..." (Egghe, 2005b, 1311). According to Concepcion S. Wilson, "informetrics" is "the quantitative study of collections of moderate-sized units of potentially informative text, directed to the scientific understanding of information processes at the social level" (Wilson, 1999, 211). To Wilson's units of text we should add digital collections of images, videos, spoken documents and music. Dietmar Wolfram divides "informetrics" into two aspects: "system-based characteristics that arise from the documentary content of IR systems and how they are indexed, and usage-based characteristics that arise from how users interact with system content and the system interfaces that provide access to the content" (Wolfram, 2003, 6). We would like to follow Tague-Sutcliffe, Egghe, Wilson and Wolfram (and others, for example Björneborn & Ingwersen, 2004) and call this broad empirical research in information science "informetrics". Informetrics therefore includes all quantitative studies in information science. If a scientist performs scientific investigations empirically, e.g. on information users' behavior, on the scientific impact of academic journals, on the development of a company's patent application activity, on links between Web pages, on the temporal distribution of blog postings discussing a given topic, on the availability, recall and precision of retrieval systems, on the usability of Web sites, and so on, he or she contributes to informetrics. We see three subject areas in information science in which such quantitative research takes place: information users and information usage, evaluation of information systems, and information itself. Following Wolfram's article, we divide his system-based characteristics into the "information itself" category and the "information system" category. Figure 1 is a simplified graph of the subjects and research areas of informetrics as an empirical information science.
  5. Stock, W.G.: Qualitätskriterien von Suchmaschinen : Checkliste für Retrievalsysteme (2000) 0.01
    0.008837775 = product of:
      0.01767555 = sum of:
        0.01767555 = product of:
          0.0353511 = sum of:
            0.0353511 = weight(_text_:22 in 5773) [ClassicSimilarity], result of:
              0.0353511 = score(doc=5773,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.19345059 = fieldWeight in 5773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5773)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Password. 2000, H.5, S.22-31
  6. Stock, W.G.: Hochschulmanagement, Information Appliances, Fairness als Grundsatz : Information und Mobilität (2002) 0.01
    0.008837775 = product of:
      0.01767555 = sum of:
        0.01767555 = product of:
          0.0353511 = sum of:
            0.0353511 = weight(_text_:22 in 1364) [ClassicSimilarity], result of:
              0.0353511 = score(doc=1364,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.19345059 = fieldWeight in 1364, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1364)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22.2.2003 19:39:36
  7. Stock, W.G.: Forschung im internationalen Vergleich - Wissenschaftsindikatoren auf Zitationsbasis : ISI Essential Science Indicators (2002) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 474) [ClassicSimilarity], result of:
              0.027226217 = score(doc=474,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 474, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=474)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Evaluating scientific research results straight out of an electronic database? Rankings of the most important institutions, scientists, journals, and even countries within disciplines, by influence? Flagging of "hot", highly topical articles? Listings of the highly cited research fronts in the individual scientific disciplines? And all of this at the push of a button, rather than through cumbersome scientometric procedures? Is that even possible? It is. With the "Essential Science Indicators" (ESI), ISI presents a web-based information system for science evaluation that delivers unique results and is indeed exceptionally easy to use. But compared with sophisticated methods of empirical science studies, not everything is possible. Where are the system's limits? We sketch how the ESI works, its data basis, the informetric algorithms it employs (and their methodological problems), the search interface, and the presentation of results. Aspects of German research serve as our example: In which discipline do Germany's researchers have the greatest international influence? Which German neuroscience institute can compete at the global level? Or: which scientist working in Germany tops a discipline-specific ranking? Finally: who needs the "Essential Science Indicators"? We tested the Essential Science Indicators in mid-February 2002 using the version of 1 January 2002, which covers the ten-year interval 1991 to 2000 as well as the first ten months of 2001.