Search (6 results, page 1 of 1)

  • author_ss:"White, H.D."
  1. White, H.D.; Bates, M.J.; Wilson, P.: For information specialists : interpretations of reference and bibliographic work (1992) 0.04
    0.037008584 = product of:
      0.11102575 = sum of:
        0.11102575 = weight(_text_:reference in 7742) [ClassicSimilarity], result of:
          0.11102575 = score(doc=7742,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.5393946 = fieldWeight in 7742, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.09375 = fieldNorm(doc=7742)
      0.33333334 = coord(1/3)
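
    The indented tree above is Lucene's ClassicSimilarity explanation of how result 1's score is built up. A minimal Python sketch, using only the values displayed, that reproduces the arithmetic:

    import math

    freq = 2.0                # occurrences of "reference" in the matched field
    idf = 4.0683694           # 1 + log(maxDocs / (docFreq + 1)), as displayed
    query_norm = 0.050593734  # normalization constant applied to the query
    field_norm = 0.09375      # length norm stored for this field

    tf = math.sqrt(freq)                  # 1.4142135
    field_weight = tf * idf * field_norm  # 0.5393946
    query_weight = idf * query_norm       # 0.205834
    score = query_weight * field_weight   # 0.11102575
    print(score * (1 / 3))                # coord(1/3): ~0.037008584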
    
  2. Buzydlowski, J.W.; White, H.D.; Lin, X.: Term co-occurrence analysis as an interface for digital libraries (2002) 0.02
    0.02374556 = product of:
      0.07123668 = sum of:
        0.07123668 = product of:
          0.14247335 = sum of:
            0.14247335 = weight(_text_:22 in 1339) [ClassicSimilarity], result of:
              0.14247335 = score(doc=1339,freq=6.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.804159 = fieldWeight in 1339, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1339)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:16:22
  3. White, H.D.: Author cocitation analysis and pearson's r (2003) 0.02
    0.015420245 = product of:
      0.046260733 = sum of:
        0.046260733 = weight(_text_:reference in 2119) [ClassicSimilarity], result of:
          0.046260733 = score(doc=2119,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.22474778 = fieldWeight in 2119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2119)
      0.33333334 = coord(1/3)
    
    Abstract
    In their article "Requirements for a cocitation similarity measure, with special reference to Pearson's correlation coefficient," Ahlgren, Jarneving, and Rousseau fault traditional author cocitation analysis (ACA) for using Pearson's r as a measure of similarity between authors because it fails two tests of stability of measurement. The instabilities arise when rs are recalculated after a first coherent group of authors has been augmented by a second coherent group with whom the first has little or no cocitation. However, AJ&R neither cluster nor map their data to demonstrate how fluctuations in rs will mislead the analyst, and the problem they pose is remote from both theory and practice in traditional ACA. By entering their own rs into multidimensional scaling and clustering routines, I show that, despite r's fluctuations, clusters based on it are much the same for the combined groups as for the separate groups. The combined groups when mapped appear as polarized clumps of points in two-dimensional space, confirming that differences between the groups have become much more important than differences within the groups, an accurate portrayal of what has happened to the data. Moreover, r produces clusters and maps very like those based on other coefficients that AJ&R mention as possible replacements, such as a cosine similarity measure or a chi-square dissimilarity measure. Thus, r performs well enough for the purposes of ACA. Accordingly, I argue that qualitative information revealing why authors are cocited is more important than the cautions proposed in the AJ&R critique. I include notes on topics such as handling the diagonal in author cocitation matrices, lognormalizing data, and testing r for significance.
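
    As a rough illustration of the two measures this abstract weighs, the sketch below computes Pearson's r and the cosine for a single invented pair of cocitation profiles (hypothetical counts, not AJ&R's data):

    import math

    def pearson_r(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    def cosine(x, y):
        dot = sum(a * b for a, b in zip(x, y))
        nx = math.sqrt(sum(a * a for a in x))
        ny = math.sqrt(sum(b * b for b in y))
        return dot / (nx * ny)

    # Cocitation counts of two authors with five third authors (invented).
    a = [12, 8, 15, 3, 0]
    b = [10, 9, 14, 2, 1]
    print(pearson_r(a, b), cosine(a, b))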
  4. White, H.D.: Combining bibliometrics, information retrieval, and relevance theory : part 2: some implications for information science (2007) 0.02
    0.015420245 = product of:
      0.046260733 = sum of:
        0.046260733 = weight(_text_:reference in 437) [ClassicSimilarity], result of:
          0.046260733 = score(doc=437,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.22474778 = fieldWeight in 437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=437)
      0.33333334 = coord(1/3)
    
    Abstract
    When bibliometric data are converted to term frequency (tf) and inverse document frequency (idf) values, plotted as pennant diagrams, and interpreted according to Sperber and Wilson's relevance theory (RT), the results evoke major variables of information science (IS). These include topicality, in the sense of intercohesion and intercoherence among texts; cognitive effects of texts in response to people's questions; people's levels of expertise as a precondition for cognitive effects; processing effort as textual or other messages are received; specificity of terms as it affects processing effort; relevance, defined in RT as the effects/effort ratio; and authority of texts and their authors. While such concerns figure automatically in dialogues between people, they become problematic when people create or use or judge literature-based information systems. The difficulty of achieving worthwhile cognitive effects and acceptable processing effort in human-system dialogues explains why relevance is the central concern of IS. Moreover, since relevant communication with both systems and unfamiliar people is uncertain, speakers tend to seek cognitive effects that cost them the least effort. Yet hearers need greater effort, often greater specificity, from speakers if their responses are to be highly relevant in their turn. This theme of mismatch manifests itself in vague reference questions, underdeveloped online searches, uncreative judging in retrieval evaluation trials, and perfunctory indexing. Another effect of least effort is a bias toward topical relevance over other kinds. RT can explain these outcomes as well as more adaptive ones. Pennant diagrams, applied here to a literature search and a Bradford-style journal analysis, can model them. Given RT and the right context, bibliometrics may predict psychometrics.
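
    A minimal sketch of the tf/idf conversion this abstract starts from, with invented citation counts; only the collection size is borrowed from the query statistics displayed above:

    import math

    N = 44218  # collection size (taken from maxDocs above; an assumption here)
    # work: (cocitations with a seed work, total citations in the collection)
    works = {
        "Sperber & Wilson 1986": (25, 310),
        "Bradford 1934": (9, 120),
        "Obscure report 1999": (2, 4),
    }
    for work, (tf, df) in works.items():
        x = math.log(tf)      # tf axis of a pennant-style plot
        y = math.log(N / df)  # idf axis: rarely cited works plot higher
        print(f"{work:24s} x={x:.2f}  y={y:.2f}")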
  5. White, H.D.: Relevance in theory (2009) 0.01
    0.012336196 = product of:
      0.037008587 = sum of:
        0.037008587 = weight(_text_:reference in 3872) [ClassicSimilarity], result of:
          0.037008587 = score(doc=3872,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.17979822 = fieldWeight in 3872, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=3872)
      0.33333334 = coord(1/3)
    
    Abstract
    Relevance is the central concept in information science because of its salience in designing and evaluating literature-based answering systems. It is salient when users seek information through human intermediaries, such as reference librarians, but becomes even more so when systems are automated and users must navigate them on their own. Designers of classic precomputer systems of the nineteenth and twentieth centuries appear to have been no less concerned with relevance than the information scientists of today. The concept has, however, proved difficult to define and operationalize. A common belief is that it is a relation between a user's request for information and the documents the system retrieves in response. Documents might be considered retrieval-worthy because they: 1) constitute evidence for or against a claim; 2) answer a question; or 3) simply match the request in topic. In practice, literature-based answering makes use of term-matching technology, and most evaluation of relevance has involved topical match as the primary criterion for acceptability. The standard table for evaluating the relation of retrieved documents to a request has only the values "relevant" and "not relevant," yet many analysts hold that relevance admits of degrees. Moreover, many analysts hold that users decide relevance on more dimensions than topical match. Who then can validly judge relevance? Is it only the person who put the request and who can evaluate a document on multiple dimensions? Or can surrogate judges perform this function on the basis of topicality? Such questions arise in a longstanding debate on whether relevance is objective or subjective. One proposal has been to reframe the debate in terms of relevance theory (imported from linguistic pragmatics), which makes relevance increase with a document's valuable cognitive effects and decrease with the effort needed to process it. This notion allows degree of topical match to contribute to relevance but allows other considerations to contribute as well. Since both cognitive effects and processing effort will differ across users, they can be taken as subjective, but users' decisions can also be objectively evaluated if the logic behind them is made explicit. Relevance seems problematical because the considerations that lead people to accept documents in literature searches, or to use them later in contexts such as citation, are seldom fully revealed. Once they are revealed, relevance may be seen as not only multidimensional and dynamic, but also understandable.
  6. Lin, X.; White, H.D.; Buzydlowski, J.: Real-time author co-citation mapping for online searching (2003) 0.01
    0.009134606 = product of:
      0.027403818 = sum of:
        0.027403818 = product of:
          0.054807637 = sum of:
            0.054807637 = weight(_text_:database in 1080) [ClassicSimilarity], result of:
              0.054807637 = score(doc=1080,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.26797873 = fieldWeight in 1080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1080)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Author searching is traditionally based on the matching of name strings. Special characteristics of authors, both as personal names and as subject indicators, are not considered. This makes it difficult to identify a set of related authors or to group authors by subject in retrieval systems. In this paper, we describe the design and implementation of a prototype visualization system to enhance author searching. The system, called AuthorLink, is based on author co-citation analysis and visualization mapping algorithms such as Kohonen's feature maps and Pathfinder networks. AuthorLink produces interactive author maps in real time from a database of 1.26 million records supplied by the Institute for Scientific Information. The maps show subject groupings and more fine-grained intellectual connections among authors. Through the interactive interface the user can take advantage of such information to refine queries and retrieve documents through point-and-click manipulation of the authors' names.
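
    As a sketch of the co-citation counting a system like AuthorLink rests on, the snippet below tallies author pairs over a few invented records; the real system draws on the 1.26 million ISI records mentioned above:

    from collections import Counter
    from itertools import combinations

    records = [  # cited authors per record (toy data, not ISI's)
        ["White", "Bates", "Wilson"],
        ["White", "Lin"],
        ["White", "Bates"],
    ]
    cocitations = Counter()
    for cited in records:
        for pair in combinations(sorted(set(cited)), 2):
            cocitations[pair] += 1
    print(cocitations.most_common(3))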