Search (52 results, page 1 of 3)

  • theme_ss:"Citation indexing"
  1. Chan, H.C.; Kim, H.-W.; Tan, W.C.: Information systems citation patterns from International Conference on Information Systems articles (2006) 0.05
    0.04641738 = product of:
      0.13925214 = sum of:
        0.13925214 = sum of:
          0.09762162 = weight(_text_:conference in 201) [ClassicSimilarity], result of:
            0.09762162 = score(doc=201,freq=8.0), product of:
              0.19418365 = queryWeight, product of:
                3.7918143 = idf(docFreq=2710, maxDocs=44218)
                0.051211275 = queryNorm
              0.50272834 = fieldWeight in 201, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.7918143 = idf(docFreq=2710, maxDocs=44218)
                0.046875 = fieldNorm(doc=201)
          0.041630525 = weight(_text_:22 in 201) [ClassicSimilarity], result of:
            0.041630525 = score(doc=201,freq=2.0), product of:
              0.17933317 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051211275 = queryNorm
              0.23214069 = fieldWeight in 201, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=201)
      0.33333334 = coord(1/3)
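    The explain tree above can be recomputed by hand. The following minimal Python sketch (assuming Lucene's standard ClassicSimilarity formula; the function name and layout are illustrative, not part of the search engine) reproduces the 0.04641738 shown for result 1 from the tf, idf, queryNorm and fieldNorm values listed:

      from math import sqrt

      def classic_similarity(freq, idf, query_norm, field_norm):
          # one term's contribution: queryWeight * fieldWeight, where
          # queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm
          query_weight = idf * query_norm
          field_weight = sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      QUERY_NORM = 0.051211275   # queryNorm, constant across the whole query
      FIELD_NORM = 0.046875      # fieldNorm (length norm) of doc 201

      conference = classic_similarity(8.0, 3.7918143, QUERY_NORM, FIELD_NORM)  # 0.09762162
      twenty_two = classic_similarity(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)  # 0.041630525

      # only one of three query clauses matched, so the sum is scaled by coord(1/3)
      score = (conference + twenty_two) * (1.0 / 3.0)
      print(round(score, 8))     # ~0.04641738, the value shown for result 1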
    
    Abstract
    Research patterns could enhance understanding of the Information Systems (IS) field. Citation analysis is the methodology commonly used to determine such research patterns. In this study, the citation methodology is applied to one of the top-ranked Information Systems conferences - International Conference on Information Systems (ICIS). Information is extracted from papers in the proceedings of ICIS 2000 to 2002. A total of 145 base articles and 4,226 citations are used. Research patterns are obtained using total citations, citations per journal or conference, and overlapping citations. We then provide the citation ranking of journals and conferences. We also examine the difference between the citation ranking in this study and the ranking of IS journals and IS conferences in other studies. Based on the comparison, we confirm that IS research is a multidisciplinary research area. We also identify the most cited papers and authors in the IS research area, and the organizations most active in producing papers in the top-rated IS conference. We discuss the findings and implications of the study.
    Date
    3. 1.2007 17:22:03
  2. Gabel, J.: Improving information retrieval of subjects through citation-analysis : a study (2006) 0.04
    0.037964225 = product of:
      0.056946337 = sum of:
        0.036608502 = weight(_text_:retrieval in 225) [ClassicSimilarity], result of:
          0.036608502 = score(doc=225,freq=4.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.23632148 = fieldWeight in 225, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=225)
        0.020337837 = product of:
          0.040675674 = sum of:
            0.040675674 = weight(_text_:conference in 225) [ClassicSimilarity], result of:
              0.040675674 = score(doc=225,freq=2.0), product of:
                0.19418365 = queryWeight, product of:
                  3.7918143 = idf(docFreq=2710, maxDocs=44218)
                  0.051211275 = queryNorm
                0.20947012 = fieldWeight in 225, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7918143 = idf(docFreq=2710, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=225)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Citation-chasing is proposed as a method of discovering additional terms to enhance subject-search retrieval. Subjects attached to OCLC records for cited works are compared to those attached to the original citing sources. Citing sources were produced via a subject-list search in a library catalog using the LCSH "Language and languages-Origin." A subject search was employed to avoid subjectivity in choosing sources. References from the sources were searched in OCLC where applicable, and the subject headings were retrieved. The subjects were ranked by citation frequency and tiered into three groups in a Bradford-like distribution. Highly cited subjects were produced that had not been revealed through the original search. A difference in relative importance among the subjects was also revealed: broad extra-linguistic topics like evolution are more prominent than specific linguistic topics like phonology. There are exceptions, which appear somewhat predictable from the amount of imbalance in citation representation between the two sources. Citation leaders were also produced for authors and secondary-source titles.
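    The ranking-and-tiering step described in the abstract is easy to sketch. A rough Python illustration, with invented subject headings standing in for those harvested from the OCLC records of cited works:

      from collections import Counter

      # hypothetical subject headings harvested from the OCLC records of cited works
      harvested = [
          "Language and languages--Origin", "Human evolution", "Human evolution",
          "Phonology", "Psycholinguistics", "Human evolution", "Brain--Evolution",
          "Psycholinguistics", "Grammar, Comparative and general", "Human evolution",
      ]

      # rank subjects by citation frequency
      ranked = Counter(harvested).most_common()

      # tier into three groups of roughly equal citation totals (Bradford-like)
      total = sum(freq for _, freq in ranked)
      tiers, current, running = [[], [], []], 0, 0
      for subject, freq in ranked:
          tiers[current].append((subject, freq))
          running += freq
          if running >= (current + 1) * total / 3 and current < 2:
              current += 1

      for i, tier in enumerate(tiers, 1):
          print(f"Tier {i}: {tier}")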
    Source
    Knowledge organization for a global learning society: Proceedings of the 9th International ISKO Conference, 4-7 July 2006, Vienna, Austria. Eds.: G. Budin, C. Swertz and K. Mitgutsch
  3. Mendez, A.: Some considerations on the retrieval of literature based on citations (1978) 0.03
    0.027611865 = product of:
      0.08283559 = sum of:
        0.08283559 = weight(_text_:retrieval in 778) [ClassicSimilarity], result of:
          0.08283559 = score(doc=778,freq=2.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.5347345 = fieldWeight in 778, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.125 = fieldNorm(doc=778)
      0.33333334 = coord(1/3)
    
  4. MacCain, K.W.: Descriptor and citation retrieval in the medical behavioral sciences literature : retrieval overlaps and novelty distribution (1989) 0.03
    0.025363116 = product of:
      0.076089345 = sum of:
        0.076089345 = weight(_text_:retrieval in 2290) [ClassicSimilarity], result of:
          0.076089345 = score(doc=2290,freq=12.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.49118498 = fieldWeight in 2290, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2290)
      0.33333334 = coord(1/3)
    
    Abstract
    Search results for nine topics in the medical behavioral sciences are reanalyzed to compare the overall performance of descriptor and citation search strategies in identifying relevant and novel documents. Overlap percentages between an aggregate "descriptor-based" database (MEDLINE, EXCERPTA MEDICA, PSYCINFO) and an aggregate "citation-based" database (SCISEARCH, SOCIAL SCISEARCH) ranged from 1% to 26%, with a median overlap of 8% of relevant retrievals found using both search strategies. For seven topics in which both descriptor and citation strategies produced reasonably substantial retrievals, two patterns of search performance and novelty distribution were observed: (1) where descriptor and citation retrieval showed little overlap, novelty retrieval percentages differed by 17-23% between the two strategies; (2) topics with a relatively high percentage of retrieval overlap showed little difference (1-4%) in descriptor and citation novelty retrieval percentages. These results reflect the varying partial congruence of two literature networks and represent two different types of subject relevance.
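    Treating each strategy's relevant retrievals as a set of record identifiers, overlap and novelty figures of this kind reduce to simple set arithmetic. A minimal sketch with invented identifiers (the paper's exact definition of novelty may differ; here it is read as the share of one strategy's relevant hits that the other strategy missed):

      # hypothetical relevant retrievals for one topic
      descriptor_hits = {"d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8"}   # MEDLINE etc.
      citation_hits = {"d7", "d8", "d9", "d10", "d11"}                     # SCISEARCH etc.

      union = descriptor_hits | citation_hits
      overlap = descriptor_hits & citation_hits

      # overlap as a percentage of all relevant items found by either strategy
      overlap_pct = 100 * len(overlap) / len(union)

      # novelty read here as: share of a strategy's hits missed by the other strategy
      novelty_descriptor = 100 * len(descriptor_hits - citation_hits) / len(descriptor_hits)
      novelty_citation = 100 * len(citation_hits - descriptor_hits) / len(citation_hits)

      print(f"overlap {overlap_pct:.0f}%, descriptor novelty {novelty_descriptor:.0f}%, "
            f"citation novelty {novelty_citation:.0f}%")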
  5. East, J.W.: Citations to conference papers and the implications for cataloging (1985) 0.03
    0.025110802 = product of:
      0.0753324 = sum of:
        0.0753324 = product of:
          0.1506648 = sum of:
            0.1506648 = weight(_text_:conference in 7928) [ClassicSimilarity], result of:
              0.1506648 = score(doc=7928,freq=14.0), product of:
                0.19418365 = queryWeight, product of:
                  3.7918143 = idf(docFreq=2710, maxDocs=44218)
                  0.051211275 = queryNorm
                0.77588826 = fieldWeight in 7928, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.7918143 = idf(docFreq=2710, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7928)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Problems in the cataloging of conference proceedings, and their treatment by some of the major cataloging codes, are briefly reviewed. To determine how conference papers are cited in the literature, and thus how researchers are likely to be seeking them in the catalog, fifty conference papers in the field of chemistry, delivered in 1970 and subsequently published, were searched in the Science Citation Index covering a ten-year period. The citations to the papers were examined to ascertain the implications of current citation practices for the cataloging of conference proceedings. The results suggest that conference proceedings are customarily cited like any other work of collective authorship and that the conference name is of little value as an access point.
  6. Ahlgren, P.; Jarneving, B.; Rousseau, R.: Requirements for a cocitation similarity measure, with special reference to Pearson's correlation coefficient (2003) 0.02
    0.02305716 = product of:
      0.03458574 = sum of:
        0.020708898 = weight(_text_:retrieval in 5171) [ClassicSimilarity], result of:
          0.020708898 = score(doc=5171,freq=2.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.13368362 = fieldWeight in 5171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=5171)
        0.013876842 = product of:
          0.027753685 = sum of:
            0.027753685 = weight(_text_:22 in 5171) [ClassicSimilarity], result of:
              0.027753685 = score(doc=5171,freq=2.0), product of:
                0.17933317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051211275 = queryNorm
                0.15476047 = fieldWeight in 5171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5171)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Ahlgren, Jarneving, and Rousseau review accepted procedures for author co-citation analysis, first pointing out that since in the raw data matrix the row and column values are identical, i.e. the co-citation count of two authors, there is no clear choice for diagonal values. They suggest the number of times an author has been co-cited with himself, excluding self-citation, rather than the common treatment as zeros or as missing values. When the matrix is converted to a similarity matrix, the normal procedure is to create a matrix of Pearson's r coefficients between data vectors. Ranking by r, by co-citation frequency, and by intuition can easily yield three different orders. It would seem necessary that adding zeros to the matrix should not affect the value or the relative order of similarity measures, but it is shown that this is not the case with Pearson's r. Using 913 bibliographic descriptions from the Web of Science of articles from JASIS and Scientometrics, authors' names were extracted and edited, and 12 information retrieval authors and 12 bibliometric authors, each from the top 100 most cited, were selected. Co-citation and r-value (diagonal elements treated as missing) matrices were constructed, and then reconstructed in expanded form. Adding zeros can both change the r value and the ordering of the authors based upon that value. A chi-squared distance measure would not violate these requirements, nor would the cosine coefficient. It is also argued that co-citation data are ordinal data, since there is no assurance of an absolute zero number of co-citations, and thus Pearson is not appropriate. The number of ties in co-citation data makes the use of the Spearman rank order coefficient problematic.
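    The central claim, that padding two authors' co-citation profiles with additional zeros can change Pearson's r while leaving the cosine coefficient untouched, is easy to verify numerically. A minimal sketch with invented co-citation counts:

      from math import sqrt

      def pearson(x, y):
          n = len(x)
          mx, my = sum(x) / n, sum(y) / n
          cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
          sx = sqrt(sum((a - mx) ** 2 for a in x))
          sy = sqrt(sum((b - my) ** 2 for b in y))
          return cov / (sx * sy)

      def cosine(x, y):
          dot = sum(a * b for a, b in zip(x, y))
          return dot / (sqrt(sum(a * a for a in x)) * sqrt(sum(b * b for b in y)))

      # invented co-citation profiles of two authors against the same third authors
      a = [5, 3, 0, 2]
      b = [4, 1, 1, 3]

      # the same profiles after appending authors co-cited with neither of them
      a_padded = a + [0, 0, 0]
      b_padded = b + [0, 0, 0]

      print(pearson(a, b), pearson(a_padded, b_padded))   # the two r values differ
      print(cosine(a, b), cosine(a_padded, b_padded))     # the cosine values are identical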
    Date
    9. 7.2006 10:22:35
  7. Yoon, L.L.: ¬The performance of cited references as an approach to information retrieval (1994) 0.02
    0.020923503 = product of:
      0.06277051 = sum of:
        0.06277051 = weight(_text_:retrieval in 8219) [ClassicSimilarity], result of:
          0.06277051 = score(doc=8219,freq=6.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.40520695 = fieldWeight in 8219, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8219)
      0.33333334 = coord(1/3)
    
    Abstract
    Explores the relationship between the number of cited references used in a citation search and retrieval effectiveness. Focuses on analysing, in terms of information retrieval effectiveness, the overlap among posting sets retrieved by various combinations of cited references. Findings from three case studies show that the more cited references used for a citation search, the better the performance, in terms of retrieving more relevant documents, up to a point of diminishing returns. The overall level of overlap among relevant document sets was found to be low. If only some of the cited references among many candidates are used for a citation search, a significant proportion of relevant documents may be missed. The characteristics of the cited references showed that some variables are good indicators for predicting relevance to a given question.
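    The diminishing-returns pattern can be made concrete with a toy calculation: take the union of the posting sets retrieved by successive cited references and track recall as each reference is added. All identifiers below are invented:

      # hypothetical posting sets retrieved by a citation search on each seed cited reference
      postings = [
          {"d1", "d2", "d3"},   # seed reference 1
          {"d2", "d3", "d4"},   # seed reference 2
          {"d3", "d4", "d5"},   # seed reference 3
          {"d1", "d4"},         # seed reference 4
      ]
      relevant = {"d1", "d2", "d4", "d5", "d6"}   # judged relevant for the query

      retrieved = set()
      for i, posting in enumerate(postings, 1):
          retrieved |= posting
          recall = len(retrieved & relevant) / len(relevant)
          print(f"after {i} seed reference(s): recall = {recall:.2f}")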
  8. Larsen, B.: Exploiting citation overlaps for information retrieval : generating a boomerang effect from the network of scientific papers (2002) 0.02
    0.020708898 = product of:
      0.062126692 = sum of:
        0.062126692 = weight(_text_:retrieval in 4175) [ClassicSimilarity], result of:
          0.062126692 = score(doc=4175,freq=2.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.40105087 = fieldWeight in 4175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=4175)
      0.33333334 = coord(1/3)
    
  9. He, Y.; Hui, S.C.: PubSearch : a Web citation-based retrieval system (2001) 0.02
    0.020708898 = product of:
      0.062126692 = sum of:
        0.062126692 = weight(_text_:retrieval in 4806) [ClassicSimilarity], result of:
          0.062126692 = score(doc=4806,freq=8.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.40105087 = fieldWeight in 4806, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4806)
      0.33333334 = coord(1/3)
    
    Abstract
    Many scientific publications are now available on the World Wide Web for researchers to share research findings. However, they tend to be poorly organised, making the search for relevant publications difficult and time-consuming. Most existing search engines are ineffective in searching these publications, as they do not index Web publications that normally appear in PDF (portable document format) or PostScript formats. Proposes a Web citation-based retrieval system, known as PubSearch, for the retrieval of Web publications. PubSearch indexes Web publications based on citation indices and stores them in a Web Citation Database. The Web Citation Database is then mined to support publication retrieval. Apart from supporting the traditional cited reference search, PubSearch also provides document clustering search and author clustering search. Document clustering groups related publications into clusters, while author clustering categorizes authors into different research areas based on author co-citation analysis.
  10. Pao, M.L.: Term and citation retrieval : a field study (1993) 0.02
    0.019524537 = product of:
      0.058573607 = sum of:
        0.058573607 = weight(_text_:retrieval in 3741) [ClassicSimilarity], result of:
          0.058573607 = score(doc=3741,freq=4.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.37811437 = fieldWeight in 3741, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3741)
      0.33333334 = coord(1/3)
    
    Abstract
    Investigates the relative efficacy of searching by terms and by citations in searches collected in health science libraries. In pilot and field studies, the odds that overlap items retrieved would be relevant or partially relevant were greatly improved. In the field setting, citation searching was able to add an average of 24% recall to traditional subject retrieval. Attempts to identify distinguishing characteristics in queries which might benefit most from additional citation searches proved inconclusive. Online access to citation databases has been hampered by their high cost.
  11. Shaw, W.M.: Subject and citation indexing : pt.2: the optimal, cluster-based retrieval performance of composite representations (1991) 0.02
    0.019524537 = product of:
      0.058573607 = sum of:
        0.058573607 = weight(_text_:retrieval in 4842) [ClassicSimilarity], result of:
          0.058573607 = score(doc=4842,freq=4.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.37811437 = fieldWeight in 4842, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=4842)
      0.33333334 = coord(1/3)
    
    Abstract
    Continuation of pt.1: Experimental retrieval results are presented as a function of the exhaustivity and similarity of the composite representations and reveal consistent patterns from which optimal performance levels can be identified. The optimal performance values provide an assessment of the absolute capacity of each composite representation to associate documents relevant to different queries in single-link hierarchies. The effectiveness of the exhaustive representation composed of references and citations is materially superior to the effectiveness of exhaustive composite representations that include subject descriptions.
  12. Nicolaisen, J.: Citation analysis (2007) 0.02
    0.018502457 = product of:
      0.05550737 = sum of:
        0.05550737 = product of:
          0.11101474 = sum of:
            0.11101474 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.11101474 = score(doc=6091,freq=2.0), product of:
                0.17933317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051211275 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13. 7.2008 19:53:22
  13. Døsen, K.: One more reference on self-reference (1992) 0.02
    0.018502457 = product of:
      0.05550737 = sum of:
        0.05550737 = product of:
          0.11101474 = sum of:
            0.11101474 = weight(_text_:22 in 4604) [ClassicSimilarity], result of:
              0.11101474 = score(doc=4604,freq=2.0), product of:
                0.17933317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051211275 = queryNorm
                0.61904186 = fieldWeight in 4604, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4604)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    7. 2.2005 14:10:22
  14. Araújo, P.C. de; Gutierres Castanha, R.C.; Hjoerland, B.: Citation indexing and indexes (2021) 0.02
    0.017934434 = product of:
      0.0538033 = sum of:
        0.0538033 = weight(_text_:retrieval in 444) [ClassicSimilarity], result of:
          0.0538033 = score(doc=444,freq=6.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.34732026 = fieldWeight in 444, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=444)
      0.33333334 = coord(1/3)
    
    Abstract
    A citation index is a bibliographic database that provides citation links between documents. The first modern citation index was suggested by the researcher Eugene Garfield in 1955 and created by him in 1964, and it represents an important innovation in knowledge organization and information retrieval. This article describes citation indexes in general, considering the modern citation indexes, including Web of Science, Scopus, Google Scholar, Microsoft Academic, Crossref, Dimensions, and some special citation indexes and predecessors to the modern citation index like Shepard's Citations. We present comparative studies of the major ones and survey theoretical problems related to the role of citation indexes as subject access points (SAP), recognizing the implications for knowledge organization and information retrieval. Finally, studies on citation behavior are presented, and the influence of citation indexes on knowledge organization, information retrieval and the scientific information ecosystem is recognized.
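    As a rough illustration of the core idea, and not of any particular product's internals, a citation index can be modelled as an inversion of reference lists: from each document's outgoing references, build a mapping from a cited document to the documents that cite it. A minimal sketch with invented identifiers:

      from collections import defaultdict

      # hypothetical citing relationships: paper -> papers listed in its reference list
      references = {
          "garfield1955": [],
          "small1973": ["garfield1955"],
          "white1981": ["garfield1955", "small1973"],
      }

      # invert the reference lists to obtain the citation index proper
      cited_by = defaultdict(list)
      for citing, cited_list in references.items():
          for cited in cited_list:
              cited_by[cited].append(citing)

      # a cited-reference search: which documents cite garfield1955?
      print(cited_by["garfield1955"])   # ['small1973', 'white1981']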
  15. Garfield, E.: From citation indexes to informetrics : is the tail now wagging the dog? (1998) 0.02
    0.017083969 = product of:
      0.051251903 = sum of:
        0.051251903 = weight(_text_:retrieval in 2809) [ClassicSimilarity], result of:
          0.051251903 = score(doc=2809,freq=4.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.33085006 = fieldWeight in 2809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2809)
      0.33333334 = coord(1/3)
    
    Abstract
    Provides a synoptic review and history of citation indexes and their evolution into research evaluation tools, including a discussion of the use of bibliometric data for evaluating US institutions (academic departments) by the National Research Council (NRC). Covers the origin and uses of periodical impact factors, validation studies of citation analysis, information retrieval and dissemination (current awareness), citation consciousness, historiography and science mapping, Citation Classics, and the history of contemporary science. Illustrates the retrieval of information by cited reference searching, especially as it applies to avoiding duplicated research. Discusses the 15-year cumulative impacts of periodicals and the percentage of uncitedness, the emergence of scientometrics, old boy networks, and citation frequency distributions. Concludes with observations about the future of citation indexing.
  16. Cawkell, T.: Checking research progress on 'image retrieval by shape matching' using the Web of Science (1998) 0.02
    0.017083969 = product of:
      0.051251903 = sum of:
        0.051251903 = weight(_text_:retrieval in 3571) [ClassicSimilarity], result of:
          0.051251903 = score(doc=3571,freq=4.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.33085006 = fieldWeight in 3571, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3571)
      0.33333334 = coord(1/3)
    
    Abstract
    Discusses the Web of Science database recently introduced by ISI, which is compiled from 8,000 journals covered in the SCI, SSCI and AHCI. Briefly compares the database with the Citation Indexes as provided by the BIDS service at the University of Bath. Explores the characteristics and usefulness of the WoS through a search of it for articles on the topic of image retrieval by shape matching. Suggests that the selection of articles of interest is much easier and far quicker using the WoS than with other methods of conducting a search using ISI's data.
  17. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.016354015 = product of:
      0.049062043 = sum of:
        0.049062043 = product of:
          0.09812409 = sum of:
            0.09812409 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.09812409 = score(doc=3925,freq=4.0), product of:
                0.17933317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051211275 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 15:22:28
  18. Pao, M.L.; Worthen, D.B.: Retrieval effectiveness by semantic and citation searching (1989) 0.01
    0.014643403 = product of:
      0.043930206 = sum of:
        0.043930206 = weight(_text_:retrieval in 2288) [ClassicSimilarity], result of:
          0.043930206 = score(doc=2288,freq=4.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.2835858 = fieldWeight in 2288, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2288)
      0.33333334 = coord(1/3)
    
    Abstract
    A pilot study on the relative retrieval effectiveness of semantic relevance (by terms) and pragmatic relevance (by citations) is reported. A single database has been constructed to provide access by both descriptors and cited references. For each question from a set of queries, two equivalent sets were retrieved. All retrieved items were evaluated by subject experts for relevance to their originating queries. We conclude that there are essentially two types of relevance at work, resulting in two different sets of documents. Using both search methods to create a union set is likely to increase recall. The few items retrieved by the intersection of the two methods tend to result in higher precision. Suggestions are made to develop a front-end system to display the overlapping items for higher precision and to manipulate and rank the union sets retrieved by the two search modes for improved output.
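    The union/intersection argument can be illustrated with set arithmetic over hypothetical result sets and relevance judgments:

      # hypothetical retrievals for one query, plus the expert relevance judgments
      semantic = {"d1", "d2", "d3", "d4", "d5"}    # retrieved by descriptors
      pragmatic = {"d4", "d5", "d6", "d7"}         # retrieved by cited references
      relevant = {"d1", "d4", "d5", "d6", "d8"}    # judged relevant (d8 never retrieved)

      def recall(retrieved):
          return len(retrieved & relevant) / len(relevant)

      def precision(retrieved):
          return len(retrieved & relevant) / len(retrieved)

      union, intersection = semantic | pragmatic, semantic & pragmatic

      for name, result in [("semantic", semantic), ("pragmatic", pragmatic),
                           ("union", union), ("intersection", intersection)]:
          print(f"{name:13s} recall={recall(result):.2f} precision={precision(result):.2f}")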
  19. Brooks, T.A.: How good are the best papers of JASIS? (2000) 0.01
    0.014643403 = product of:
      0.043930206 = sum of:
        0.043930206 = weight(_text_:retrieval in 4593) [ClassicSimilarity], result of:
          0.043930206 = score(doc=4593,freq=4.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.2835858 = fieldWeight in 4593, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4593)
      0.33333334 = coord(1/3)
    
    Abstract
    A citation analysis examined the 28 best articles published in JASIS from 1969-1996. Best articles tend to be single-authored works twice as long as the average article published in JASIS. They are cited and self-cited much more often than the average article. The greatest source of references made to the best articles is JASIS itself. The top 5 best papers focus largely on information retrieval and online searching.
    Content
    Top by number of citations: (1) Saracevic, T. et al.: A study of information seeking and retrieving I-III (1988); (2) Bates, M.: Information search tactics (1979); (3) Cooper, W.S.: On selecting a measure of retrieval effectiveness (1973); (4) Marcus, R.S.: An experimental comparison of the effectiveness of computers and humans as search intermediaries (1983); (4) Fidel, R.: Online searching styles (1984)
  20. Tho, Q.T.; Hui, S.C.; Fong, A.C.M.: ¬A citation-based document retrieval system for finding research expertise (2007) 0.01
    0.014643403 = product of:
      0.043930206 = sum of:
        0.043930206 = weight(_text_:retrieval in 956) [ClassicSimilarity], result of:
          0.043930206 = score(doc=956,freq=4.0), product of:
            0.15490976 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.051211275 = queryNorm
            0.2835858 = fieldWeight in 956, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=956)
      0.33333334 = coord(1/3)
    
    Abstract
    Current citation-based document retrieval systems generally offer only limited search facilities, such as author search. In order to facilitate more advanced search functions, we have developed a significantly improved system that employs two novel techniques: Context-based Cluster Analysis (CCA) and Context-based Ontology Generation frAmework (COGA). CCA aims to extract relevant information from clusters originally obtained from disparate clustering methods by building relationships between them. The built relationships are then represented as a formal context using the Formal Concept Analysis (FCA) technique. COGA aims to generate an ontology from the cluster relationships built by CCA. By combining these two techniques, we are able to perform ontology learning from a citation database using clustering results. We have implemented the improved system and demonstrated its use for finding research domain expertise. We have also conducted a performance evaluation of the system, and the results are encouraging.

Languages

  • e 46
  • d 6

Types

  • a 51
  • el 4
  • m 1