Search (192 results, page 1 of 10)

  • theme_ss:"Citation indexing"
  1. Tho, Q.T.; Hui, S.C.; Fong, A.C.M.: A citation-based document retrieval system for finding research expertise (2007) 0.01
    0.012943049 = product of:
      0.060400896 = sum of:
        0.032266766 = weight(_text_:system in 956) [ClassicSimilarity], result of:
          0.032266766 = score(doc=956,freq=8.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.41757566 = fieldWeight in 956, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=956)
        0.0070881573 = weight(_text_:information in 956) [ClassicSimilarity], result of:
          0.0070881573 = score(doc=956,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.16457605 = fieldWeight in 956, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=956)
        0.021045974 = weight(_text_:retrieval in 956) [ClassicSimilarity], result of:
          0.021045974 = score(doc=956,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.2835858 = fieldWeight in 956, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=956)
      0.21428572 = coord(3/14)
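
    The score breakdowns shown with each entry are Lucene ClassicSimilarity (TF-IDF) explanations: every matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(termFreq) x idf x fieldNorm; the term contributions are summed and scaled by the coordination factor (matched clauses / total clauses). A minimal Python sketch, using only the figures reported above for this first entry, reproduces its document score:

      import math

      # Figures copied from the score explanation above (doc 956); the three
      # matched query terms are "system", "information" and "retrieval".
      query_norm = 0.02453417
      field_norm = 0.046875
      coord = 3 / 14                  # 3 of 14 query clauses matched

      terms = {
          # term: (termFreq in the document, idf as reported)
          "system":      (8.0, 3.1495528),
          "information": (4.0, 1.7554779),
          "retrieval":   (4.0, 3.024915),
      }

      total = 0.0
      for term, (freq, idf) in terms.items():
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
          query_weight = idf * query_norm       # e.g. 0.07727166 for "system"
          field_weight = tf * idf * field_norm  # e.g. 0.41757566 for "system"
          total += query_weight * field_weight

      # ~0.0129430, matching the reported 0.012943049 up to float rounding
      print(total * coord)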
    
    Abstract
    Current citation-based document retrieval systems generally offer only limited search facilities, such as author search. In order to facilitate more advanced search functions, we have developed a significantly improved system that employs two novel techniques: Context-based Cluster Analysis (CCA) and the Context-based Ontology Generation frAmework (COGA). CCA aims to extract relevant information from clusters originally obtained from disparate clustering methods by building relationships between them. The resulting relationships are then represented as a formal context using the Formal Concept Analysis (FCA) technique. COGA aims to generate an ontology from the cluster relationships built by CCA. By combining these two techniques, we are able to perform ontology learning from a citation database using clustering results. We have implemented the improved system and have demonstrated its use for finding research domain expertise. We have also conducted a performance evaluation of the system, and the results are encouraging.
    Source
    Information processing and management. 43(2007) no.1, S.248-264
  2. Lin, X.; White, H.D.; Buzydlowski, J.: Real-time author co-citation mapping for online searching (2003) 0.01
    0.009938354 = product of:
      0.046378985 = sum of:
        0.022816047 = weight(_text_:system in 1080) [ClassicSimilarity], result of:
          0.022816047 = score(doc=1080,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.29527056 = fieldWeight in 1080, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=1080)
        0.008681185 = weight(_text_:information in 1080) [ClassicSimilarity], result of:
          0.008681185 = score(doc=1080,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.20156369 = fieldWeight in 1080, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1080)
        0.014881751 = weight(_text_:retrieval in 1080) [ClassicSimilarity], result of:
          0.014881751 = score(doc=1080,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.20052543 = fieldWeight in 1080, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1080)
      0.21428572 = coord(3/14)
    
    Abstract
    Author searching is traditionally based on the matching of name strings. The special characteristics of author names, as both personal names and subject indicators, are not considered. This makes it difficult to identify a set of related authors or to group authors by subject in retrieval systems. In this paper, we describe the design and implementation of a prototype visualization system to enhance author searching. The system, called AuthorLink, is based on author co-citation analysis and visualization mapping algorithms such as Kohonen's feature maps and Pathfinder networks. AuthorLink produces interactive author maps in real time from a database of 1.26 million records supplied by the Institute for Scientific Information. The maps show subject groupings and more fine-grained intellectual connections among authors. Through the interactive interface the user can take advantage of such information to refine queries and retrieve documents through point-and-click manipulation of the authors' names.
    Source
    Information processing and management. 39(2003) no.5, S.689-706
  3. Pao, M.L.; Worthen, D.B.: Retrieval effectiveness by semantic and citation searching (1989) 0.01
    0.0090410225 = product of:
      0.04219144 = sum of:
        0.016133383 = weight(_text_:system in 2288) [ClassicSimilarity], result of:
          0.016133383 = score(doc=2288,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.20878783 = fieldWeight in 2288, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2288)
        0.0050120843 = weight(_text_:information in 2288) [ClassicSimilarity], result of:
          0.0050120843 = score(doc=2288,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.116372846 = fieldWeight in 2288, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2288)
        0.021045974 = weight(_text_:retrieval in 2288) [ClassicSimilarity], result of:
          0.021045974 = score(doc=2288,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.2835858 = fieldWeight in 2288, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2288)
      0.21428572 = coord(3/14)
    
    Abstract
    A pilot study on the relative retrieval effectiveness of semantic relevance (by terms) and pragmatic relevance (by citations) is reported. A single database has been constructed to provide access by both descriptors and cited references. For each question from a set of queries, two equivalent sets were retrieved. All retrieved items were evaluated by subject experts for relevance to their originating queries. We conclude that there are essentially two types of relevance at work, resulting in two different sets of documents. Using both search methods to create a union set is likely to increase recall. The few items retrieved by the intersection of the two methods tend to yield higher precision. Suggestions are made to develop a front-end system to display the overlapping items for higher precision and to manipulate and rank the union of the sets retrieved by the two search modes for improved output.
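
    The recall/precision trade-off described here (union of the two result sets for recall, intersection for precision) can be checked with a small sketch; the document identifiers and relevance judgements below are invented purely for illustration:

      # Hypothetical relevance judgements and the two retrieved sets.
      relevant     = {1, 2, 3, 4, 5, 6, 7, 8}
      by_terms     = {1, 2, 3, 9, 10}      # semantic relevance (descriptors)
      by_citations = {3, 4, 5, 11}         # pragmatic relevance (citations)

      def recall(retrieved):
          return len(retrieved & relevant) / len(relevant)

      def precision(retrieved):
          return len(retrieved & relevant) / len(retrieved)

      union = by_terms | by_citations      # broader set -> higher recall
      inter = by_terms & by_citations      # overlap set -> higher precision

      for name, s in [("terms", by_terms), ("citations", by_citations),
                      ("union", union), ("intersection", inter)]:
          print(f"{name:12s} recall={recall(s):.2f} precision={precision(s):.2f}")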
    Source
    Journal of the American Society for Information Science. 40(1989), S.226-235
  4. Mendez, A.: Some considerations on the retrieval of literature based on citations (1978) 0.01
    0.0075786044 = product of:
      0.053050227 = sum of:
        0.013365558 = weight(_text_:information in 778) [ClassicSimilarity], result of:
          0.013365558 = score(doc=778,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.3103276 = fieldWeight in 778, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.125 = fieldNorm(doc=778)
        0.03968467 = weight(_text_:retrieval in 778) [ClassicSimilarity], result of:
          0.03968467 = score(doc=778,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.5347345 = fieldWeight in 778, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.125 = fieldNorm(doc=778)
      0.14285715 = coord(2/14)
    
    Source
    Information scientist. 12(1978), S.67-71
  5. He, Y.; Hui, S.C.: PubSearch : a Web citation-based retrieval system (2001) 0.01
    0.0075113643 = product of:
      0.05257955 = sum of:
        0.022816047 = weight(_text_:system in 4806) [ClassicSimilarity], result of:
          0.022816047 = score(doc=4806,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.29527056 = fieldWeight in 4806, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=4806)
        0.029763501 = weight(_text_:retrieval in 4806) [ClassicSimilarity], result of:
          0.029763501 = score(doc=4806,freq=8.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.40105087 = fieldWeight in 4806, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4806)
      0.14285715 = coord(2/14)
    
    Abstract
    Many scientific publications are now available on the World Wide Web for researchers to share research findings. However, they tend to be poorly organised, making the search of relevant publications difficult and time-consuming. Most existing search engines are ineffective in searching these publications, as they do not index Web publications that normally appear in PDF (portable document format) or PostScript formats. Proposes a Web citation-based retrieval system, known as PubSearch, for the retrieval of Web publications. PubSearch indexes Web publications based on citation indices and stores them into a Web Citation Database. The Web Citation Database is then mined to support publication retrieval. Apart from supporting the traditional cited reference search, PubSearch also provides document clustering search and author clustering search. Document clustering groups related publications into clusters, while author clustering categorizes authors into different research areas based on author co-citation analysis.
  6. Marion, L.S.; McCain, K.W.: Contrasting views of software engineering journals : author cocitation choices and indexer vocabulary assignments (2001) 0.01
    0.0068041594 = product of:
      0.031752743 = sum of:
        0.013444485 = weight(_text_:system in 5767) [ClassicSimilarity], result of:
          0.013444485 = score(doc=5767,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 5767, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5767)
        0.005906798 = weight(_text_:information in 5767) [ClassicSimilarity], result of:
          0.005906798 = score(doc=5767,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.13714671 = fieldWeight in 5767, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5767)
        0.012401459 = weight(_text_:retrieval in 5767) [ClassicSimilarity], result of:
          0.012401459 = score(doc=5767,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.16710453 = fieldWeight in 5767, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5767)
      0.21428572 = coord(3/14)
    
    Abstract
    We explore the intellectual subject structure and research themes in software engineering through the identification and analysis of a core journal literature. We examine this literature via two expert perspectives: that of the author, who identified significant work by citing it (journal cocitation analysis), and that of the professional indexer, who tags published work with subject terms to facilitate retrieval from a bibliographic database (subject profile analysis). The data sources are SCISEARCH (the on-line version of Science Citation Index), and INSPEC (a database covering software engineering, computer science, and information systems). We use data visualization tools (cluster analysis, multidimensional scaling, and PFNets) to show the "intellectual maps" of software engineering. Cocitation and subject profile analyses demonstrate that software engineering is a distinct interdisciplinary field, valuing practical and applied aspects, and spanning a subject continuum from "programming-in-the-small" to "programming-in-the-large." This continuum mirrors the software development life cycle by taking the operating system or major application from initial programming through project management, implementation, and maintenance. Object orientation is an integral but distinct subject area in software engineering. Key differences are the importance of management and programming: (1) cocitation analysis emphasizes project management and systems development; (2) programming techniques/languages are more influential in subject profiles; (3) cocitation profiles place object-oriented journals separately and centrally while the subject profile analysis locates these journals with the programming/languages group
    Source
    Journal of the American Society for Information Science and technology. 52(2001) no.4, S.297-308
  7. McCain, K.W.: Descriptor and citation retrieval in the medical behavioral sciences literature : retrieval overlaps and novelty distribution (1989) 0.01
    0.0059235403 = product of:
      0.04146478 = sum of:
        0.0050120843 = weight(_text_:information in 2290) [ClassicSimilarity], result of:
          0.0050120843 = score(doc=2290,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.116372846 = fieldWeight in 2290, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2290)
        0.036452696 = weight(_text_:retrieval in 2290) [ClassicSimilarity], result of:
          0.036452696 = score(doc=2290,freq=12.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.49118498 = fieldWeight in 2290, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2290)
      0.14285715 = coord(2/14)
    
    Abstract
    Search results for nine topics in the medical behavioral sciences are reanalyzed to compare the overall performance of descriptor and citation search strategies in identifying relevant and novel documents. Overlap percentages between an aggregate "descriptor-based" database (MEDLINE, EXCERPTA MEDICA, PSYCINFO) and an aggregate "citation-based" database (SCISEARCH, SOCIAL SCISEARCH) ranged from 1% to 26%, with a median overlap of 8% of relevant retrievals found using both search strategies. For seven topics in which both descriptor and citation strategies produced reasonably substantial retrievals, two patterns of search performance and novelty distribution were observed: (1) where descriptor and citation retrieval showed little overlap, novelty retrieval percentages differed by 17-23% between the two strategies; (2) topics with a relatively high percentage of retrieval overlap showed little difference (1-4%) between descriptor and citation novelty retrieval percentages. These results reflect the varying partial congruence of the two literature networks and represent two different types of subject relevance.
    Source
    Journal of the American Society for Information Science. 40(1989), S.110-114
  8. Yoon, L.L.: The performance of cited references as an approach to information retrieval (1994) 0.01
    0.005742856 = product of:
      0.040199988 = sum of:
        0.010128049 = weight(_text_:information in 8219) [ClassicSimilarity], result of:
          0.010128049 = score(doc=8219,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23515764 = fieldWeight in 8219, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8219)
        0.03007194 = weight(_text_:retrieval in 8219) [ClassicSimilarity], result of:
          0.03007194 = score(doc=8219,freq=6.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.40520695 = fieldWeight in 8219, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8219)
      0.14285715 = coord(2/14)
    
    Abstract
    Explores the relationship between the number of cited references used in a citation search and retrieval effectiveness. Focuses on analysing, in terms of information retrieval effectiveness, the overlap among posting sets retrieved by various combinations of cited references. Findings from three case studies show that the more cited references are used for a citation search, the better the performance, in terms of retrieving more relevant documents, up to a point of diminishing returns. The overall level of overlap among relevant document sets was found to be low. If only some of the cited references among many candidates are used for a citation search, a significant proportion of relevant documents may be missed. The characteristics of the cited references showed that some variables are good indicators for predicting relevance to a given question.
    Source
    Journal of the American Society for Information Science. 45(1994) no.5, S.287-299
  9. Nicolaisen, J.: Citation analysis (2007) 0.01
    0.0057082702 = product of:
      0.03995789 = sum of:
        0.013365558 = weight(_text_:information in 6091) [ClassicSimilarity], result of:
          0.013365558 = score(doc=6091,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.3103276 = fieldWeight in 6091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.125 = fieldNorm(doc=6091)
        0.026592331 = product of:
          0.053184662 = sum of:
            0.053184662 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.053184662 = score(doc=6091,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Date
    13. 7.2008 19:53:22
    Source
    Annual review of information science and technology. 41(2007), S.xxx-xxx
  10. Larsen, B.: Exploiting citation overlaps for information retrieval : generating a boomerang effect from the network of scientific papers (2002) 0.01
    0.005683953 = product of:
      0.03978767 = sum of:
        0.0100241685 = weight(_text_:information in 4175) [ClassicSimilarity], result of:
          0.0100241685 = score(doc=4175,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23274569 = fieldWeight in 4175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4175)
        0.029763501 = weight(_text_:retrieval in 4175) [ClassicSimilarity], result of:
          0.029763501 = score(doc=4175,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.40105087 = fieldWeight in 4175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=4175)
      0.14285715 = coord(2/14)
    
  11. Sidiropoulos, A.; Manolopoulos, Y.: A new perspective to automatically rank scientific conferences using digital libraries (2005) 0.01
    0.005622132 = product of:
      0.039354924 = sum of:
        0.032266766 = weight(_text_:system in 1011) [ClassicSimilarity], result of:
          0.032266766 = score(doc=1011,freq=8.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.41757566 = fieldWeight in 1011, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=1011)
        0.0070881573 = weight(_text_:information in 1011) [ClassicSimilarity], result of:
          0.0070881573 = score(doc=1011,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.16457605 = fieldWeight in 1011, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1011)
      0.14285715 = coord(2/14)
    
    Abstract
    Citation analysis is performed in order to evaluate authors and scientific collections, such as journals and conference proceedings. Currently, two major systems exist that perform citation analysis: the Science Citation Index (SCI) by the Institute for Scientific Information (ISI) and CiteSeer by the NEC Research Institute. The SCI, mostly a manual system up until recently, is based on the notion of the ISI Impact Factor, which has been used extensively for citation analysis purposes. On the other hand, the CiteSeer system is an automatically built digital library using agent technology, also based on the notion of the ISI Impact Factor. In this paper, we investigate new alternative notions besides the ISI Impact Factor, in order to provide a novel approach aiming at ranking scientific collections. Furthermore, we present a web-based system that has been built by extracting data from the Databases and Logic Programming (DBLP) website of the University of Trier. Our system, by using the new citation metrics, emerges as a useful tool for ranking scientific collections. In this respect, some initial observations are presented, e.g. on the ranking of conferences related to databases.
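
    For reference, the ISI Impact Factor that both ranking systems mentioned above build on follows the standard two-year definition; a minimal sketch with invented counts:

      # Two-year impact factor: citations received in year Y to items published
      # in years Y-1 and Y-2, divided by the number of citable items published
      # in those two years.
      def impact_factor(citations_to_prev_two_years, items_prev_two_years):
          return citations_to_prev_two_years / items_prev_two_years

      # Hypothetical venue: 420 citations in 2005 to papers from 2003-2004,
      # which published 150 + 130 citable items.
      print(round(impact_factor(420, 150 + 130), 3))   # -> 1.5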
    Source
    Information processing and management. 41(2005) no.2, S.289-312
  12. Scharnhorst, A.: Citation - networks, science landscapes and evolutionary strategies (1998) 0.01
    0.005492654 = product of:
      0.038448576 = sum of:
        0.032601144 = weight(_text_:system in 5126) [ClassicSimilarity], result of:
          0.032601144 = score(doc=5126,freq=6.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.42190298 = fieldWeight in 5126, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5126)
        0.0058474317 = weight(_text_:information in 5126) [ClassicSimilarity], result of:
          0.0058474317 = score(doc=5126,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.13576832 = fieldWeight in 5126, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5126)
      0.14285715 = coord(2/14)
    
    Abstract
    The construction of virtual science landscapes based on citation networks, and the strategic use of the information therein, shed new light on the evolution of the science system and the possibilities for controlling it. Leydesdorff's approach to citation theory described in his 1998 article (see this issue of LISA) takes into account the dual-layered character of communication networks and the second-order nature of the science system. This perspective may help to sharpen the awareness of scientists and science policy makers of possible feedback loops within actions and activities in the science system, and of the probably nonlinear phenomena resulting from them. Sketches an additional link to geometrically oriented evolutionary theories and uses a specific landscape concept as a framework for some comments.
  13. Lai, K.-K.; Wu, S.-J.: Using the patent co-citation approach to establish a new patent classification system (2005) 0.01
    0.0053012664 = product of:
      0.037108865 = sum of:
        0.03293213 = weight(_text_:system in 1013) [ClassicSimilarity], result of:
          0.03293213 = score(doc=1013,freq=12.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.42618635 = fieldWeight in 1013, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1013)
        0.004176737 = weight(_text_:information in 1013) [ClassicSimilarity], result of:
          0.004176737 = score(doc=1013,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 1013, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1013)
      0.14285715 = coord(2/14)
    
    Abstract
    The paper proposes a new approach to creating a patent classification system to replace the IPC or UPC system for conducting patent analysis and management. The new approach is based on the co-citation analysis of bibliometrics. The traditional approach to the management of patents, which is based on either the IPC or UPC, is too general to meet the needs of specific industries. In addition, some patents are placed in incorrect categories, making it difficult for enterprises to carry out R&D planning, technology positioning, patent strategy-making and technology forecasting. Therefore, it is essential to develop a patent classification system that is adaptive to the characteristics of a specific industry. The analysis of this approach is divided into three phases. Phase I selects appropriate databases in which to conduct patent searches according to the subject and objective of this study, and then selects the basic patents. Phase II uses the co-cited frequency of the basic patent pairs to assess their similarity. Phase III uses factor analysis to establish a classification system and assess the efficiency of the proposed approach. The main contribution of this approach is to develop a patent classification system based on patent similarities, to assist patent managers in understanding the basic patents for a specific industry, the relationships among categories of technologies and the evolution of a technology category.
    Source
    Information processing and management. 41(2005) no.2, S.313-330
  14. Araújo, P.C. de; Gutierres Castanha, R.C.; Hjoerland, B.: Citation indexing and indexes (2021) 0.01
    0.005114303 = product of:
      0.035800118 = sum of:
        0.0100241685 = weight(_text_:information in 444) [ClassicSimilarity], result of:
          0.0100241685 = score(doc=444,freq=8.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23274569 = fieldWeight in 444, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=444)
        0.025775949 = weight(_text_:retrieval in 444) [ClassicSimilarity], result of:
          0.025775949 = score(doc=444,freq=6.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.34732026 = fieldWeight in 444, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=444)
      0.14285715 = coord(2/14)
    
    Abstract
    A citation index is a bibliographic database that provides citation links between documents. The first modern citation index was suggested by the researcher Eugene Garfield in 1955 and created by him in 1964, and it represents an important innovation in knowledge organization and information retrieval. This article describes citation indexes in general, considering the modern citation indexes, including Web of Science, Scopus, Google Scholar, Microsoft Academic, Crossref, Dimensions and some special citation indexes, as well as predecessors of the modern citation index such as Shepard's Citations. We present comparative studies of the major ones and survey theoretical problems related to the role of citation indexes as subject access points (SAP), recognizing the implications for knowledge organization and information retrieval. Finally, studies on citation behavior are presented, and the influence of citation indexes on knowledge organization, information retrieval and the scientific information ecosystem is recognized.
  15. Pao, M.L.: Term and citation retrieval : a field study (1993) 0.00
    0.00496344 = product of:
      0.034744076 = sum of:
        0.006682779 = weight(_text_:information in 3741) [ClassicSimilarity], result of:
          0.006682779 = score(doc=3741,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.1551638 = fieldWeight in 3741, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3741)
        0.028061297 = weight(_text_:retrieval in 3741) [ClassicSimilarity], result of:
          0.028061297 = score(doc=3741,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.37811437 = fieldWeight in 3741, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3741)
      0.14285715 = coord(2/14)
    
    Abstract
    Investigates the relative efficacy of searching by terms and by citations in searches collected in health science libraries. In both the pilot and field studies, the odds that overlap items retrieved would be relevant or partially relevant were greatly improved. In the field setting, citation searching was able to add an average of 24% recall to traditional subject retrieval. Attempts to identify distinguishing characteristics in queries which might benefit most from additional citation searches proved inconclusive. Online access to citation databases has been hampered by their high cost.
    Source
    Information processing and management. 29(1993) no.1, S.95-112
  16. Shaw, W.M.: Subject and citation indexing : pt.2: the optimal, cluster-based retrieval performance of composite representations (1991) 0.00
    0.00496344 = product of:
      0.034744076 = sum of:
        0.006682779 = weight(_text_:information in 4842) [ClassicSimilarity], result of:
          0.006682779 = score(doc=4842,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.1551638 = fieldWeight in 4842, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4842)
        0.028061297 = weight(_text_:retrieval in 4842) [ClassicSimilarity], result of:
          0.028061297 = score(doc=4842,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.37811437 = fieldWeight in 4842, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=4842)
      0.14285715 = coord(2/14)
    
    Abstract
    Continuation of pt.1: experimental retrieval results are presented as a function of the exhaustivity and similarity of the composite representations and reveal consistent patterns from which optimal performance levels can be identified. The optimal performance values provide an assessment of the absolute capacity of each composite representation to associate documents relevant to different queries in single-link hierarchies. The effectiveness of the exhaustive representation composed of references and citations is materially superior to the effectiveness of exhaustive composite representations that include subject descriptions
    Source
    Journal of the American Society for Information Science. 42(1991) no.9, S.676-684
  17. Garfield, E.: From citation indexes to informetrics : is the tail now wagging the dog? (1998) 0.00
    0.004689022 = product of:
      0.032823153 = sum of:
        0.008269517 = weight(_text_:information in 2809) [ClassicSimilarity], result of:
          0.008269517 = score(doc=2809,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.1920054 = fieldWeight in 2809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2809)
        0.024553634 = weight(_text_:retrieval in 2809) [ClassicSimilarity], result of:
          0.024553634 = score(doc=2809,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.33085006 = fieldWeight in 2809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2809)
      0.14285715 = coord(2/14)
    
    Abstract
    Provides a synoptic review and history of citation indexes and their evolution into research evaluation tools, including a discussion of the use of bibliometric data for evaluating US institutions (academic departments) by the National Research Council (NRC). Covers the origin and uses of periodical impact factors, validation studies of citation analysis, information retrieval and dissemination (current awareness), citation consciousness, historiography and science mapping, Citation Classics, and the history of contemporary science. Illustrates the retrieval of information by cited-reference searching, especially as it applies to avoiding duplicated research. Discusses the 15-year cumulative impacts of periodicals and the percentage of uncitedness, the emergence of scientometrics, old-boy networks, and citation frequency distributions. Concludes with observations about the future of citation indexing
  18. Ahlgren, P.; Jarneving, B.; Rousseau, R.: Requirements for a cocitation similarity measure, with special reference to Pearson's correlation coefficient (2003) 0.00
    0.0045631477 = product of:
      0.021294689 = sum of:
        0.0047254385 = weight(_text_:information in 5171) [ClassicSimilarity], result of:
          0.0047254385 = score(doc=5171,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.10971737 = fieldWeight in 5171, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=5171)
        0.009921167 = weight(_text_:retrieval in 5171) [ClassicSimilarity], result of:
          0.009921167 = score(doc=5171,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.13368362 = fieldWeight in 5171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=5171)
        0.0066480828 = product of:
          0.0132961655 = sum of:
            0.0132961655 = weight(_text_:22 in 5171) [ClassicSimilarity], result of:
              0.0132961655 = score(doc=5171,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.15476047 = fieldWeight in 5171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5171)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Abstract
    Ahlgren, Jarneving, and Rousseau review accepted procedures for author co-citation analysis, first pointing out that since in the raw data matrix the row and column values are identical, i.e. the co-citation count of two authors, there is no clear choice for the diagonal values. They suggest the number of times an author has been co-cited with himself, excluding self-citation, rather than the common treatment as zeros or as missing values. When the matrix is converted to a similarity matrix, the normal procedure is to create a matrix of Pearson's r coefficients between the data vectors. Ranking by r, by co-citation frequency, and by intuition can easily yield three different orders. It would seem necessary that adding zeros to the matrix not affect the value or the relative order of the similarity measures, but it is shown that this is not the case with Pearson's r. Using 913 bibliographic descriptions from the Web of Science of articles from JASIS and Scientometrics, authors' names were extracted and edited, and 12 information retrieval authors and 12 bibliometric authors, each group drawn from the top 100 most cited, were selected. Co-citation and r-value matrices (diagonal elements treated as missing) were constructed, and then reconstructed in expanded form. Adding zeros can change both the r value and the ordering of the authors based upon that value. A chi-squared distance measure would not violate these requirements, nor would the cosine coefficient. It is also argued that co-citation data are ordinal data, since there is no assurance of an absolute zero number of co-citations, and thus Pearson's r is not appropriate. The number of ties in co-citation data makes the use of the Spearman rank order coefficient problematic.
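
    The key numerical point, that appending zeros to the co-citation profiles changes Pearson's r but leaves the cosine coefficient untouched, is easy to verify; the two co-citation profiles below are invented solely for illustration:

      import math

      def pearson_r(x, y):
          n = len(x)
          mx, my = sum(x) / n, sum(y) / n
          cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
          sx = math.sqrt(sum((a - mx) ** 2 for a in x))
          sy = math.sqrt(sum((b - my) ** 2 for b in y))
          return cov / (sx * sy)

      def cosine(x, y):
          dot = sum(a * b for a, b in zip(x, y))
          nx = math.sqrt(sum(a * a for a in x))
          ny = math.sqrt(sum(b * b for b in y))
          return dot / (nx * ny)

      a, b = [2, 4, 6], [1, 3, 8]        # two hypothetical co-citation profiles
      az, bz = a + [0, 0], b + [0, 0]    # same profiles padded with zero counts

      print(pearson_r(a, b), pearson_r(az, bz))   # ~0.971 vs ~0.947: r shifts
      print(cosine(a, b), cosine(az, bz))         # identical: cosine is unaffected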
    Date
    9. 7.2006 10:22:35
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.6, S.549-568
  19. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.00
    0.0045511425 = product of:
      0.031857997 = sum of:
        0.008353474 = weight(_text_:information in 3925) [ClassicSimilarity], result of:
          0.008353474 = score(doc=3925,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.19395474 = fieldWeight in 3925, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=3925)
        0.023504522 = product of:
          0.047009043 = sum of:
            0.047009043 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.047009043 = score(doc=3925,freq=4.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Date
    22. 7.2006 15:22:28
    Source
    Information Research. 6(2001), no.2
  20. Brooks, T.A.: How good are the best papers of JASIS? (2000) 0.00
    0.004438592 = product of:
      0.031070143 = sum of:
        0.0100241685 = weight(_text_:information in 4593) [ClassicSimilarity], result of:
          0.0100241685 = score(doc=4593,freq=8.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23274569 = fieldWeight in 4593, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4593)
        0.021045974 = weight(_text_:retrieval in 4593) [ClassicSimilarity], result of:
          0.021045974 = score(doc=4593,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.2835858 = fieldWeight in 4593, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4593)
      0.14285715 = coord(2/14)
    
    Abstract
    A citation analysis examined the 28 best articles published in JASIS from 1969 to 1996. The best articles tend to be single-authored works twice as long as the average article published in JASIS. They are cited and self-cited much more often than the average article. The greatest source of references made to the best articles is JASIS itself. The top five best papers focus largely on information retrieval and online searching.
    Content
    Top papers by number of citations: (1) Saracevic, T. et al.: A study of information seeking and retrieving I-III (1988); (2) Bates, M.: Information search tactics (1979); (3) Cooper, W.S.: On selecting a measure of retrieval effectiveness (1973); (4) Marcus, R.S.: An experimental comparison of the effectiveness of computers and humans as search intermediaries (1983); (4) Fidel, R.: Online searching styles (1984)
    Source
    Journal of the American Society for Information Science. 51(2000) no.5, S.485-486

Years

Languages

  • e 179
  • d 11
  • chi 2

Types

  • a 186
  • el 5
  • m 5
  • s 2