Search (6 results, page 1 of 1)

  • Filter: author_ss:"Schneider, J.W."
  1. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.03
    0.025941458 = product of:
      0.051882915 = sum of:
        0.051882915 = sum of:
          0.008202582 = weight(_text_:a in 156) [ClassicSimilarity], result of:
            0.008202582 = score(doc=156,freq=6.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.1544581 = fieldWeight in 156, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
          0.043680333 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
            0.043680333 = score(doc=156,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.2708308 = fieldWeight in 156, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
      0.5 = coord(1/2)
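
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(tf) * idf * fieldNorm, and the summed term weights are scaled by the coordination factor coord(1/2) = 0.5. A minimal Python sketch that recomputes the reported 0.0259 score from the values shown above (the helper name is illustrative, not part of Lucene's API):

      from math import sqrt

      def term_score(tf, idf, query_norm, field_norm):
          """One term's ClassicSimilarity contribution: queryWeight * fieldWeight."""
          query_weight = idf * query_norm               # idf(t) * queryNorm
          field_weight = sqrt(tf) * idf * field_norm    # tf(t,d) * idf(t) * fieldNorm(d)
          return query_weight * field_weight

      QUERY_NORM = 0.046056706
      FIELD_NORM = 0.0546875                            # fieldNorm(doc=156)

      score = 0.5 * (                                   # coord(1/2) from the breakdown
          term_score(6.0, 1.153047, QUERY_NORM, FIELD_NORM)       # _text_:a,  freq=6
          + term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)    # _text_:22, freq=2
      )
      print(round(score, 9))                            # ~0.025941458, as reported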
    
    Abstract
    The present study investigates the ability of a bibliometric-based semi-automatic method to select candidate thesaurus terms from citation contexts. The method consists of document co-citation analysis, citation context analysis, and noun phrase parsing. The investigation is carried out within the specialty area of periodontology. The results clearly demonstrate that the method is able to select important candidate thesaurus terms within the chosen specialty area.
    Date
    8.3.2007 19:55:22
    Type
    a
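
    As an illustration of the noun phrase parsing step described in the abstract above, the sketch below extracts candidate noun phrases from a citation context with spaCy. It is a generic example, not the authors' pipeline; it assumes spaCy and its en_core_web_sm model are installed, and the sample sentence is invented:

      import spacy                                   # assumes spaCy + en_core_web_sm are installed

      nlp = spacy.load("en_core_web_sm")

      # Invented citation context from the periodontology domain.
      context = ("Guided tissue regeneration with resorbable membranes improved "
                 "clinical attachment levels in intrabony defects [12, 15].")

      # Candidate thesaurus terms = lightly normalised noun chunks.
      candidates = sorted({chunk.text.lower() for chunk in nlp(context).noun_chunks})
      print(candidates)
      # roughly: ['clinical attachment levels', 'guided tissue regeneration',
      #           'intrabony defects', 'resorbable membranes']
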
  2. Schneider, J.W.: Emerging frameworks and methods : The Fourth International Conference on Conceptions of Library and Information Science (CoLIS4), The Information School, University of Washington, Seattle, Washington, USA, July 21-25, 2002 (2002) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 1857) [ClassicSimilarity], result of:
              0.009374379 = score(doc=1857,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 1857, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1857)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Report on the conference, with short abstracts of the 18 contributions (among them BELKIN, N.J.: A classification of interactions with information; INGWERSEN, P.: Cognitive perspectives of document representation; HJOERLAND, B.: Principia informatica: foundational theory of the concepts of information and principles of information services; TUOMINEN, K. et al.: Discourse, cognition and reality: towards a social constructionist meta-theory for library and information science).
    Type
    a
  3. Schneider, J.W.; Costas, R.: Identifying potential "breakthrough" publications using refined citation analyses : three related explorative approaches (2017) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 3436) [ClassicSimilarity], result of:
              0.008285859 = score(doc=3436,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 3436, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3436)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The article presents three advanced citation-based methods used to detect potential breakthrough articles among very highly cited articles. We approach the detection of such articles from three different perspectives in order to provide different typologies of breakthrough articles. In all three cases we use the hierarchical classification of scientific publications developed at CWTS based on direct citation relationships. We assume that such contextualized articles focus on similar research interests. We utilize the characteristics scores and scales (CSS) approach to partition citation distributions and implement a specific filtering algorithm to sort out potential highly-cited "followers," articles not considered breakthroughs. After invoking thresholds and filtering, three methods are explored: a very exclusive one where only the highest cited article in a micro-cluster is considered a potential breakthrough article (M1), as well as two conceptually different methods, one that detects potential breakthrough articles among the 2% highest cited articles according to CSS (M2a), and finally a more restrictive version where, in addition to the CSS 2% filter, knowledge diffusion is also considered (M2b). The advanced citation-based methods are explored and evaluated using validated publication sets linked to different Danish funding instruments including centers of excellence.
    Type
    a
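
    The characteristics scores and scales (CSS) approach named in the abstract partitions a citation distribution by iterated means: the mean of all counts gives the first threshold, the mean of the counts above it the second, and so on. A minimal sketch of that partitioning idea only, with invented citation counts (the follower-filtering and the diffusion criterion of M2b are not reproduced here):

      def css_thresholds(citations, k=3):
          """Characteristics scores: iterated means over the upper tail of the distribution."""
          thresholds, pool = [], list(citations)
          for _ in range(k):
              if not pool:
                  break
              mean = sum(pool) / len(pool)
              thresholds.append(mean)
              pool = [c for c in pool if c > mean]   # keep only papers above the current mean
          return thresholds

      counts = [0, 1, 1, 2, 3, 5, 8, 13, 40, 120]    # toy citation counts
      print(css_thresholds(counts))                  # [19.3, 80.0, 120.0]
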
  4. Schneider, J.W.; Borlund, P.: Introduction to bibliometrics for construction and maintenance of thesauri : methodical considerations (2004) 0.00
    0.001757696 = product of:
      0.003515392 = sum of:
        0.003515392 = product of:
          0.007030784 = sum of:
            0.007030784 = weight(_text_:a in 4423) [ClassicSimilarity], result of:
              0.007030784 = score(doc=4423,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.13239266 = fieldWeight in 4423, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4423)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The paper introduces bibliometrics to the research area of knowledge organization - more precisely in relation to construction and maintenance of thesauri. As such, the paper reviews related work that has inspired the assembly of a semi-automatic, bibliometric-based approach for construction and maintenance. Similarly, the paper discusses the methodical considerations behind the approach. Finally, the semi-automatic approach is used to verify the applicability of bibliometric methods as a supplement to construction and maintenance of thesauri. In the context of knowledge organization, the paper outlines two fundamental approaches: the manual intellectual approach and the automatic algorithmic approach. Bibliometric methods belong to the automatic algorithmic approach, though bibliometrics has special characteristics that distinguish it substantially from other methods within this approach.
    Type
    a
  5. Schneider, J.W.; Borlund, P.: Matrix comparison, part 2 : measuring the resemblance between proximity measures or ordination results by use of the mantel and procrustes statistics (2007) 0.00
    0.0016571716 = product of:
      0.0033143433 = sum of:
        0.0033143433 = product of:
          0.0066286866 = sum of:
            0.0066286866 = weight(_text_:a in 582) [ClassicSimilarity], result of:
              0.0066286866 = score(doc=582,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12482099 = fieldWeight in 582, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=582)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The present two-part article introduces matrix comparison as a formal means for evaluation purposes in informetric studies such as cocitation analysis. In the first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such comparisons, matrix generation and the composition of proximity measures, are introduced and discussed. In this second part, the authors introduce and thoroughly demonstrate two related matrix comparison techniques, the Mantel test and Procrustes analysis. These techniques can compare and evaluate the degree of monotonicity between different proximity measures or their ordination results. Common to both techniques is the application of permutation procedures to test hypotheses about matrix resemblance. The choice of technique is related to the validation at hand. In the case of the Mantel test, the degree of resemblance between two measures forecasts their potentially different effects upon ordination and clustering results. In principle, two proximity measures with a very strong resemblance most likely produce identical results, and thus the choice between the two measures becomes less important. Alternatively, or as a supplement, Procrustes analysis compares the actual ordination results without investigating the underlying proximity measures, by matching two configurations of the same objects in a multidimensional space. An advantage of Procrustes analysis, though, is the graphical solution provided by the superimposition plot and the resulting decomposition of variance components. Accordingly, Procrustes analysis provides not only a measure of general fit between configurations but also values for individual objects, enabling more elaborate validation. As such, the Mantel test and Procrustes analysis can be used as statistical validation tools in informetric studies and thus help in choosing suitable proximity measures.
    Type
    a
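
    As a compact illustration of the two techniques discussed above: the Mantel test correlates the off-diagonal entries of two proximity matrices and judges that correlation against random relabelings of the objects, while Procrustes analysis superimposes two configurations of the same objects. The sketch below is a generic implementation on invented data, not the authors' code; the superimposition uses scipy.spatial.procrustes:

      import numpy as np
      from scipy.spatial import procrustes

      def mantel(a, b, permutations=999, seed=0):
          """Permutation-based Mantel test for two symmetric proximity matrices."""
          rng = np.random.default_rng(seed)
          iu = np.triu_indices_from(a, k=1)                  # off-diagonal entries
          r_obs = np.corrcoef(a[iu], b[iu])[0, 1]
          hits = 0
          for _ in range(permutations):
              p = rng.permutation(a.shape[0])                # relabel objects in one matrix
              hits += abs(np.corrcoef(a[p][:, p][iu], b[iu])[0, 1]) >= abs(r_obs)
          return r_obs, (hits + 1) / (permutations + 1)      # correlation and permutation p-value

      rng = np.random.default_rng(42)
      coords = rng.random((10, 3))
      d1 = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)   # one proximity matrix
      noise = rng.normal(scale=0.05, size=d1.shape)
      d2 = d1 + (noise + noise.T) / 2                        # a slightly perturbed second matrix
      np.fill_diagonal(d2, 0.0)

      print(mantel(d1, d2))              # high correlation and a small p-value are expected
      _, _, disparity = procrustes(coords, coords + rng.normal(scale=0.05, size=coords.shape))
      print(disparity)                   # small residual after optimal superimposition
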
  6. Schneider, J.W.; Borlund, P.: Matrix comparison, part 1 : motivation and important issues for measuring the resemblance between proximity measures or ordination results (2007) 0.00
    0.0015127839 = product of:
      0.0030255679 = sum of:
        0.0030255679 = product of:
          0.0060511357 = sum of:
            0.0060511357 = weight(_text_:a in 584) [ClassicSimilarity], result of:
              0.0060511357 = score(doc=584,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.11394546 = fieldWeight in 584, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=584)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The present two-part article introduces matrix comparison as a formal means of evaluation in informetric studies such as cocitation analysis. In this first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such comparisons, are introduced and discussed. The motivation is spurred by the recent debate on choice of proximity measures and their potential influence upon clustering and ordination results. The two important issues discussed here are matrix generation and the composition of proximity measures. The approach to matrix generation is demonstrated for the same data set: how data are represented and transformed in a matrix evidently determines the behavior of proximity measures. Two different matrix generation approaches, in all probability, will lead to different proximity rankings of objects, which in turn lead to different ordination and clustering results for the same set of objects. Further, a resemblance in the composition of formulas indicates whether two proximity measures may produce similar ordination and clustering results. However, as shown in the case of the angular correlation and cosine measures, a small deviation in otherwise similar formulas can lead to different rankings depending on the contour of the data matrix transformed. Ultimately, the behavior of proximity measures, that is, whether they produce similar rankings of objects, is more or less data-specific. Consequently, the authors recommend the use of empirical matrix comparison techniques for individual studies to investigate the degree of resemblance between proximity measures or their ordination results. In part two of the article, the authors introduce and demonstrate two related statistical matrix comparison techniques, the Mantel test and Procrustes analysis. These techniques can compare and evaluate the degree of monotonicity between different proximity measures or their ordination results. As such, the Mantel test and Procrustes analysis can be used as statistical validation tools in informetric studies and thus help in choosing suitable proximity measures.
    Type
    a
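
    The point about formula composition can be made concrete: the cosine measure and the (angular/Pearson) correlation coefficient differ only in that the latter mean-centres each profile, yet on some matrices that small deviation reorders the proximities. A minimal sketch on an invented co-citation matrix (not data from the article):

      import numpy as np

      rng = np.random.default_rng(7)
      C = rng.poisson(3.0, size=(6, 6))
      C = np.triu(C, 1) + np.triu(C, 1).T              # toy symmetric co-citation counts

      def cosine(u, v):
          return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

      def pearson(u, v):                               # cosine of the mean-centred profiles
          return np.corrcoef(u, v)[0, 1]

      pairs = [(i, j) for i in range(6) for j in range(i + 1, 6)]
      rank_cos = sorted(pairs, key=lambda p: -cosine(C[p[0]], C[p[1]]))
      rank_cor = sorted(pairs, key=lambda p: -pearson(C[p[0]], C[p[1]]))
      print(rank_cos == rank_cor)                      # often False: the rankings need not agree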