Search (75 results, page 4 of 4)

  • theme_ss:"Indexierungsstudien"
  1. Burgin, R.: ¬The effect of indexing exhaustivity on retrieval performance (1991) 0.00
    0.0015457221 = product of:
      0.009274333 = sum of:
        0.009274333 = weight(_text_:in in 5262) [ClassicSimilarity], result of:
          0.009274333 = score(doc=5262,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 5262, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5262)
      0.16666667 = coord(1/6)
    
    Abstract
    The study was based on the collection examined by W.H. Shaw (Inf. Proc. Man. 26(1990) no.6, pp. 693-703, 705-718), a test collection of 1239 articles indexed with the term cystic fibrosis, and 100 queries with 3 sets of relevance evaluations from subject experts. The effect of variations in indexing exhaustivity on retrieval performance in a vector space retrieval system was investigated by using a term weight threshold to construct different document representations for a test collection. Retrieval results showed that retrieval performance, as measured by the mean optimal measure for all queries at a term weight threshold, was highest at the most exhaustive representation and decreased slightly as terms were eliminated and the indexing representation became less exhaustive. The findings suggest that the vector space model is more robust against variations in indexing exhaustivity than is the single-link clustering model.
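    Example
    The relevance figures attached to each entry are Lucene ClassicSimilarity explanations. As a reading aid, the sketch below recombines the factors reported for entry 1 (tf, idf, queryNorm, fieldNorm and the coord factor) into the final score; the variable names and the idf reconstruction are editorial assumptions for illustration, not part of the search output.
      import math

      # Factors copied from the explanation tree for doc 5262 (entry 1 above).
      freq = 6.0                                # termFreq of "in" in the matched field
      idf = 1 + math.log(44218 / (30841 + 1))   # ~1.3602545 (docFreq=30841, maxDocs=44218)
      query_norm = 0.043654136                  # queryNorm
      field_norm = 0.046875                     # fieldNorm(doc=5262)
      coord = 1.0 / 6.0                         # 1 of 6 query terms matched

      tf = math.sqrt(freq)                      # ClassicSimilarity tf = sqrt(freq) = 2.4494898
      query_weight = idf * query_norm           # 0.059380736
      field_weight = tf * idf * field_norm      # 0.1561842
      score = coord * query_weight * field_weight
      print(score)                              # ~0.0015457221, the value reported for entry 1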
  2. McCarthy, C.: ¬The reliability factor in subject access (1986) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 2271) [ClassicSimilarity], result of:
          0.008924231 = score(doc=2271,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 2271, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=2271)
      0.16666667 = coord(1/6)
    
  3. Wolfram, D.; Zhang, J.: ¬An investigation of the influence of indexing exhaustivity and term distributions on a document space (2002) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 5238) [ClassicSimilarity], result of:
          0.008924231 = score(doc=5238,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 5238, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5238)
      0.16666667 = coord(1/6)
    
    Abstract
    Wolfram and Zhang are interested in the effect of different indexing exhaustivity, by which they mean the number of terms chosen, and of different index term distributions and different term weighting methods on the resulting document cluster organization. The Distance Angle Retrieval Environment, DARE, which provides a two-dimensional display of retrieved documents, was used to represent the document clusters based upon a document's distance from the searcher's main interest, and on the angle formed by the document, a point representing a minor interest, and the point representing the main interest. If the centroid and the origin of the document space are assigned as major and minor points, the average distance between documents and the centroid can be measured, providing an indication of cluster organization in the form of a size-normalized similarity measure. Using 500 records from NTIS and nine models created by intersecting low, observed, and high exhaustivity levels (based upon a negative binomial distribution) with shallow, observed, and steep term distributions (based upon a Zipf distribution), simulation runs were performed using inverse document frequency, inter-document term frequency, and inverse document frequency based upon both inter- and intra-document frequencies. Low exhaustivity and shallow distributions result in a more dense document space and less effective retrieval. High exhaustivity and steeper distributions result in a more diffuse space.
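    Example
    A minimal sketch of the cluster-organization indicator described above: the average distance between document vectors and their centroid, where a smaller average corresponds to a denser document space. The toy vectors and the use of plain Euclidean distance are assumptions for illustration, not details taken from the study.
      import numpy as np

      def avg_distance_to_centroid(doc_vectors: np.ndarray) -> float:
          # Mean Euclidean distance from each document vector to the centroid.
          centroid = doc_vectors.mean(axis=0)
          return float(np.linalg.norm(doc_vectors - centroid, axis=1).mean())

      # Toy example: 4 documents in a 3-term space (illustrative weights only).
      docs = np.array([
          [0.2, 0.0, 0.7],
          [0.1, 0.5, 0.3],
          [0.0, 0.4, 0.6],
          [0.3, 0.2, 0.2],
      ])
      print(avg_distance_to_centroid(docs))   # smaller value = denser document space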
  4. Lee, D.H.; Schleyer, T.: Social tagging is no substitute for controlled indexing : a comparison of Medical Subject Headings and CiteULike tags assigned to 231,388 papers (2012) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 383) [ClassicSimilarity], result of:
          0.008924231 = score(doc=383,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 383, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=383)
      0.16666667 = coord(1/6)
    
    Abstract
    Social tagging and controlled indexing both facilitate access to information resources. Given the increasing popularity of social tagging and the limitations of controlled indexing (primarily cost and scalability), it is reasonable to investigate to what degree social tagging could substitute for controlled indexing. In this study, we compared CiteULike tags to Medical Subject Headings (MeSH) terms for 231,388 citations indexed in MEDLINE. In addition to descriptive analyses of the data sets, we present a paper-by-paper analysis of tags and MeSH terms: the number of common annotations, Jaccard similarity, and coverage ratio. In the analysis, we apply three increasingly progressive levels of text processing, ranging from normalization to stemming, to reduce the impact of lexical differences. Annotations of our corpus consisted of over 76,968 distinct tags and 21,129 distinct MeSH terms. The top 20 tags/MeSH terms showed little direct overlap. On a paper-by-paper basis, the number of common annotations ranged from 0.29 to 0.5 and the Jaccard similarity from 2.12% to 3.3% using increased levels of text processing. At most, 77,834 citations (33.6%) shared at least one annotation. Our results show that CiteULike tags and MeSH terms are quite distinct lexically, reflecting different viewpoints/processes between social tagging and controlled indexing.
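    Example
    For readers who want to reproduce the paper-by-paper comparison, a minimal sketch of the overlap measures named above (number of common annotations and Jaccard similarity) follows; the sample annotations and the simple lower-casing step stand in for the paper's fuller normalization and stemming levels.
      def overlap_measures(tags, mesh_terms):
          # Count common annotations and Jaccard similarity for one paper.
          a = {t.lower() for t in tags}        # light normalization only
          b = {m.lower() for m in mesh_terms}
          common = a & b
          jaccard = len(common) / len(a | b) if (a or b) else 0.0
          return len(common), jaccard

      tags = ["cystic fibrosis", "genetics", "Review"]   # hypothetical CiteULike tags
      mesh = ["Cystic Fibrosis", "Genetics, Medical"]    # hypothetical MeSH terms
      print(overlap_measures(tags, mesh))                # (1, 0.25)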
  5. Deaves, J.C.; Pache, J.E.: Chemical and numerical indexing for the INSPEC database (1989) 0.00
    0.0014724231 = product of:
      0.008834538 = sum of:
        0.008834538 = weight(_text_:in in 2289) [ClassicSimilarity], result of:
          0.008834538 = score(doc=2289,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 2289, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2289)
      0.16666667 = coord(1/6)
    
    Abstract
    The wealth of chemical information on the INSPEC database is easily retrieved using the printed subject indexes to the associated abstract journals. However, this subject indexing is insufficient for machine retrieval, and free-text searching has special difficulties. An easy-to-use retrieval system has been developed which overcomes many problems, especially the retrieval of non-stoichiometric compositions, which are a feature of solid-state chemistry. The scheme is limited to inorganic material, but allows flexibility and identification of dopants, interfaces and surfaces or substrates. At the same time, a system has been introduced for the online retrieval of numerical data included in the database. This has successfully standardized the way in which such data is held for searching, enabling further refinement of searches where numerical information is significant.
  6. Brenner, S.H.; McKinin, E.J.: CINAHL and MEDLINE : a comparison of indexing practices (1989) 0.00
    0.0014724231 = product of:
      0.008834538 = sum of:
        0.008834538 = weight(_text_:in in 2843) [ClassicSimilarity], result of:
          0.008834538 = score(doc=2843,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 2843, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2843)
      0.16666667 = coord(1/6)
    
    Abstract
    A random sample of 50 nursing articles indexed in both MEDLINE and CINAHL during 1986 was used for comparing indexing practices. Indexing was analysed by counting the number of major descriptors, the number of major and minor descriptors, the number of indexing access points, the number of common indexing access points, and the number and type of unique indexing access points. The study results indicate that there are few differences in the number of major descriptors used, that MEDLINE uses almost twice as many descriptors, that MEDLINE has almost twice as many indexing access points, and that MEDLINE and CINAHL provide few common access points.
  7. Soergel, D.: Indexing and retrieval performance : the logical evidence (1994) 0.00
    0.0014724231 = product of:
      0.008834538 = sum of:
        0.008834538 = weight(_text_:in in 579) [ClassicSimilarity], result of:
          0.008834538 = score(doc=579,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 579, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=579)
      0.16666667 = coord(1/6)
    
    Abstract
    This article presents a logical analysis of the characteristics of indexing and their effects on retrieval performance. It establishes the ability to ask the questions one needs to ask as the foundation of performance evaluation, and recall and discrimination as the basic quantitative performance measures for binary noninteractive retrieval systems. It then defines the characteristics of indexing that affect retrieval - namely, indexing devices, viewpoint-based and importance-based indexing exhaustivity, indexing specificity, indexing correctness, and indexing consistency - and examines in detail their effects on retrieval. It concludes that retrieval performance depends chiefly on the match between indexing and the requirements of the individual query and on the adaptation of the query formulation to the characteristics of the retrieval system, and that the ensuing complexity must be considered in the design and testing of retrieval systems.
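    Example
    The two quantitative measures named above can be stated compactly. The sketch below uses the usual reading of recall (fraction of relevant documents retrieved) and discrimination (fraction of nonrelevant documents correctly rejected); the exact definition of discrimination is an assumption based on that standard reading, not a quotation from the article.
      def recall(retrieved: set, relevant: set) -> float:
          # Fraction of relevant documents that were retrieved.
          return len(retrieved & relevant) / len(relevant) if relevant else 0.0

      def discrimination(retrieved: set, relevant: set, collection: set) -> float:
          # Fraction of nonrelevant documents that were correctly rejected
          # (assumed definition; see lead-in).
          nonrelevant = collection - relevant
          return len(nonrelevant - retrieved) / len(nonrelevant) if nonrelevant else 0.0

      collection = set(range(10))        # toy collection of 10 documents
      relevant = {0, 1, 2}
      retrieved = {0, 1, 5}
      print(recall(retrieved, relevant), discrimination(retrieved, relevant, collection))
      # ~0.667 and ~0.857 for this toy example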
  8. Lin, Y,-l.; Trattner, C.; Brusilovsky, P.; He, D.: ¬The impact of image descriptions on user tagging behavior : a study of the nature and functionality of crowdsourced tags (2015) 0.00
    0.0014573209 = product of:
      0.008743925 = sum of:
        0.008743925 = weight(_text_:in in 2159) [ClassicSimilarity], result of:
          0.008743925 = score(doc=2159,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14725187 = fieldWeight in 2159, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2159)
      0.16666667 = coord(1/6)
    
    Abstract
    Crowdsourcing has emerged as a way to harvest social wisdom from thousands of volunteers performing a series of tasks online. However, little research has been devoted to exploring the impact of various factors such as the content of a resource or crowdsourcing interface design on user tagging behavior. Although images' titles and descriptions are frequently available in image digital libraries, it is not clear whether they should be displayed to crowdworkers engaged in tagging. This paper focuses on offering insight to the curators of digital image libraries who face this dilemma by examining (i) how descriptions influence the user in his/her tagging behavior and (ii) how this relates to (a) the nature of the tags, (b) the emergent folksonomy, and (c) the findability of the images in the tagging system. We compared two different methods for collecting image tags from crowdworkers on Amazon's Mechanical Turk - with and without image descriptions. Several properties of the generated tags were examined from different perspectives: diversity, specificity, reusability, quality, similarity, descriptiveness, and so on. In addition, a user study was carried out to examine the impact of image descriptions on supporting users' information seeking with a tag cloud interface. The results showed that the properties of tags are affected by the crowdsourcing approach. Tags from the "with description" condition are more diverse and more specific than tags from the "without description" condition, while the latter has a higher tag reuse rate. The user study also revealed that different tag sets provided different support for search. Tags produced "with description" shortened the path to the target results, whereas tags produced without descriptions increased user success in the search task.
  9. Moreiro-González, J.-A.; Bolaños-Mejías, C.: Folksonomy indexing from the assignment of free tags to setup subject : a search analysis into the domain of legal history (2018) 0.00
    0.0012881019 = product of:
      0.007728611 = sum of:
        0.007728611 = weight(_text_:in in 4640) [ClassicSimilarity], result of:
          0.007728611 = score(doc=4640,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 4640, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4640)
      0.16666667 = coord(1/6)
    
    Abstract
    The behaviour and lexical quality of folksonomies is examined by comparing two online social networks: LibraryThing (for books) and Flickr (for photos). We present a case study that combines quantitative and qualitative elements within a lexical and functional framework. Our query used "Legal History" and the synonyms "Law History" and "History of Law." We then examined the relevance, consistency and precision of the tags attached to the retrieved documents, in addition to their lexical composition. We identified the difficulties caused by free tagging and some of the folksonomy solutions that have been found to solve them. The results are presented in comparative tables, giving special attention to related tags within each retrieved document. Although the number of ambiguous or inconsistent tags is not very large, they nevertheless represent the most obvious obstacle to search and retrieval in folksonomies. Relevance is high when the terms are assigned by especially competent taggers. Even with less expert taggers, ambiguity is often successfully corrected by contextualizing the concepts within related tags. A propinquity to associative and taxonomic lexical-semantic knowledge is reached via contextual relationships.
  10. David, C.; Giroux, L.; Bertrand-Gastaldy, S.; Lanteigne, D.: Indexing as problem solving : a cognitive approach to consistency (1995) 0.00
    0.0012620769 = product of:
      0.0075724614 = sum of:
        0.0075724614 = weight(_text_:in in 3609) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=3609,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 3609, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3609)
      0.16666667 = coord(1/6)
    
    Abstract
    Indexers differ in their judgement as to which terms adequately reflect the content of a document. Studies of interindexer consistency have identified several factors associated with low consistency, but have failed to provide a comprehensive model of this phenomenon. Our research applies theories and methods from cognitive psychology to the study of indexing behavior. From a theoretical standpoint, indexing is considered as a problem-solving situation. To access the cognitive processes of indexers, 3 kinds of verbal reports are used. We present results of an experiment in which 4 experienced indexers indexed the same documents. It is shown that the 3 kinds of verbal reports provide complementary data on strategic behavior, and that it is of prime importance to consider the indexing task as an ill-defined problem, where the solution is partly defined by the indexer him- or herself.
  11. Losee, R.: ¬A performance model of the length and number of subject headings and index phrases (2004) 0.00
    0.0012620769 = product of:
      0.0075724614 = sum of:
        0.0075724614 = weight(_text_:in in 3725) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=3725,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 3725, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3725)
      0.16666667 = coord(1/6)
    
    Abstract
    When assigning subject headings or index terms to a document, how many terms or phrases should be used to represent the document? The contribution of an indexing phrase to locating and ordering documents can be compared to the contribution of a full-text query to finding documents. The length and number of phrases needed to equal the contribution of a full-text query is the subject of this paper. The appropriate number of phrases is determined in part by the length of the phrases. We suggest several rules that may be used to determine how many subject headings should be assigned, given index phrase lengths, and provide a general model for this process. A difference between characteristics of indexing "hard" science and "social" science literature is suggested.
    Footnote
    The statements of this contribution could at some point be related to the RSWK.
  12. Huffman, G.D.; Vital, D.A.; Bivins, R.G.: Generating indices with lexical association methods : term uniqueness (1990) 0.00
    0.0010517307 = product of:
      0.006310384 = sum of:
        0.006310384 = weight(_text_:in in 4152) [ClassicSimilarity], result of:
          0.006310384 = score(doc=4152,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10626988 = fieldWeight in 4152, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4152)
      0.16666667 = coord(1/6)
    
    Abstract
    A software system has been developed which orders citations retrieved from an online database in terms of relevancy. The system resulted from an effort generated by NASA's Technology Utilization Program to create new advanced software tools to largely automate the process of determining the relevancy of database citations retrieved to support large technology transfer studies. The ranking is based on the generation of an enriched vocabulary using lexical association methods, a user assessment of the vocabulary, and a combination of the user assessment and the lexical metric. One of the key elements in relevancy ranking is the enriched vocabulary - the terms must be both unique and descriptive. This paper examines term uniqueness. Six lexical association methods were employed to generate characteristic word indices. A limited subset of the terms - the highest 20, 40, 60 and 75% of the uniqueness words - were compared and uniqueness factors developed. Computational times were also measured. It was found that methods based on occurrences and signal produced virtually the same terms. The limited subsets of terms produced by the exact and centroid discrimination values were also nearly identical. Unique term sets were produced by the occurrence, variance and discrimination value (centroid) methods. An end-user evaluation showed that the generated terms were largely distinct and had values of word precision consistent with the values of search precision.
  13. Hersh, W.R.; Hickam, D.H.: ¬A comparison of two methods for indexing and retrieval from a full-text medical database (1992) 0.00
    0.0010411602 = product of:
      0.006246961 = sum of:
        0.006246961 = weight(_text_:in in 4526) [ClassicSimilarity], result of:
          0.006246961 = score(doc=4526,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10520181 = fieldWeight in 4526, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4526)
      0.16666667 = coord(1/6)
    
    Abstract
    Reports results of a study of 2 information retrieval systems on a 2,000-document full-text medical database. The first system, SAPHIRE, features concept-based automatic indexing and statistical retrieval techniques, while the second system, SWORD, features traditional word-based Boolean techniques. 16 medical students at Oregon Health Sciences Univ. each performed 10 searches, and their results, recorded in terms of recall and precision, showed nearly equal performance for both systems. SAPHIRE was also compared with a version of SWORD modified to use automatic indexing and ranked retrieval. Using batch input of queries, the latter method performed slightly better.
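    Example
    A minimal sketch of how per-search results of the kind described above can be aggregated into a precision comparison between two systems; the document sets below are invented for illustration and are not data from the study.
      from statistics import mean

      def precision(retrieved: set, relevant: set) -> float:
          # Fraction of retrieved documents that are relevant.
          return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

      # Hypothetical (retrieved, relevant) pairs for a handful of searches per system.
      saphire_runs = [({1, 2, 3}, {2, 3, 4}), ({5, 6}, {5})]
      sword_runs   = [({1, 2},    {2, 3, 4}), ({5, 6, 7}, {5, 7})]

      for name, runs in [("SAPHIRE", saphire_runs), ("SWORD", sword_runs)]:
          print(name, round(mean(precision(ret, rel) for ret, rel in runs), 2))
      # Both print 0.58 here, echoing the "nearly equal performance" finding.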
  14. Iivonen, M.: Interindexer consistency and the indexing environment (1990) 0.00
    0.0010411602 = product of:
      0.006246961 = sum of:
        0.006246961 = weight(_text_:in in 3593) [ClassicSimilarity], result of:
          0.006246961 = score(doc=3593,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10520181 = fieldWeight in 3593, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3593)
      0.16666667 = coord(1/6)
    
    Abstract
    Considers the interindexer consistency between indexers working in various organisations and reports on the results of an empirical study. The interindexer consistency was low, but there were clear differences depending on whether the consistency was calculated on the basis of terms, concepts, or aspects. The fact that the consistency figures remained low can be explained. The low indexing consistency caused by indexing errors also seems to be difficult to control. Indexing consistency and its control have a clear impact on how feasible and useful centralised services and union catalogues are, and can be, from the point of view of subject description.
  15. Westerman, S.J.; Cribbin, T.; Collins, J.: Human assessments of document similarity (2010) 0.00
    8.9242304E-4 = product of:
      0.005354538 = sum of:
        0.005354538 = weight(_text_:in in 3915) [ClassicSimilarity], result of:
          0.005354538 = score(doc=3915,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.09017298 = fieldWeight in 3915, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3915)
      0.16666667 = coord(1/6)
    
    Abstract
    Two studies are reported that examined the reliability of human assessments of document similarity and the association between human ratings and the results of n-gram automatic text analysis (ATA). Human interassessor reliability (IAR) was moderate to poor. However, correlations between average human ratings and n-gram solutions were strong. The average correlation between ATA and individual human solutions was greater than IAR. N-gram length influenced the strength of association, but optimum string length depended on the nature of the text (technical vs. nontechnical). We conclude that the methodology applied in previous studies may have led to overoptimistic views on human reliability, but that an optimal n-gram solution can provide a good approximation of the average human assessment of document similarity, a result that has important implications for future development of document visualization systems.
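    Example
    The n-gram automatic text analysis referred to above can be approximated with character n-gram profiles and a set-overlap score; the choice of the Dice coefficient and of trigrams is an assumption for illustration, since the studies varied the string length.
      def char_ngrams(text: str, n: int = 3) -> set:
          # Set of character n-grams of the lower-cased text.
          t = text.lower()
          return {t[i:i + n] for i in range(len(t) - n + 1)}

      def dice_similarity(a: str, b: str, n: int = 3) -> float:
          # Dice coefficient over the character n-gram sets of two documents.
          ga, gb = char_ngrams(a, n), char_ngrams(b, n)
          if not ga and not gb:
              return 0.0
          return 2 * len(ga & gb) / (len(ga) + len(gb))

      print(dice_similarity("document indexing study", "indexing study of documents"))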

Languages

  • e 71
  • d 2
  • chi 1
  • nl 1

Types

  • a 72
  • m 1
  • r 1
  • x 1