Search (151 results, page 1 of 8)

  • theme_ss:"Inhaltsanalyse"
  1. From information to knowledge : conceptual and content analysis by computer (1995) 0.04
    0.04314049 = product of:
      0.10066114 = sum of:
        0.040828865 = weight(_text_:g in 5392) [ClassicSimilarity], result of:
          0.040828865 = score(doc=5392,freq=4.0), product of:
            0.13914184 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.03704574 = queryNorm
            0.2934334 = fieldWeight in 5392, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5392)
        0.05374823 = weight(_text_:u in 5392) [ClassicSimilarity], result of:
          0.05374823 = score(doc=5392,freq=12.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.44308627 = fieldWeight in 5392, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5392)
        0.0060840435 = weight(_text_:a in 5392) [ClassicSimilarity], result of:
          0.0060840435 = score(doc=5392,freq=10.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.14243183 = fieldWeight in 5392, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5392)
      0.42857143 = coord(3/7)
    
    Content
    SCHMIDT, K.M.: Concepts - content - meaning: an introduction; DUCHASTEL, J. et al.: The SACAO project: using computation toward textual data analysis; PAQUIN, L.-C. u. L. DUPUY: An approach to expertise transfer: computer-assisted text analysis; HOGENRAAD, R., Y. BESTGEN u. J.-L. NYSTEN: Terrorist rhetoric: texture and architecture; MOHLER, P.P.: On the interaction between reading and computing: an interpretative approach to content analysis; LANCASHIRE, I.: Computer tools for cognitive stylistics; MERGENTHALER, E.: An outline of knowledge based text analysis; NAMENWIRTH, J.Z.: Ideography in computer-aided content analysis; WEBER, R.P. u. J.Z. Namenwirth: Content-analytic indicators: a self-critique; McKINNON, A.: Optimizing the aberrant frequency word technique; ROSATI, R.: Factor analysis in classical archaeology: export patterns of Attic pottery trade; PETRILLO, P.S.: Old and new worlds: ancient coinage and modern technology; DARANYI, S., S. MARJAI u.a.: Caryatids and the measurement of semiosis in architecture; ZARRI, G.P.: Intelligent information retrieval: an application in the field of historical biographical data; BOUCHARD, G., R. ROY u.a.: Computers and genealogy: from family reconstitution to population reconstruction; DEMÉLAS-BOHY, M.-D. u. M. RENAUD: Instability, networks and political parties: a political history expert system prototype; DARANYI, S., A. ABRANYI u. G. KOVACS: Knowledge extraction from ethnopoetic texts by multivariate statistical methods; FRAUTSCHI, R.L.: Measures of narrative voice in French prose fiction applied to textual samples from the enlightenment to the twentieth century; DANNENBERG, R. u.a.: A project in computer music: the musician's workbench
    Editor
    Nissan, E. u. K. Schmidt
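    The score breakdown shown for result 1 above is standard Lucene/Solr ClassicSimilarity explain output. As a minimal sketch, assuming nothing beyond the classic TF-IDF model the tree itself names (tf = sqrt(termFreq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, per-term scores summed and scaled by coord), the listed score can be reproduced from the printed values; the function and variable names below are illustrative, the constants are copied from the tree.

      import math

      # Sketch of Lucene ClassicSimilarity, reconstructed from the explain
      # tree of result 1 (doc 5392); all constants are taken from that tree.
      def term_score(freq, idf, query_norm, field_norm):
          tf = math.sqrt(freq)                  # e.g. 2.0 = tf(freq=4.0)
          query_weight = idf * query_norm       # e.g. 0.13914184 for "g"
          field_weight = tf * idf * field_norm  # e.g. 0.2934334 for "g"
          return query_weight * field_weight

      QUERY_NORM = 0.03704574
      FIELD_NORM = 0.0390625                    # fieldNorm(doc=5392)
      terms = {"g": (4.0, 3.7559474),           # term: (termFreq, idf)
               "u": (12.0, 3.2744443),
               "a": (10.0, 1.153047)}

      total = sum(term_score(f, idf, QUERY_NORM, FIELD_NORM)
                  for f, idf in terms.values())
      print(round(total * 3 / 7, 8))            # coord(3/7) -> ~0.04314049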
  2. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.03
    0.03286155 = product of:
      0.07667695 = sum of:
        0.043885246 = weight(_text_:u in 5835) [ClassicSimilarity], result of:
          0.043885246 = score(doc=5835,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.3617784 = fieldWeight in 5835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=5835)
        0.007695774 = weight(_text_:a in 5835) [ClassicSimilarity], result of:
          0.007695774 = score(doc=5835,freq=4.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.18016359 = fieldWeight in 5835, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=5835)
        0.025095932 = product of:
          0.050191864 = sum of:
            0.050191864 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.050191864 = score(doc=5835,freq=2.0), product of:
                0.12972787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03704574 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Date
    5. 8.2006 13:22:44
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo u. L. Kajberg
    Type
    a
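    Result 2 follows the same pattern with one extra level: the date term "22" sits in a nested sub-query of two optional clauses, only one of which matched, so its weight is halved by coord(1/2) before it joins the outer sum. A short continuation of the sketch above, again using only the values printed in the tree:

      # Nested clause in result 2 (doc 5835): the "22" weight is scaled by
      # coord(1/2), then the three clause scores are scaled by coord(3/7).
      w_u, w_a = 0.043885246, 0.007695774       # weights from the tree
      w_22 = 0.050191864 * (1 / 2)              # 0.025095932 after coord(1/2)
      print(round((w_u + w_a + w_22) * 3 / 7, 8))   # ~0.03286155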
  3. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen (1996) 0.03
    0.029862674 = product of:
      0.10451935 = sum of:
        0.09798927 = weight(_text_:g in 7292) [ClassicSimilarity], result of:
          0.09798927 = score(doc=7292,freq=4.0), product of:
            0.13914184 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.03704574 = queryNorm
            0.70424014 = fieldWeight in 7292, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
        0.006530081 = weight(_text_:a in 7292) [ClassicSimilarity], result of:
          0.006530081 = score(doc=7292,freq=2.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.15287387 = fieldWeight in 7292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
      0.2857143 = coord(2/7)
    
    Type
    a
  4. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.03
    0.027382165 = product of:
      0.06389172 = sum of:
        0.035108197 = weight(_text_:u in 5830) [ClassicSimilarity], result of:
          0.035108197 = score(doc=5830,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.28942272 = fieldWeight in 5830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.008706774 = weight(_text_:a in 5830) [ClassicSimilarity], result of:
          0.008706774 = score(doc=5830,freq=8.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.20383182 = fieldWeight in 5830, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.020076746 = product of:
          0.040153492 = sum of:
            0.040153492 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.040153492 = score(doc=5830,freq=2.0), product of:
                0.12972787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03704574 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested.
    Date
    5. 8.2006 13:22:08
    Source
    Classification research for knowledge representation and organization. Proc. 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991. Ed. by N.J. Williamson u. M. Hudon
    Type
    a
  5. Hays, D.G.: Linguistic foundations of the theory of content analysis (1969) 0.03
    0.025272988 = product of:
      0.08845545 = sum of:
        0.080837026 = weight(_text_:g in 118) [ClassicSimilarity], result of:
          0.080837026 = score(doc=118,freq=2.0), product of:
            0.13914184 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.03704574 = queryNorm
            0.5809685 = fieldWeight in 118, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.109375 = fieldNorm(doc=118)
        0.0076184273 = weight(_text_:a in 118) [ClassicSimilarity], result of:
          0.0076184273 = score(doc=118,freq=2.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.17835285 = fieldWeight in 118, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=118)
      0.2857143 = coord(2/7)
    
    Source
    The analysis of communication content. Ed.: G. Gerbner et al
    Type
    a
  6. Hutchins, W.J.: The concept of 'aboutness' in subject indexing (1978) 0.02
    0.02273509 = product of:
      0.053048544 = sum of:
        0.018519659 = product of:
          0.037039317 = sum of:
            0.037039317 = weight(_text_:p in 1961) [ClassicSimilarity], result of:
              0.037039317 = score(doc=1961,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.27807623 = fieldWeight in 1961, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1961)
          0.5 = coord(1/2)
        0.030719671 = weight(_text_:u in 1961) [ClassicSimilarity], result of:
          0.030719671 = score(doc=1961,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.25324488 = fieldWeight in 1961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1961)
        0.0038092136 = weight(_text_:a in 1961) [ClassicSimilarity], result of:
          0.0038092136 = score(doc=1961,freq=2.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.089176424 = fieldWeight in 1961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1961)
      0.42857143 = coord(3/7)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.93-97.
    Type
    a
  7. Belkin, N.J.: The problem of 'matching' in information retrieval (1980) 0.02
    0.016912106 = product of:
      0.05919237 = sum of:
        0.05266229 = weight(_text_:u in 1329) [ClassicSimilarity], result of:
          0.05266229 = score(doc=1329,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.43413407 = fieldWeight in 1329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.09375 = fieldNorm(doc=1329)
        0.006530081 = weight(_text_:a in 1329) [ClassicSimilarity], result of:
          0.006530081 = score(doc=1329,freq=2.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.15287387 = fieldWeight in 1329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=1329)
      0.2857143 = coord(2/7)
    
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo u. L. Kajberg
    Type
    a
  8. Wersig, G.: Inhaltsanalyse : Einführung in ihre Systematik und Literatur (1968) 0.02
    0.016497353 = product of:
      0.115481466 = sum of:
        0.115481466 = weight(_text_:g in 2386) [ClassicSimilarity], result of:
          0.115481466 = score(doc=2386,freq=2.0), product of:
            0.13914184 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.03704574 = queryNorm
            0.829955 = fieldWeight in 2386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.15625 = fieldNorm(doc=2386)
      0.14285715 = coord(1/7)
    
  9. Baxendale, P.: Content analysis, specification and control (1966) 0.01
    0.014582122 = product of:
      0.051037423 = sum of:
        0.04233065 = product of:
          0.0846613 = sum of:
            0.0846613 = weight(_text_:p in 218) [ClassicSimilarity], result of:
              0.0846613 = score(doc=218,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.63560283 = fieldWeight in 218, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.125 = fieldNorm(doc=218)
          0.5 = coord(1/2)
        0.008706774 = weight(_text_:a in 218) [ClassicSimilarity], result of:
          0.008706774 = score(doc=218,freq=2.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.20383182 = fieldWeight in 218, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=218)
      0.2857143 = coord(2/7)
    
    Type
    a
  10. Holley, R.M.; Joudrey, D.N.: Aboutness and conceptual analysis : a review (2021) 0.01
    0.013981764 = product of:
      0.048936173 = sum of:
        0.040418513 = weight(_text_:g in 703) [ClassicSimilarity], result of:
          0.040418513 = score(doc=703,freq=2.0), product of:
            0.13914184 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.03704574 = queryNorm
            0.29048425 = fieldWeight in 703, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.0546875 = fieldNorm(doc=703)
        0.008517661 = weight(_text_:a in 703) [ClassicSimilarity], result of:
          0.008517661 = score(doc=703,freq=10.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.19940455 = fieldWeight in 703, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=703)
      0.2857143 = coord(2/7)
    
    Abstract
    The purpose of this paper is to provide an overview of aboutness and conceptual analysis, essential concepts for LIS practitioners to understand. Aboutness refers to the subject matter and genre/form properties of a resource. It is identified during conceptual analysis, which yields an aboutness statement, a summary of a resource's aboutness. While few scholars have discussed the aboutness determination process in detail, the methods described by Patrick Wilson, D.W. Langridge, Arlene G. Taylor, and Daniel N. Joudrey provide exemplary frameworks for determining aboutness and are presented here. Discussions of how to construct an aboutness statement and the challenges associated with aboutness determination follow.
    Type
    a
  11. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.01
    0.01366223 = product of:
      0.047817804 = sum of:
        0.0065977518 = weight(_text_:a in 133) [ClassicSimilarity], result of:
          0.0065977518 = score(doc=133,freq=24.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.1544581 = fieldWeight in 133, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=133)
        0.041220054 = product of:
          0.08244011 = sum of:
            0.08244011 = weight(_text_:sigel in 133) [ClassicSimilarity], result of:
              0.08244011 = score(doc=133,freq=2.0), product of:
                0.28102946 = queryWeight, product of:
                  7.5860133 = idf(docFreq=60, maxDocs=44218)
                  0.03704574 = queryNorm
                0.2933504 = fieldWeight in 133, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.5860133 = idf(docFreq=60, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=133)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still do not have a sound indexing theory. Therefore we have difficulties in teaching indexing and in explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistencies can be explained as acceptable, yet different, conceptualizations resulting from the variety of groups dealing with a problem from their respective viewpoints. Multiple indexed documents are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing and accept the openness of the interpretive nature of the indexing process. Although I present arguments against the employment of an indexing language as well, it is still indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties the user-oriented and selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations. There is some hope because the number of useful interpretations is limited: Every summary is tailored to a purpose, audience and situation. Domain, discourse and social practice entail additional constraints. A pluralistic method mix that focusses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify their structures shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models -- if we can find such stable knowledge structures. This would allow us to dynamically tailor summaries for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential". It is sufficient that we identify and represent iteratively the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge gained about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve or be part of it to ensure agreement with their viewpoints. Checklist/request-oriented indexing could be very helpful but it remains to be demonstrated how well it will be applicable in the social sciences. A frame-based representation or at least a sophisticated grouping of terms could help to express relational knowledge structures.
    There remains much work to do, since in practice no one has yet shown how such an improved indexing system would work and whether the indexing results would really be "better".
    Type
    a
  12. Riesthuis, G.J.A.; Stuurman, P.: Tendenzen in de onderwerpsontsluiting : T.1: Inhoudsanalyse (1989) 0.01
    0.012759356 = product of:
      0.044657744 = sum of:
        0.037039317 = product of:
          0.074078634 = sum of:
            0.074078634 = weight(_text_:p in 1841) [ClassicSimilarity], result of:
              0.074078634 = score(doc=1841,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.55615246 = fieldWeight in 1841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1841)
          0.5 = coord(1/2)
        0.0076184273 = weight(_text_:a in 1841) [ClassicSimilarity], result of:
          0.0076184273 = score(doc=1841,freq=2.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.17835285 = fieldWeight in 1841, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=1841)
      0.2857143 = coord(2/7)
    
    Type
    a
  13. Moraes, J.B.E. de: Aboutness in fiction : methodological perspectives for knowledge organization (2012) 0.01
    0.011789949 = product of:
      0.041264817 = sum of:
        0.035108197 = weight(_text_:u in 856) [ClassicSimilarity], result of:
          0.035108197 = score(doc=856,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.28942272 = fieldWeight in 856, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=856)
        0.006156619 = weight(_text_:a in 856) [ClassicSimilarity], result of:
          0.006156619 = score(doc=856,freq=4.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.14413087 = fieldWeight in 856, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=856)
      0.2857143 = coord(2/7)
    
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. u. K.S. Raghavan
    Type
    a
  14. Pozzi de Sousa, B.; Ortega, C.D.: Aspects regarding the notion of subject in the context of different theoretical trends : teaching approaches in Brazil (2018) 0.01
    0.011274738 = product of:
      0.039461583 = sum of:
        0.035108197 = weight(_text_:u in 4707) [ClassicSimilarity], result of:
          0.035108197 = score(doc=4707,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.28942272 = fieldWeight in 4707, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=4707)
        0.004353387 = weight(_text_:a in 4707) [ClassicSimilarity], result of:
          0.004353387 = score(doc=4707,freq=2.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.10191591 = fieldWeight in 4707, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4707)
      0.2857143 = coord(2/7)
    
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro u. M.E. Cerveira
    Type
    a
  15. Miene, A.; Hermes, T.; Ioannidis, G.: Wie kommt das Bild in die Datenbank? : Inhaltsbasierte Analyse von Bildern und Videos (2002) 0.01
    0.011217687 = product of:
      0.039261904 = sum of:
        0.03464444 = weight(_text_:g in 213) [ClassicSimilarity], result of:
          0.03464444 = score(doc=213,freq=2.0), product of:
            0.13914184 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.03704574 = queryNorm
            0.24898648 = fieldWeight in 213, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.046875 = fieldNorm(doc=213)
        0.0046174643 = weight(_text_:a in 213) [ClassicSimilarity], result of:
          0.0046174643 = score(doc=213,freq=4.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.10809815 = fieldWeight in 213, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=213)
      0.2857143 = coord(2/7)
    
    Type
    a
  16. Campbell, G.: Queer theory and the creation of contextual subject access tools for gay and lesbian communities (2000) 0.01
    0.01044747 = product of:
      0.036566142 = sum of:
        0.028870367 = weight(_text_:g in 6054) [ClassicSimilarity], result of:
          0.028870367 = score(doc=6054,freq=2.0), product of:
            0.13914184 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.03704574 = queryNorm
            0.20748875 = fieldWeight in 6054, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6054)
        0.007695774 = weight(_text_:a in 6054) [ClassicSimilarity], result of:
          0.007695774 = score(doc=6054,freq=16.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.18016359 = fieldWeight in 6054, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6054)
      0.2857143 = coord(2/7)
    
    Abstract
    Knowledge organization research has come to question the theoretical distinction between "aboutness" (a document's innate content) and "meaning" (the use to which a document is put). This distinction has relevance beyond Information Studies, particularly in relation to homosexual concerns. Literary criticism, in particular, frequently addresses the question: when is a work "about" homosexuality? This paper explores this literary debate and its implications for the design of subject access systems for gay and lesbian communities. By examining the literary criticism of Herman Melville's Billy Budd, particularly in relation to the theories of Eve Kosofsky Sedgwick in The Epistemology of the Closet (1990), this paper exposes three tensions that designers of gay and lesbian classifications and vocabularies can expect to face. First is a tension between essentialist and constructivist views of homosexuality, which will affect the choice of terms, categories, and references. Second is a tension between minoritizing and universalizing perspectives on homosexuality. Third is a redefined distinction between aboutness and meaning, in which aboutness refers not to stable document content, but to the system designer's inescapable social and ideological perspectives. Designers of subject access systems can therefore expect to work in a context of intense scrutiny and persistent controversy
    Type
    a
  17. Volpers, H.: Inhaltsanalyse (2013) 0.01
    0.009865396 = product of:
      0.034528885 = sum of:
        0.030719671 = weight(_text_:u in 1018) [ClassicSimilarity], result of:
          0.030719671 = score(doc=1018,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.25324488 = fieldWeight in 1018, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1018)
        0.0038092136 = weight(_text_:a in 1018) [ClassicSimilarity], result of:
          0.0038092136 = score(doc=1018,freq=2.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.089176424 = fieldWeight in 1018, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1018)
      0.2857143 = coord(2/7)
    
    Source
    Handbuch Methoden der Bibliotheks- und Informationswissenschaft: Bibliotheks-, Benutzerforschung, Informationsanalyse. Hrsg.: K. Umlauf, S. Fühles-Ubach u. M.S. Seadle
    Type
    a
  18. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment strength detection for the social web (2012) 0.01
    0.009595156 = product of:
      0.033583045 = sum of:
        0.028870367 = weight(_text_:g in 4972) [ClassicSimilarity], result of:
          0.028870367 = score(doc=4972,freq=2.0), product of:
            0.13914184 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.03704574 = queryNorm
            0.20748875 = fieldWeight in 4972, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
        0.00471268 = weight(_text_:a in 4972) [ClassicSimilarity], result of:
          0.00471268 = score(doc=4972,freq=6.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.11032722 = fieldWeight in 4972, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
      0.2857143 = coord(2/7)
    
    Abstract
    Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, Runners World, BBC Forums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
    Type
    a
  19. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.01
    0.009254982 = product of:
      0.021594957 = sum of:
        0.011831776 = product of:
          0.023663552 = sum of:
            0.023663552 = weight(_text_:p in 1858) [ClassicSimilarity], result of:
              0.023663552 = score(doc=1858,freq=10.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.1776564 = fieldWeight in 1858, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
        0.0047439937 = weight(_text_:a in 1858) [ClassicSimilarity], result of:
          0.0047439937 = score(doc=1858,freq=38.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.11106029 = fieldWeight in 1858, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.015625 = fieldNorm(doc=1858)
        0.0050191865 = product of:
          0.010038373 = sum of:
            0.010038373 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.010038373 = score(doc=1858,freq=2.0), product of:
                0.12972787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03704574 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Rez. in JASIST 54(2003) no.4, S.356-357 (S.J. Lincicum): "Reliance upon shared cataloging in academic libraries in the United States has been driven largely by the need to reduce the expense of cataloging operations without much regard for the impact that this approach might have on the quality of the records included in local catalogs. In recent years, ever-increasing pressures have prompted libraries to adopt practices such as "rapid" copy cataloging that purposely reduce the scrutiny applied to bibliographic records downloaded from shared databases, possibly increasing the number of errors that slip through unnoticed. Errors in bibliographic records can lead to serious problems for library catalog users. If the data contained in bibliographic records is inaccurate, users will have difficulty discovering and recognizing resources in a library's collection that are relevant to their needs. Thus, it has become increasingly important to understand the extent and nature of errors that occur in the records found in large shared bibliographic databases, such as OCLC WorldCat, to develop cataloging practices optimized for the shared cataloging environment. Although this monograph raises a few legitimate concerns about recent trends in cataloging practice, it fails to provide the "detailed look" at misinformation in library catalogs arising from linguistic errors and mistakes in subject analysis promised by the publisher. A basic premise advanced throughout the text is that a certain amount of linguistic and subject knowledge is required to catalog library materials effectively. The author emphasizes repeatedly that most catalogers today are asked to catalog an increasingly diverse array of materials, and that they are often required to work in languages or subject areas of which they have little or no knowledge. He argues that the records contributed to shared databases are increasingly being created by catalogers with inadequate linguistic or subject expertise. This adversely affects the quality of individual library catalogs because errors often go uncorrected as records are downloaded from shared databases to local catalogs by copy catalogers who possess even less knowledge. Calling misinformation an "evil phenomenon," Bade states that his main goal is to discuss "two fundamental types of misinformation found in bibliographic and authority records in library catalogs: that arising from linguistic errors, and that caused by errors in subject analysis, including missing or wrong subject headings" (p. 2). After a superficial discussion of "other" types of errors that can occur in bibliographic records, such as typographical errors and errors in the application of descriptive cataloging rules, Bade begins his discussion of linguistic errors. He asserts that sharing bibliographic records created by catalogers with inadequate linguistic or subject knowledge has "disastrous effects on the library community" (p. 6). To support this bold assertion, Bade provides as evidence little more than a laundry list of errors that he has personally observed in bibliographic records over the years. When he eventually cites several studies that have addressed the availability and quality of records available for materials in languages other than English, he fails to describe the findings of these studies in any detail, let alone relate the findings to his own observations in a meaningful way.
Bade claims that a lack of linguistic expertise among catalogers is the "primary source for linguistic misinformation in our databases" (p. 10), but he neither cites substantive data from existing studies nor provides any new data regarding the overall level of linguistic knowledge among catalogers to support this claim. The section concludes with a brief list of eight sensible, if unoriginal, suggestions for coping with the challenge of cataloging materials in unfamiliar languages.
    Bade begins his discussion of errors in subject analysis by summarizing the contents of seven records containing what he considers to be egregious errors. The examples were drawn only from items that he has encountered in the course of his work. Five of the seven records were full-level ("I" level) records for Eastern European materials created between 1996 and 2000 in the OCLC WorldCat database. The final two examples were taken from records created by Bade himself over an unspecified period of time. Although he is to be commended for examining the actual items cataloged and for examining mostly items that he claims to have adequate linguistic and subject expertise to evaluate reliably, Bade's methodology has major flaws. First and foremost, the number of examples provided is completely inadequate to draw any conclusions about the extent of the problem. Although an in-depth qualitative analysis of a small number of records might have yielded some valuable insight into factors that contribute to errors in subject analysis, Bade provides no information about the circumstances under which the live OCLC records he critiques were created. Instead, he offers simplistic explanations for the errors based solely on his own assumptions. He supplements his analysis of examples with an extremely brief survey of other studies regarding errors in subject analysis, which consists primarily of criticism of work done by Sheila Intner. In the end, it is impossible to draw any reliable conclusions about the nature or extent of errors in subject analysis found in records in shared bibliographic databases based on Bade's analysis. In the final third of the essay, Bade finally reveals his true concern: the deintellectualization of cataloging. It would strengthen the essay tremendously to present this as the primary premise from the very beginning, as this section offers glimpses of a compelling argument. Bade laments, "Many librarians simply do not see cataloging as an intellectual activity requiring an educated mind" (p. 20). Commenting on recent trends in copy cataloging practice, he declares, "The disaster of our time is that this work is being done more and more by people who can neither evaluate nor correct imported errors and often are forbidden from even thinking about it" (p. 26). Bade argues that the most valuable content found in catalog records is the intellectual content contributed by knowledgeable catalogers, and he asserts that to perform intellectually demanding tasks such as subject analysis reliably and effectively, catalogers must have the linguistic and subject knowledge required to gain at least a rudimentary understanding of the materials that they describe. He contends that requiring catalogers to quickly dispense with materials in unfamiliar languages and subjects clearly undermines their ability to perform the intellectual work of cataloging and leads to an increasing number of errors in the bibliographic records contributed to shared databases.
    Arguing that catalogers need to work both quickly and accurately, Bade maintains that employing specialists is the most efficient and effective way to achieve this outcome. Far less compelling than these arguments are Bade's concluding remarks, in which he offers meager suggestions for correcting the problems as he sees them. Overall, this essay is little more than a curmudgeon's diatribe. Addressed primarily to catalogers and library administrators, the analysis presented is too superficial to assist practicing catalogers or cataloging managers in developing solutions to any systemic problems in current cataloging practice, and it presents too little evidence of pervasive problems to convince budget-conscious library administrators of a need to alter practice or to increase their investment in local cataloging operations. Indeed, the reliance upon anecdotal evidence and the apparent nit-picking that dominate the essay might tend to reinforce a negative image of catalogers in the minds of some. To his credit, Bade does provide an important reminder that it is the intellectual contributions made by thousands of erudite catalogers that have made shared cataloging a successful strategy for improving cataloging efficiency. This is an important point that often seems to be forgotten in academic libraries when focus centers on cutting costs. Had Bade focused more narrowly upon the issue of deintellectualization of cataloging and written a carefully structured essay to advance this argument, this essay might have been much more effective." - KO 29(2002) nos.3/4, S.236-237 (A. Sauperl)
  20. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.01
    0.009011058 = product of:
      0.031538703 = sum of:
        0.022449218 = product of:
          0.044898435 = sum of:
            0.044898435 = weight(_text_:p in 3651) [ClassicSimilarity], result of:
              0.044898435 = score(doc=3651,freq=16.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.33707932 = fieldWeight in 3651, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3651)
          0.5 = coord(1/2)
        0.009089487 = weight(_text_:a in 3651) [ClassicSimilarity], result of:
          0.009089487 = score(doc=3651,freq=62.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.21279141 = fieldWeight in 3651, product of:
              7.8740077 = tf(freq=62.0), with freq of:
                62.0 = termFreq=62.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3651)
      0.2857143 = coord(2/7)
    
    Abstract
    This paper, presented at the Ottawa Conference on the Conceptual Basis of the Classification of Knowledge, in 1971, is one of Fairthorne's more perceptive works and deserves a wide audience, especially as it breaks new ground in classification theory. In discussing the notion of discourse, he makes a "distinction between what discourse mentions and what discourse is about" [emphasis added], considered as a "fundamental factor to the relativistic nature of bibliographic classification" (p. 360). A table of mathematical functions, for example, describes exactly something represented by a collection of digits, but, without a preface, this table does not fit into a broader context. Some indication of the author's intent is needed to fit the table into a broader context. This intent may appear in a title, chapter heading, class number or some other aid. Discourse on and discourse about something "cannot be determined solely from what it mentions" (p. 361). Some kind of background is needed. Fairthorne further develops the theme that knowledge about a subject comes from previous knowledge, thus adding a temporal factor to classification. "Some extra textual criteria are needed" in order to classify (p. 362). For example, "documents that mention the same things, but are on different topics, will have different ancestors, in the sense of preceding documents to which they are linked by various bibliographic characteristics ... [and] ... they will have different descendants" (p. 363). The classifier has to distinguish between documents that "mention exactly the same thing" but are not about the same thing. The classifier does this by classifying "sets of documents that form their histories, their bibliographic world lines" (p. 363). The practice of citation is one method of performing the linking and presents a "fan" of documents connected by a chain of citations to past work. The fan is seen as the effect of generations of documents - each generation connected to the previous one, and all ancestral to the present document. Thus, there are levels in temporal structure - that is, antecedent and successor documents - and these require that documents be identified in relation to other documents. This gives a set of documents an "irrevocable order," a loose order which Fairthorne calls "bibliographic time," and which is "generated by the fact of continual growth" (p. 364). He does not consider "bibliographic time" to be an equivalent to physical time because bibliographic events, as part of communication, require delay. Sets of documents, as indicated above, rather than single works, are used in classification. While an event, a person, a unique feature of the environment, may create a class of one - such as the French Revolution, Napoleon, Niagara Falls - revolutions, emperors, and waterfalls are sets which, as sets, will subsume individuals and make normal classes.
    The fan of past documents may be seen across time as a philosophical "wake," translated documents as a sideways relationship and future documents as another fan spreading forward from a given document (p. 365). The "overlap of reading histories can be used to detect common interests among readers" (p. 365), and readers may be classified accordingly. Finally, Fairthorne rejects the notion of a "general" classification, which he regards as a mirage, to be replaced by a citation-type network to identify classes. An interesting feature of his work lies in his linkage between old and new documents via a bibliographic method - citations, authors' names, imprints, style, and vocabulary - rather than topical (subject) terms. This is an indirect method of creating classes. The subject (aboutness) is conceived as a finite, common sharing of knowledge over time (past, present, and future) as opposed to the more common hierarchy of topics in an infinite schema assumed to be universally useful. Fairthorne, a mathematician by training, is a prolific writer on the foundations of classification and information. His professional career includes work with the Royal Engineers Chemical Warfare Section and the Royal Aircraft Establishment (RAE). He was the founder of the Computing Unit which became the RAE Mathematics Department.
    Footnote
    Original in: Ottawa Conference on the Conceptual Basis of the Classification of Knowledge, Ottawa, 1971. Ed.: Jerzy A. Wojciechowski. Pullach: Verlag Dokumentation 1974. S.404-412.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a

Languages

  • e 132
  • d 17
  • f 1
  • nl 1

Types

  • a 139
  • m 8
  • el 3
  • x 2
  • d 1
  • s 1