Search (20 results, page 1 of 1)

  • Filter: theme_ss:"Inhaltsanalyse"
  1. Lebrecht, H.: Methoden und Probleme der Bilderschließung am Beispiel des verteilten digitalen Bildarchivs Prometheus (2003) 0.01
    0.0130742295 = product of:
      0.0915196 = sum of:
        0.0915196 = weight(_text_:bewusstsein in 2508) [ClassicSimilarity], result of:
          0.0915196 = score(doc=2508,freq=2.0), product of:
            0.25272697 = queryWeight, product of:
              6.5552235 = idf(docFreq=170, maxDocs=44218)
              0.038553525 = queryNorm
            0.36212835 = fieldWeight in 2508, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5552235 = idf(docFreq=170, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2508)
      0.14285715 = coord(1/7)
    
    Abstract
    Image indexing is a field that must be distinguished from text indexing because of the specific properties of the image as a medium. Museums, archives, universities, and other institutions have long indexed their image collections. Many collections nevertheless remain untouched, because suitably tailored indexing instruments and methods for images are still lacking. No universally valid standards exist, partly because the collections to be catalogued belong to many different institutes in different academic disciplines and serve different purposes there. This thesis opens with an introduction to communication theory in order to sharpen awareness of the complexity of the visual information conveyed by images. Image collections are then sorted typologically before the theory of descriptive and subject indexing of images is treated in detail. Various indexing instruments and methods are presented, each illustrated with examples, and their applicability to image indexing is assessed. The second part of the thesis is tied to the project "Prometheus - Das verteilte digitale Bildarchiv für Forschung und Lehre". Through Prometheus, heterogeneously indexed, digitized image holdings are made available under a common retrieval interface. After an introduction to the project, its intended goals, and a presentation of the techniques that enable retrieval across autonomously created databases, the indexing methods practised by individual institutes participating in Prometheus are presented by way of example. 
The problems of image indexing hinted at or already identified in the preceding chapters are finally summarized and discussed, and assigned to different levels according to why they arise and what they affect.
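The explain trees in these results follow Lucene's ClassicSimilarity (TF-IDF) decomposition: fieldWeight = tf(freq) · idf · fieldNorm, queryWeight = idf · queryNorm, and the final score is the term weight scaled by the coord factor (1/7 here, since one of seven query clauses matched). A minimal sketch reproducing entry 1's score from the values printed in the block above (variable names are ours; the input numbers and the interpretation of coord are taken from the explain output):

```python
import math

# Values from the explain block for doc 2508, term "bewusstsein"
freq, idf, field_norm = 2.0, 6.5552235, 0.0390625
query_norm, coord = 0.038553525, 1.0 / 7.0

tf = math.sqrt(freq)                      # 1.4142135 = tf(freq=2.0)
field_weight = tf * idf * field_norm      # 0.36212835 = fieldWeight
query_weight = idf * query_norm           # 0.25272697 = queryWeight
term_score = query_weight * field_weight  # 0.0915196  = weight(_text_:bewusstsein)
score = term_score * coord                # 0.0130742295 = final score
```

Small differences past the seventh decimal place are expected, since Lucene computes these factors in single precision.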
  2. Lebrecht, H.: Methoden und Probleme der Bilderschließung (2003) 0.01
    0.0130742295 = product of:
      0.0915196 = sum of:
        0.0915196 = weight(_text_:bewusstsein in 2871) [ClassicSimilarity], result of:
          0.0915196 = score(doc=2871,freq=2.0), product of:
            0.25272697 = queryWeight, product of:
              6.5552235 = idf(docFreq=170, maxDocs=44218)
              0.038553525 = queryNorm
            0.36212835 = fieldWeight in 2871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5552235 = idf(docFreq=170, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2871)
      0.14285715 = coord(1/7)
    
    Abstract
    Image indexing is a field that must be distinguished from text indexing because of the specific properties of the image as a medium. Museums, archives, universities, and other institutions have long indexed their image collections. Many collections nevertheless remain untouched, because suitably tailored indexing instruments and methods for images are still lacking. No universally valid standards exist, partly because the collections to be catalogued belong to many different institutes in different academic disciplines and serve different purposes there. This thesis opens with an introduction to communication theory in order to sharpen awareness of the complexity of the visual information conveyed by images. Image collections are then sorted typologically before the theory of descriptive and subject indexing of images is treated in detail. Various indexing instruments and methods are presented, each illustrated with examples, and their applicability to image indexing is assessed. The second part of the thesis is tied to the project "Prometheus - Das verteilte digitale Bildarchiv für Forschung und Lehre". Through Prometheus, heterogeneously indexed, digitized image holdings are made available under a common retrieval interface. After an introduction to the project, its intended goals, and a presentation of the techniques that enable retrieval across autonomously created databases, the indexing methods practised by individual institutes participating in Prometheus are presented by way of example. 
The problems of image indexing hinted at or already identified in the preceding chapters are finally summarized and discussed, and assigned to different levels according to why they arise and what they affect.
  3. Hicks, C.; Rush, J.; Strong, S.: Content analysis (1977) 0.01
    0.01300317 = product of:
      0.091022186 = sum of:
        0.091022186 = weight(_text_:computer in 7514) [ClassicSimilarity], result of:
          0.091022186 = score(doc=7514,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.6460321 = fieldWeight in 7514, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.125 = fieldNorm(doc=7514)
      0.14285715 = coord(1/7)
    
    Source
    Encyclopedia of computer science and technology, vol.6
  4. Klüver, J.; Kier, R.: Rekonstruktion und Verstehen : ein Computer-Programm zur Interpretation sozialwissenschaftlicher Texte (1994) 0.01
    0.01300317 = product of:
      0.091022186 = sum of:
        0.091022186 = weight(_text_:computer in 6830) [ClassicSimilarity], result of:
          0.091022186 = score(doc=6830,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.6460321 = fieldWeight in 6830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.125 = fieldNorm(doc=6830)
      0.14285715 = coord(1/7)
    
  5. Roberts, C.W.; Popping, R.: Computer-supported content analysis : some recent developments (1993) 0.01
    0.011493287 = product of:
      0.08045301 = sum of:
        0.08045301 = weight(_text_:computer in 4236) [ClassicSimilarity], result of:
          0.08045301 = score(doc=4236,freq=4.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.5710171 = fieldWeight in 4236, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=4236)
      0.14285715 = coord(1/7)
    
    Source
    Social science computer review. 11(1993) no.3, S.283-291
  6. Krause, J.: Principles of content analysis for information retrieval systems : an overview (1996) 0.01
    0.011377773 = product of:
      0.07964441 = sum of:
        0.07964441 = weight(_text_:computer in 5270) [ClassicSimilarity], result of:
          0.07964441 = score(doc=5270,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.56527805 = fieldWeight in 5270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.109375 = fieldNorm(doc=5270)
      0.14285715 = coord(1/7)
    
    Source
    Text analysis and computer. Ed.: C. Züll et al
  7. Pejtersen, A.M.: Design of a computer-aided user-system dialogue based on an analysis of users' search behaviour (1984) 0.01
    0.009752377 = product of:
      0.06826664 = sum of:
        0.06826664 = weight(_text_:computer in 1044) [ClassicSimilarity], result of:
          0.06826664 = score(doc=1044,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.48452407 = fieldWeight in 1044, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.09375 = fieldNorm(doc=1044)
      0.14285715 = coord(1/7)
    
  8. From information to knowledge : conceptual and content analysis by computer (1995) 0.01
    0.009086241 = product of:
      0.063603684 = sum of:
        0.063603684 = weight(_text_:computer in 5392) [ClassicSimilarity], result of:
          0.063603684 = score(doc=5392,freq=10.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.45142862 = fieldWeight in 5392, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5392)
      0.14285715 = coord(1/7)
    
    Content
    SCHMIDT, K.M.: Concepts - content - meaning: an introduction; DUCHASTEL, J. et al.: The SACAO project: using computation toward textual data analysis; PAQUIN, L.-C. and L. DUPUY: An approach to expertise transfer: computer-assisted text analysis; HOGENRAAD, R., Y. BESTGEN and J.-L. NYSTEN: Terrorist rhetoric: texture and architecture; MOHLER, P.P.: On the interaction between reading and computing: an interpretative approach to content analysis; LANCASHIRE, I.: Computer tools for cognitive stylistics; MERGENTHALER, E.: An outline of knowledge based text analysis; NAMENWIRTH, J.Z.: Ideography in computer-aided content analysis; WEBER, R.P. and J.Z. NAMENWIRTH: Content-analytic indicators: a self-critique; McKINNON, A.: Optimizing the aberrant frequency word technique; ROSATI, R.: Factor analysis in classical archaeology: export patterns of Attic pottery trade; PETRILLO, P.S.: Old and new worlds: ancient coinage and modern technology; DARANYI, S., S. MARJAI et al.: Caryatids and the measurement of semiosis in architecture; ZARRI, G.P.: Intelligent information retrieval: an application in the field of historical biographical data; BOUCHARD, G., R. ROY et al.: Computers and genealogy: from family reconstitution to population reconstruction; DEMÉLAS-BOHY, M.-D. and M. RENAUD: Instability, networks and political parties: a political history expert system prototype; DARANYI, S., A. ABRANYI and G. KOVACS: Knowledge extraction from ethnopoetic texts by multivariate statistical methods; FRAUTSCHI, R.L.: Measures of narrative voice in French prose fiction applied to textual samples from the enlightenment to the twentieth century; DANNENBERG, R. et al.: A project in computer music: the musician's workbench
  9. Morehead, D.R.; Pejtersen, A.M.; Rouse, W.B.: ¬The value of information and computer-aided information seeking : problem formulation and application to fiction retrieval (1984) 0.01
    0.008045301 = product of:
      0.056317106 = sum of:
        0.056317106 = weight(_text_:computer in 5828) [ClassicSimilarity], result of:
          0.056317106 = score(doc=5828,freq=4.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.39971197 = fieldWeight in 5828, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5828)
      0.14285715 = coord(1/7)
    
    Abstract
    Issues concerning the formulation and application of a model of how humans value information are examined. Formulation of a value function is based on research from modelling, value assessment, human information seeking behavior, and human decision making. The proposed function is incorporated into a computer-based fiction retrieval system and evaluated using data from nine searches. Evaluation is based on the ability of an individual's value function to discriminate among novels selected, rejected, and not considered. The results are discussed in terms of both formulation and utilization of a value function as well as the implications for extending the proposed formulation to other information seeking environments
  10. Winget, M.: Describing art : an alternative approach to subject access and interpretation (2009) 0.00
    0.004063491 = product of:
      0.028444434 = sum of:
        0.028444434 = weight(_text_:computer in 3618) [ClassicSimilarity], result of:
          0.028444434 = score(doc=3618,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.20188503 = fieldWeight in 3618, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3618)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - The purpose of this paper is to examine the art historical antecedents of providing subject access to images. After reviewing the assumptions and limitations inherent in the most prevalent descriptive method, the paper seeks to introduce a new model that allows for more comprehensive representation of visually-based cultural materials. Design/methodology/approach - The paper presents a literature-based conceptual analysis, taking Panofsky's theory of iconography and iconology as the starting-point. Panofsky's conceptual model, while appropriate for art created in the Western academic tradition, ignores or misrepresents work from other eras or cultures. Continued dependence on Panofskian descriptive methods limits the functionality and usefulness of image representation systems. Findings - The paper recommends the development of a more precise and inclusive descriptive model for art objects, which is based on the premise that art is not another sort of text, and should not be interpreted as such. Practical implications - The paper provides suggestions for the development of representation models that will enhance the description of non-textual artifacts. Originality/value - The paper addresses issues in information science, the history of art, and computer science, and suggests that a new descriptive model would be of great value to both humanist and social science scholars.
  11. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.00
    0.0037310505 = product of:
      0.026117353 = sum of:
        0.026117353 = product of:
          0.052234706 = sum of:
            0.052234706 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.052234706 = score(doc=5835,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    5. 8.2006 13:22:44
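In entries 11 through 20 the matching term (`22`, from the date and page fields) sits one clause deeper in the query tree, so an inner coord(1/2) is applied before the outer coord(1/7). A sketch of that composition using entry 11's numbers (variable names are ours; input values are taken from the explain block, and Lucene's single-precision arithmetic means the last decimal places may differ slightly):

```python
# Values from the explain block for doc 5835, term "22"
freq, idf, field_norm = 2.0, 3.5018296, 0.078125
query_norm = 0.038553525

tf = freq ** 0.5                                           # 1.4142135
term_score = (idf * query_norm) * (tf * idf * field_norm)  # 0.052234706
inner = term_score * 0.5                                   # coord(1/2) for the nested clause
score = inner * (1.0 / 7.0)                                # outer coord(1/7) -> 0.0037310505
```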
  12. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.00
    0.0029848402 = product of:
      0.020893881 = sum of:
        0.020893881 = product of:
          0.041787762 = sum of:
            0.041787762 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.041787762 = score(doc=5830,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    5. 8.2006 13:22:08
  13. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.00
    0.0029848402 = product of:
      0.020893881 = sum of:
        0.020893881 = product of:
          0.041787762 = sum of:
            0.041787762 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.041787762 = score(doc=251,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 5.2021 12:43:05
  14. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.00
    0.0028444433 = product of:
      0.019911103 = sum of:
        0.019911103 = weight(_text_:computer in 133) [ClassicSimilarity], result of:
          0.019911103 = score(doc=133,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.14131951 = fieldWeight in 133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02734375 = fieldNorm(doc=133)
      0.14285715 = coord(1/7)
    
    Abstract
    It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still lack a sound indexing theory. We therefore have difficulties in teaching indexing and in explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistency can be explained as acceptable, yet different, conceptualizations that result from the variety of groups dealing with a problem from their respective viewpoints. Multiply indexed documents are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and accept the openness of the interpretive nature of the indexing process. Although I also present arguments against the employment of an indexing language, it is still indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties that user-oriented selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations. 
There is some hope, because the number of useful interpretations is limited: every summary is tailored to a purpose, audience, and situation. Domain, discourse, and social practice entail additional constraints. A pluralistic method mix that focusses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify the structures shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models -- if we can find such stable knowledge structures. This would allow us to tailor summaries dynamically for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential". It is sufficient that we identify and represent iteratively the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge gained about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with their viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it will be applicable in the social sciences. A frame-based representation, or at least a sophisticated grouping of terms, could help to express relational knowledge structures. Much work remains, since in practice no one has yet shown how such an improved indexing system would work and whether the indexing results would really be "better".
  15. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.00
    0.002638251 = product of:
      0.018467756 = sum of:
        0.018467756 = product of:
          0.036935512 = sum of:
            0.036935512 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.036935512 = score(doc=4888,freq=4.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  16. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.00
    0.0022386303 = product of:
      0.015670411 = sum of:
        0.015670411 = product of:
          0.031340823 = sum of:
            0.031340823 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.031340823 = score(doc=6525,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  17. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.00
    0.0022386303 = product of:
      0.015670411 = sum of:
        0.015670411 = product of:
          0.031340823 = sum of:
            0.031340823 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
              0.031340823 = score(doc=1416,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.23214069 = fieldWeight in 1416, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1416)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  18. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.00
    0.0022386303 = product of:
      0.015670411 = sum of:
        0.015670411 = product of:
          0.031340823 = sum of:
            0.031340823 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.031340823 = score(doc=5589,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Library trends. 55(2006) no.1, S.22-45
  19. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.00
    0.0011193152 = product of:
      0.007835206 = sum of:
        0.007835206 = product of:
          0.015670411 = sum of:
            0.015670411 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
              0.015670411 = score(doc=2293,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.116070345 = fieldWeight in 2293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2293)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    27. 9.2005 14:22:19
  20. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.00
    7.4621005E-4 = product of:
      0.0052234703 = sum of:
        0.0052234703 = product of:
          0.010446941 = sum of:
            0.010446941 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.010446941 = score(doc=1858,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 9.1997 19:16:05