Search (138 results, page 1 of 7)

  • × theme_ss:"Inhaltsanalyse"
  • × type_ss:"a"
  1. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.08
    0.079100996 = product of:
      0.118651494 = sum of:
        0.0047769374 = weight(_text_:s in 251) [ClassicSimilarity], result of:
          0.0047769374 = score(doc=251,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.09609913 = fieldWeight in 251, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=251)
        0.113874555 = sum of:
          0.06431919 = weight(_text_:von in 251) [ClassicSimilarity], result of:
            0.06431919 = score(doc=251,freq=10.0), product of:
              0.12197845 = queryWeight, product of:
                2.6679487 = idf(docFreq=8340, maxDocs=44218)
                0.045719936 = queryNorm
              0.52729964 = fieldWeight in 251, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                2.6679487 = idf(docFreq=8340, maxDocs=44218)
                0.0625 = fieldNorm(doc=251)
          0.04955536 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
            0.04955536 = score(doc=251,freq=2.0), product of:
              0.16010343 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045719936 = queryNorm
              0.30952093 = fieldWeight in 251, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=251)
      0.6666667 = coord(2/3)
    
    Abstract
    The third session, moderated by Michael Vielhaber of the Österreichischer Rundfunk, introduced participants to forward-looking tools and concepts for AI-supported indexing of audio and video files. All four technologies presented are already proving themselves in their practical application environments.
    Date
    22. 5.2021 12:43:05
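    The explain tree above follows Lucene's ClassicSimilarity (tf-idf) scoring. As a minimal sketch — assuming the standard ClassicSimilarity formulas, which the tree's own labeled factors (tf, idf, queryNorm, fieldNorm) imply — the per-term weight can be reproduced like this:

    ```python
    import math

    # Sketch of Lucene ClassicSimilarity term scoring, matching the factors
    # printed in the explain tree:
    #   tf(freq)    = sqrt(freq)
    #   idf(df, N)  = 1 + ln(N / (df + 1))
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm
    #   term score  = queryWeight * fieldWeight

    def idf(doc_freq: int, max_docs: int) -> float:
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq: float, doc_freq: int, max_docs: int,
                   query_norm: float, field_norm: float) -> float:
        tf = math.sqrt(freq)
        i = idf(doc_freq, max_docs)
        query_weight = i * query_norm
        field_weight = tf * i * field_norm
        return query_weight * field_weight

    # Reproduce the weight(_text_:s in 251) line from the first entry,
    # up to float rounding:
    s = term_score(freq=2.0, doc_freq=40523, max_docs=44218,
                   query_norm=0.045719936, field_norm=0.0625)
    print(s)  # ≈ 0.0047769374
    ```

    The same function reproduces the `von` and `22` weights in each entry by substituting their docFreq and fieldNorm values.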
  2. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.02
    0.024628848 = product of:
      0.03694327 = sum of:
        0.0059711714 = weight(_text_:s in 5835) [ClassicSimilarity], result of:
          0.0059711714 = score(doc=5835,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.120123915 = fieldWeight in 5835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.078125 = fieldNorm(doc=5835)
        0.0309721 = product of:
          0.0619442 = sum of:
            0.0619442 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.0619442 = score(doc=5835,freq=2.0), product of:
                0.16010343 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045719936 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    5. 8.2006 13:22:44
    Pages
    S.146-159
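    The outer coord(2/3) and inner coord(1/2) lines scale a boolean clause's summed sub-scores by the fraction of query clauses matched. A small sketch, reusing the numbers from entry 2 above:

    ```python
    # How the final score for entry 2 is assembled from its explain tree:
    # coord(matched / total) multiplies the sum of the clause's sub-scores.

    def coord(overlap: int, max_overlap: int) -> float:
        return overlap / max_overlap

    weight_s = 0.0059711714     # weight(_text_:s in 5835)
    weight_22 = 0.0619442       # weight(_text_:22 in 5835)

    inner = coord(1, 2) * weight_22           # inner boolean clause: 0.0309721
    total = coord(2, 3) * (weight_s + inner)  # outer clause: ≈ 0.024628848
    print(total)
    ```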
  3. Mochmann, E.: Inhaltsanalyse in den Sozialwissenschaften (1985) 0.02
    0.024624355 = product of:
      0.036936533 = sum of:
        0.0047769374 = weight(_text_:s in 2924) [ClassicSimilarity], result of:
          0.0047769374 = score(doc=2924,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.09609913 = fieldWeight in 2924, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=2924)
        0.032159597 = product of:
          0.06431919 = sum of:
            0.06431919 = weight(_text_:von in 2924) [ClassicSimilarity], result of:
              0.06431919 = score(doc=2924,freq=10.0), product of:
                0.12197845 = queryWeight, product of:
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.045719936 = queryNorm
                0.52729964 = fieldWeight in 2924, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2924)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Content analysis in the social sciences is an established method for obtaining data from communication content and for analyzing it. Research designs can be anchored in Lasswell's classic question "Who says what to whom, how and with what effect?" Since the mid-1960s, computer-assisted content analysis methods have joined the traditional procedures of coding communication content by human coders. The basic principles of content indexing based on "content analysis dictionaries" (General Inquirer) and on statistical association methods (WORDS) are explained. Possibilities for observing societal developments on the basis of "text indicators" are presented with examples from the analysis of daily newspapers, small-group discussions, and party manifestos.
    Source
    Sprache und Datenverarbeitung. 9(1985), S.5-10
  4. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.02
    0.019703079 = product of:
      0.029554619 = sum of:
        0.0047769374 = weight(_text_:s in 5830) [ClassicSimilarity], result of:
          0.0047769374 = score(doc=5830,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.09609913 = fieldWeight in 5830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.02477768 = product of:
          0.04955536 = sum of:
            0.04955536 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.04955536 = score(doc=5830,freq=2.0), product of:
                0.16010343 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045719936 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    5. 8.2006 13:22:08
    Pages
    S.39-48
  5. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen (1996) 0.02
    0.019159146 = product of:
      0.028738718 = sum of:
        0.007165406 = weight(_text_:s in 7292) [ClassicSimilarity], result of:
          0.007165406 = score(doc=7292,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.14414869 = fieldWeight in 7292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
        0.021573313 = product of:
          0.043146625 = sum of:
            0.043146625 = weight(_text_:von in 7292) [ClassicSimilarity], result of:
              0.043146625 = score(doc=7292,freq=2.0), product of:
                0.12197845 = queryWeight, product of:
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.045719936 = queryNorm
                0.35372335 = fieldWeight in 7292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7292)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Pages
    S.2-13
  6. Miene, A.; Hermes, T.; Ioannidis, G.: Wie kommt das Bild in die Datenbank? : Inhaltsbasierte Analyse von Bildern und Videos (2002) 0.02
    0.018468268 = product of:
      0.0277024 = sum of:
        0.003582703 = weight(_text_:s in 213) [ClassicSimilarity], result of:
          0.003582703 = score(doc=213,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.072074346 = fieldWeight in 213, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=213)
        0.024119698 = product of:
          0.048239395 = sum of:
            0.048239395 = weight(_text_:von in 213) [ClassicSimilarity], result of:
              0.048239395 = score(doc=213,freq=10.0), product of:
                0.12197845 = queryWeight, product of:
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.045719936 = queryNorm
                0.39547473 = fieldWeight in 213, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.046875 = fieldNorm(doc=213)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The available multimedia information is growing steadily, not least through the volume of new information added to the Internet day by day. To master this flood and make the information retrievable again, it must be annotated and stored appropriately in databases. Here lies the problem of manual annotation, which can lead to annotation errors caused on the one hand by fatigue from routine work and on the other by the annotator's subjectivity. Support systems that relieve the documentalist of precisely this routine work can remedy this to a certain degree. The intellectual indexing of, for example, film contributions will still have to be, and should be, done by the documentalist, but the detection and documentation of so-called shot boundaries can well be performed automatically with the support of a computer. In this contribution we show, based on projects we have carried out, how far this support for the documentalist in annotating images and videos can go.
    Source
    Information - Wissenschaft und Praxis. 53(2002) H.1, S.15-21
  7. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.02
    0.017415226 = product of:
      0.026122838 = sum of:
        0.0042222557 = weight(_text_:s in 4888) [ClassicSimilarity], result of:
          0.0042222557 = score(doc=4888,freq=4.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.08494043 = fieldWeight in 4888, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.021900583 = product of:
          0.043801166 = sum of:
            0.043801166 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.043801166 = score(doc=4888,freq=4.0), product of:
                0.16010343 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045719936 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 1.2012 13:02:10
    Footnote
    Reference to: Enser, P.G.B.: Visual image retrieval. In: Annual review of information science and technology. 42(2008), S.3-42.
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  8. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.02
    0.015766647 = product of:
      0.023649968 = sum of:
        0.0050667073 = weight(_text_:s in 1416) [ClassicSimilarity], result of:
          0.0050667073 = score(doc=1416,freq=4.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.101928525 = fieldWeight in 1416, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=1416)
        0.01858326 = product of:
          0.03716652 = sum of:
            0.03716652 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
              0.03716652 = score(doc=1416,freq=2.0), product of:
                0.16010343 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045719936 = queryNorm
                0.23214069 = fieldWeight in 1416, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1416)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Pages
    S.144-151
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  9. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    0.014777309 = product of:
      0.022165963 = sum of:
        0.003582703 = weight(_text_:s in 6525) [ClassicSimilarity], result of:
          0.003582703 = score(doc=6525,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.072074346 = fieldWeight in 6525, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=6525)
        0.01858326 = product of:
          0.03716652 = sum of:
            0.03716652 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.03716652 = score(doc=6525,freq=2.0), product of:
                0.16010343 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045719936 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  10. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.01
    0.014777309 = product of:
      0.022165963 = sum of:
        0.003582703 = weight(_text_:s in 5589) [ClassicSimilarity], result of:
          0.003582703 = score(doc=5589,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.072074346 = fieldWeight in 5589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
        0.01858326 = product of:
          0.03716652 = sum of:
            0.03716652 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.03716652 = score(doc=5589,freq=2.0), product of:
                0.16010343 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045719936 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Library trends. 55(2006) no.1, S.22-45
  11. Knautz, K.; Dröge, E.; Finkelmeyer, S.; Guschauski, D.; Juchem, K.; Krzmyk, C.; Miskovic, D.; Schiefer, J.; Sen, E.; Verbina, J.; Werner, N.; Stock, W.G.: Indexieren von Emotionen bei Videos (2010) 0.01
    0.013547562 = product of:
      0.020321343 = sum of:
        0.0050667073 = weight(_text_:s in 3637) [ClassicSimilarity], result of:
          0.0050667073 = score(doc=3637,freq=4.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.101928525 = fieldWeight in 3637, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=3637)
        0.015254636 = product of:
          0.030509273 = sum of:
            0.030509273 = weight(_text_:von in 3637) [ClassicSimilarity], result of:
              0.030509273 = score(doc=3637,freq=4.0), product of:
                0.12197845 = queryWeight, product of:
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.045719936 = queryNorm
                0.2501202 = fieldWeight in 3637, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3637)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The subject of this empirical research is emotions in videos, both depicted and felt. Are users able to index such emotions consistently enough that their input can be used for emotional video retrieval? We work with a controlled vocabulary of nine emotions (love, joy, fun, surprise, longing, sadness, anger, disgust, and fear), a slider for setting the intensity of each emotion, and the broad-folksonomy approach, i.e. we let different users tag the videos. Test subjects were given a total of 20 videos (edited films from YouTube) whose emotions they were to index. We received input from 776 subjects and, correspondingly, 279,360 slider settings. The consistency of the user votes is very high; the tags lead to stable distributions of emotions for the individual videos. The final shape of the distributions is already reached with relatively few users (under 100). In the sense of power tags, it is possible to separate the emotions central to a given document (insofar as any exist) and prepare them for emotional information retrieval (EmIR).
    Source
    Information - Wissenschaft und Praxis. 61(2010) H.4, S.221-236
  12. Nohr, H.: Inhaltsanalyse (1999) 0.01
    0.012772764 = product of:
      0.019159146 = sum of:
        0.0047769374 = weight(_text_:s in 3430) [ClassicSimilarity], result of:
          0.0047769374 = score(doc=3430,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.09609913 = fieldWeight in 3430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
        0.014382209 = product of:
          0.028764417 = sum of:
            0.028764417 = weight(_text_:von in 3430) [ClassicSimilarity], result of:
              0.028764417 = score(doc=3430,freq=2.0), product of:
                0.12197845 = queryWeight, product of:
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.045719936 = queryNorm
                0.23581557 = fieldWeight in 3430, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3430)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Content analysis is the elementary sub-process of indexing documents. Despite this central position within subject-oriented document indexing, the process of content analysis still receives too little attention in theory and practice. The reason for this neglect lies in the supposedly subjective character of the comprehension process. To overcome this problem, the precise object of content analysis is first determined. From this, methodologically productive approaches and procedures for content analysis can be derived. Finally, some further tasks of content analysis, such as qualitative assessment, are discussed.
    Source
    nfd Information - Wissenschaft und Praxis. 50(1999) H.2, S.69-78
  13. Andersen, J.: ¬The concept of genre : when, how, and why? (2001) 0.01
    0.012772764 = product of:
      0.019159146 = sum of:
        0.0047769374 = weight(_text_:s in 639) [ClassicSimilarity], result of:
          0.0047769374 = score(doc=639,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.09609913 = fieldWeight in 639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=639)
        0.014382209 = product of:
          0.028764417 = sum of:
            0.028764417 = weight(_text_:von in 639) [ClassicSimilarity], result of:
              0.028764417 = score(doc=639,freq=2.0), product of:
                0.12197845 = queryWeight, product of:
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.045719936 = queryNorm
                0.23581557 = fieldWeight in 639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.0625 = fieldNorm(doc=639)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Report on the conference "Genre 2001. Genres and Discourses in Education, Work and Cultural Life: Encounters of Academic Disciplines on Theories and Practices", May 13th to 16th, 2001, Oslo University College, Oslo, Norway
    Source
    Knowledge organization. 28(2001) no.4, S.203-204
  14. Shatford, S.: Analyzing the subject of a picture : a theoretical approach (1986) 0.01
    0.012330394 = product of:
      0.018495591 = sum of:
        0.005911159 = weight(_text_:s in 354) [ClassicSimilarity], result of:
          0.005911159 = score(doc=354,freq=4.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.118916616 = fieldWeight in 354, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=354)
        0.012584433 = product of:
          0.025168866 = sum of:
            0.025168866 = weight(_text_:von in 354) [ClassicSimilarity], result of:
              0.025168866 = score(doc=354,freq=2.0), product of:
                0.12197845 = queryWeight, product of:
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.045719936 = queryNorm
                0.20633863 = fieldWeight in 354, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=354)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    With a table on content analysis (in the translation by Lebrecht)
    Source
    Cataloging and classification quarterly. 6(1986) no.3, S.39-62
  15. Volpers, H.: Inhaltsanalyse (2013) 0.01
    0.012330394 = product of:
      0.018495591 = sum of:
        0.005911159 = weight(_text_:s in 1018) [ClassicSimilarity], result of:
          0.005911159 = score(doc=1018,freq=4.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.118916616 = fieldWeight in 1018, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1018)
        0.012584433 = product of:
          0.025168866 = sum of:
            0.025168866 = weight(_text_:von in 1018) [ClassicSimilarity], result of:
              0.025168866 = score(doc=1018,freq=2.0), product of:
                0.12197845 = queryWeight, product of:
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.045719936 = queryNorm
                0.20633863 = fieldWeight in 1018, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1018)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The term content analysis is defined differently depending on scholarly provenance or context of meaning: within library practice, capturing the content of a given document for indexing purposes is called content analysis; philological text interpretations or linguistic text analyses are occasionally labeled content analyses, as is the interpretation of interview statements in psychology and qualitative social research. The present contribution refers explicitly to the social-science method of systematic content analysis. Even this delimitation, however, does not yet provide sufficient definitional clarity, since a distinction must be made between qualitative and quantitative procedures.
    Pages
    S.412-424
    Source
    Handbuch Methoden der Bibliotheks- und Informationswissenschaft: Bibliotheks-, Benutzerforschung, Informationsanalyse. Hrsg.: K. Umlauf, S. Fühles-Ubach u. M.S. Seadle
  16. Franke-Maier, M.; Harbeck, M.: Superman = Persepolis = Naruto? : Herausforderungen und Probleme der formalen und inhaltlichen Vielfalt von Comics und Comicforschung für die Regensburger Verbundklassifikation (2016) 0.01
    0.009579573 = product of:
      0.014369359 = sum of:
        0.003582703 = weight(_text_:s in 3306) [ClassicSimilarity], result of:
          0.003582703 = score(doc=3306,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.072074346 = fieldWeight in 3306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=3306)
        0.010786656 = product of:
          0.021573313 = sum of:
            0.021573313 = weight(_text_:von in 3306) [ClassicSimilarity], result of:
              0.021573313 = score(doc=3306,freq=2.0), product of:
                0.12197845 = queryWeight, product of:
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.045719936 = queryNorm
                0.17686167 = fieldWeight in 3306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6679487 = idf(docFreq=8340, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3306)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    o-bib: Das offene Bibliotheksjournal. 3(2016) Nr.4, S.243-256
  17. Hicks, C.; Rush, J.; Strong, S.: Content analysis (1977) 0.00
    0.00450374 = product of:
      0.013511219 = sum of:
        0.013511219 = weight(_text_:s in 7514) [ClassicSimilarity], result of:
          0.013511219 = score(doc=7514,freq=4.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.2718094 = fieldWeight in 7514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.125 = fieldNorm(doc=7514)
      0.33333334 = coord(1/3)
    
    Pages
    S. -
  18. Pejtersen, A.M.: Fiction and library classification (1978) 0.00
    0.003184625 = product of:
      0.009553875 = sum of:
        0.009553875 = weight(_text_:s in 722) [ClassicSimilarity], result of:
          0.009553875 = score(doc=722,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.19219826 = fieldWeight in 722, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.125 = fieldNorm(doc=722)
      0.33333334 = coord(1/3)
    
    Source
    Scandinavian public library quarterly. 11(1978), S.5-12
  19. Beghtol, C.: Bibliographic classification theory and text linguistics : aboutness, analysis, intertextuality and the cognitive act of classifying documents (1986) 0.00
    0.003184625 = product of:
      0.009553875 = sum of:
        0.009553875 = weight(_text_:s in 1346) [ClassicSimilarity], result of:
          0.009553875 = score(doc=1346,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.19219826 = fieldWeight in 1346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.125 = fieldNorm(doc=1346)
      0.33333334 = coord(1/3)
    
    Source
    Journal of documentation. 42(1986), S.84-113
  20. Gardin, J.C.: Document analysis and linguistic theory (1973) 0.00
    0.003184625 = product of:
      0.009553875 = sum of:
        0.009553875 = weight(_text_:s in 2387) [ClassicSimilarity], result of:
          0.009553875 = score(doc=2387,freq=2.0), product of:
            0.049708433 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045719936 = queryNorm
            0.19219826 = fieldWeight in 2387, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.125 = fieldNorm(doc=2387)
      0.33333334 = coord(1/3)
    
    Source
    Journal of documentation. 29(1973) no.2, S.137-168

Languages

  • e 124
  • d 12
  • f 1
  • nl 1