Search (40 results, page 1 of 2)

  • × theme_ss:"Inhaltsanalyse"
  • × type_ss:"a"
  1. Franke-Maier, M.; Harbeck, M.: Superman = Persepolis = Naruto? : Herausforderungen und Probleme der formalen und inhaltlichen Vielfalt von Comics und Comicforschung für die Regensburger Verbundklassifikation (2016) 0.01
    0.01333706 = product of:
      0.08002236 = sum of:
        0.022619944 = weight(_text_:und in 3306) [ClassicSimilarity], result of:
          0.022619944 = score(doc=3306,freq=20.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.46462005 = fieldWeight in 3306, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3306)
        0.017797519 = weight(_text_:der in 3306) [ClassicSimilarity], result of:
          0.017797519 = score(doc=3306,freq=12.0), product of:
            0.049067024 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021966046 = queryNorm
            0.36271852 = fieldWeight in 3306, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=3306)
        0.022619944 = weight(_text_:und in 3306) [ClassicSimilarity], result of:
          0.022619944 = score(doc=3306,freq=20.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.46462005 = fieldWeight in 3306, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3306)
        0.016984947 = product of:
          0.033969894 = sum of:
            0.033969894 = weight(_text_:deutschland in 3306) [ClassicSimilarity], result of:
              0.033969894 = score(doc=3306,freq=2.0), product of:
                0.10609499 = queryWeight, product of:
                  4.829954 = idf(docFreq=959, maxDocs=44218)
                  0.021966046 = queryNorm
                0.32018375 = fieldWeight in 3306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.829954 = idf(docFreq=959, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3306)
          0.5 = coord(1/2)
      0.16666667 = coord(4/24)
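The explain tree above can be reproduced numerically. The following is a minimal sketch, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, term score = queryWeight * fieldWeight); the queryNorm and clause scores are taken directly from the output above.

```python
import math

QUERY_NORM = 0.021966046  # queryNorm, copied from the explain output above

def tf(freq):
    # ClassicSimilarity term frequency: square root of the raw frequency
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, field_norm):
    # score = queryWeight * fieldWeight
    query_weight = idf(doc_freq, max_docs) * QUERY_NORM
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

# First clause of result 1: weight(_text_:und in 3306), freq=20
score_und = term_score(freq=20.0, doc_freq=13101, max_docs=44218,
                       field_norm=0.046875)
print(score_und)  # ~0.02262, matching the explain tree's 0.022619944

# Final document score: coord(4/24) scales the sum of the four clause scores
total = (1.0 / 6.0) * (0.022619944 + 0.017797519
                       + 0.022619944 + 0.016984947)
print(total)  # ~0.01333706
```

The small residual differences against the printed values come only from the rounding of the numbers shown in the explain output.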
    
    Abstract
    The popular medium of comics has in recent years attracted growing interest as an object of academic research, and thus also as library collection material. Libraries face the task of subject-indexing the primary sources and, above all, the scholarly research literature on comics, and of continuing to present them systematically. Until recently, the classification most widely used in Germany, the Regensburger Verbundklassifikation, which sees itself as close to scholarship, offered only isolated notations in individual subject schedules and a larger section in Japanese studies for the phenomenon of manga. This state of affairs was inadequate for a differentiated shelf arrangement in libraries with corresponding holdings and collection emphases. The RVK tree for comics and comics research presented here addresses this desideratum and offers a way to represent comics holdings adequately in classificatory terms.
  2. Nohr, H.: Inhaltsanalyse (1999) 0.01
    0.012554905 = product of:
      0.07532943 = sum of:
        0.016519273 = weight(_text_:und in 3430) [ClassicSimilarity], result of:
          0.016519273 = score(doc=3430,freq=6.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.33931053 = fieldWeight in 3430, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
        0.014889815 = weight(_text_:des in 3430) [ClassicSimilarity], result of:
          0.014889815 = score(doc=3430,freq=2.0), product of:
            0.06083074 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.021966046 = queryNorm
            0.24477452 = fieldWeight in 3430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
        0.027401073 = weight(_text_:der in 3430) [ClassicSimilarity], result of:
          0.027401073 = score(doc=3430,freq=16.0), product of:
            0.049067024 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021966046 = queryNorm
            0.5584417 = fieldWeight in 3430, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
        0.016519273 = weight(_text_:und in 3430) [ClassicSimilarity], result of:
          0.016519273 = score(doc=3430,freq=6.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.33931053 = fieldWeight in 3430, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
      0.16666667 = coord(4/24)
    
    Abstract
    Content analysis is the elementary sub-process of document indexing. Despite this central position within subject-oriented document description, the process of content analysis still receives too little attention in theory and practice. The reason for this neglect lies in the supposedly subjective character of the comprehension process. To overcome this problem, the precise object of content analysis is first determined. From this, methodologically productive approaches and procedures for content analysis can be derived. Finally, some further tasks of content analysis, such as qualitative assessment, are discussed.
    Source
    nfd Information - Wissenschaft und Praxis. 50(1999) H.2, S.69-78
  3. Miene, A.; Hermes, T.; Ioannidis, G.: Wie kommt das Bild in die Datenbank? : Inhaltsbasierte Analyse von Bildern und Videos (2002) 0.01
    0.012083943 = product of:
      0.072503656 = sum of:
        0.020231893 = weight(_text_:und in 213) [ClassicSimilarity], result of:
          0.020231893 = score(doc=213,freq=16.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.41556883 = fieldWeight in 213, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=213)
        0.015793033 = weight(_text_:des in 213) [ClassicSimilarity], result of:
          0.015793033 = score(doc=213,freq=4.0), product of:
            0.06083074 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.021966046 = queryNorm
            0.25962257 = fieldWeight in 213, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.046875 = fieldNorm(doc=213)
        0.016246837 = weight(_text_:der in 213) [ClassicSimilarity], result of:
          0.016246837 = score(doc=213,freq=10.0), product of:
            0.049067024 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021966046 = queryNorm
            0.3311152 = fieldWeight in 213, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=213)
        0.020231893 = weight(_text_:und in 213) [ClassicSimilarity], result of:
          0.020231893 = score(doc=213,freq=16.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.41556883 = fieldWeight in 213, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=213)
      0.16666667 = coord(4/24)
    
    Abstract
    The amount of available multimedia information is growing steadily, not least through the number of new items appearing on the Internet day by day. To master this flood and make the information retrievable again, it must be annotated and stored appropriately in databases. Here lies the problem of manual annotation, which can lead to annotation errors caused on the one hand by fatigue from routine work and on the other by the annotator's subjectivity. Supporting systems that relieve the documentalist of precisely this routine work can provide a degree of remedy. The scholarly indexing of, for example, film contributions will still have to be done (and should be done) by the documentalist, but the detection and documentation of so-called shot boundaries can well be performed automatically with computer support. In this article we show, on the basis of projects we have carried out, how far this support for the documentalist in annotating images and videos can go.
    Source
    Information - Wissenschaft und Praxis. 53(2002) H.1, S.15-21
  4. Volpers, H.: Inhaltsanalyse (2013) 0.01
    0.01072746 = product of:
      0.06436475 = sum of:
        0.014454362 = weight(_text_:und in 1018) [ClassicSimilarity], result of:
          0.014454362 = score(doc=1018,freq=6.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.2968967 = fieldWeight in 1018, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1018)
        0.013028587 = weight(_text_:des in 1018) [ClassicSimilarity], result of:
          0.013028587 = score(doc=1018,freq=2.0), product of:
            0.06083074 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.021966046 = queryNorm
            0.2141777 = fieldWeight in 1018, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1018)
        0.022427438 = weight(_text_:der in 1018) [ClassicSimilarity], result of:
          0.022427438 = score(doc=1018,freq=14.0), product of:
            0.049067024 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021966046 = queryNorm
            0.4570776 = fieldWeight in 1018, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1018)
        0.014454362 = weight(_text_:und in 1018) [ClassicSimilarity], result of:
          0.014454362 = score(doc=1018,freq=6.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.2968967 = fieldWeight in 1018, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1018)
      0.16666667 = coord(4/24)
    
    Abstract
    The term content analysis is defined differently depending on scholarly provenance or context of meaning: within library practice, capturing the content of a given document for indexing purposes is called content analysis; philological text interpretations or linguistic text analyses are occasionally labeled content analyses, as is the interpretation of interview statements in psychology and qualitative social research. The present contribution refers explicitly to the social-science method of systematic content analysis. Even with this delimitation, however, sufficient definitional clarity has not yet been achieved, since a distinction must still be made between qualitative and quantitative procedures.
    Source
    Handbuch Methoden der Bibliotheks- und Informationswissenschaft: Bibliotheks-, Benutzerforschung, Informationsanalyse. Hrsg.: K. Umlauf, S. Fühles-Ubach u. M.S. Seadle
  5. Knautz, K.; Dröge, E.; Finkelmeyer, S.; Guschauski, D.; Juchem, K.; Krzmyk, C.; Miskovic, D.; Schiefer, J.; Sen, E.; Verbina, J.; Werner, N.; Stock, W.G.: Indexieren von Emotionen bei Videos (2010) 0.01
    0.010617934 = product of:
      0.063707605 = sum of:
        0.015994716 = weight(_text_:und in 3637) [ClassicSimilarity], result of:
          0.015994716 = score(doc=3637,freq=10.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.328536 = fieldWeight in 3637, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3637)
        0.011167361 = weight(_text_:des in 3637) [ClassicSimilarity], result of:
          0.011167361 = score(doc=3637,freq=2.0), product of:
            0.06083074 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.021966046 = queryNorm
            0.18358089 = fieldWeight in 3637, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.046875 = fieldNorm(doc=3637)
        0.020550804 = weight(_text_:der in 3637) [ClassicSimilarity], result of:
          0.020550804 = score(doc=3637,freq=16.0), product of:
            0.049067024 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021966046 = queryNorm
            0.4188313 = fieldWeight in 3637, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=3637)
        0.015994716 = weight(_text_:und in 3637) [ClassicSimilarity], result of:
          0.015994716 = score(doc=3637,freq=10.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.328536 = fieldWeight in 3637, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3637)
      0.16666667 = coord(4/24)
    
    Abstract
    The subject of this empirical study is feelings as depicted in and evoked by videos. Are users able to index such feelings consistently enough that their input can be used for emotional video retrieval? We work with a controlled vocabulary of nine emotions (love, joy, fun, surprise, longing, sadness, anger, disgust and fear), a slider for setting the intensity of each feeling, and the broad-folksonomy approach, i.e. different users tag the same videos. Test subjects were presented with a total of 20 videos (edited films from YouTube) whose emotions they were asked to index. We obtained input from 776 participants, corresponding to 279,360 slider settings. The consistency of the user votes is very high; the tags lead to stable distributions of the emotions for the individual videos. The final shape of the distributions is reached with relatively few users (fewer than 100). In the sense of power tags, it is possible to separate the emotions central to a document (insofar as any exist) and to prepare them for emotional information retrieval (EmIR).
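    The tagging scheme described in the abstract can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation; the vote records, threshold, and function names are invented, and only the overall idea (per-emotion slider intensities aggregated per video, with the central "power tag" emotions separated out) follows the abstract.

```python
from collections import defaultdict

# Hypothetical vote records: (user, video, emotion, slider intensity 0..10).
# Emotion names follow the controlled vocabulary from the study.
votes = [
    ("u1", "v1", "joy", 8), ("u2", "v1", "joy", 7),
    ("u1", "v1", "sadness", 1), ("u2", "v1", "sadness", 0),
    ("u3", "v1", "joy", 9), ("u3", "v1", "fear", 2),
]

def emotion_distribution(votes, video):
    """Mean slider intensity per emotion for one video (broad folksonomy:
    every user may tag every video)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for _, vid, emotion, intensity in votes:
        if vid == video:
            sums[emotion] += intensity
            counts[emotion] += 1
    return {e: sums[e] / counts[e] for e in sums}

def power_tags(dist, threshold=5.0):
    """Emotions whose mean intensity clears a (here arbitrary) threshold:
    the 'central' feelings to be used for emotional retrieval (EmIR)."""
    return sorted(e for e, mean in dist.items() if mean >= threshold)

dist = emotion_distribution(votes, "v1")
print(power_tags(dist))  # ['joy'] for these sample votes
```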
    Source
    Information - Wissenschaft und Praxis. 61(2010) H.4, S.221-236
  6. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.01
    0.00995696 = product of:
      0.059741754 = sum of:
        0.019074813 = weight(_text_:und in 251) [ClassicSimilarity], result of:
          0.019074813 = score(doc=251,freq=8.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.39180204 = fieldWeight in 251, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=251)
        0.009687742 = weight(_text_:der in 251) [ClassicSimilarity], result of:
          0.009687742 = score(doc=251,freq=2.0), product of:
            0.049067024 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021966046 = queryNorm
            0.19743896 = fieldWeight in 251, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=251)
        0.019074813 = weight(_text_:und in 251) [ClassicSimilarity], result of:
          0.019074813 = score(doc=251,freq=8.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.39180204 = fieldWeight in 251, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=251)
        0.011904384 = product of:
          0.023808768 = sum of:
            0.023808768 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.023808768 = score(doc=251,freq=2.0), product of:
                0.07692135 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021966046 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.5 = coord(1/2)
      0.16666667 = coord(4/24)
    
    Abstract
    The third session, moderated by Michael Vielhaber of the Österreichischer Rundfunk, introduced participants to forward-looking tools and concepts for AI-supported indexing of audio and video files. All four technologies presented are already proving themselves in their practical application environments.
    Date
    22. 5.2021 12:43:05
  7. Mochmann, E.: Inhaltsanalyse in den Sozialwissenschaften (1985) 0.01
    0.007734956 = product of:
      0.06187965 = sum of:
        0.019074813 = weight(_text_:und in 2924) [ClassicSimilarity], result of:
          0.019074813 = score(doc=2924,freq=8.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.39180204 = fieldWeight in 2924, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=2924)
        0.023730025 = weight(_text_:der in 2924) [ClassicSimilarity], result of:
          0.023730025 = score(doc=2924,freq=12.0), product of:
            0.049067024 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021966046 = queryNorm
            0.4836247 = fieldWeight in 2924, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=2924)
        0.019074813 = weight(_text_:und in 2924) [ClassicSimilarity], result of:
          0.019074813 = score(doc=2924,freq=8.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.39180204 = fieldWeight in 2924, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=2924)
      0.125 = coord(3/24)
    
    Abstract
    Social-science content analysis is an established procedure for obtaining data from communication content and analyzing it. Research designs can be anchored in Lasswell's classic question "Who says what to whom, how and with what effect?" Since the mid-1960s, computer-assisted content-analysis procedures have joined the traditional coding of communication content by human coders. The basic principles of content indexing based on "content-analysis dictionaries" (General Inquirer) and on statistical association procedures (WORDS) are explained. Possibilities for observing societal developments on the basis of "text indicators" are presented with examples from the analysis of daily newspapers, small-group discussions and party programs.
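    The dictionary-based coding the abstract refers to reduces to counting category hits per text. A minimal sketch in the spirit of the General Inquirer approach follows; the categories and word lists are invented for illustration (the real General Inquirer uses large curated dictionaries), and the function name is an assumption.

```python
# Toy content-analysis dictionary: category -> set of indicator words.
DICTIONARY = {
    "economy": {"tax", "inflation", "jobs", "wages"},
    "security": {"police", "crime", "defense"},
}

def code_text(text):
    """Count dictionary hits per category - the basic move of
    dictionary-based content analysis."""
    tokens = text.lower().split()
    return {cat: sum(t in words for t in tokens)
            for cat, words in DICTIONARY.items()}

counts = code_text("Inflation and jobs dominate the debate, not crime")
print(counts)  # {'economy': 2, 'security': 1}
```

Tracking such counts over time across newspapers or party programs yields the "text indicators" the abstract mentions.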
    Source
    Sprache und Datenverarbeitung. 9(1985), S.5-10
  8. Klüver, J.; Kier, R.: Rekonstruktion und Verstehen : ein Computer-Programm zur Interpretation sozialwissenschaftlicher Texte (1994) 0.00
    0.0044959765 = product of:
      0.053951714 = sum of:
        0.026975857 = weight(_text_:und in 6830) [ClassicSimilarity], result of:
          0.026975857 = score(doc=6830,freq=4.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.55409175 = fieldWeight in 6830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=6830)
        0.026975857 = weight(_text_:und in 6830) [ClassicSimilarity], result of:
          0.026975857 = score(doc=6830,freq=4.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.55409175 = fieldWeight in 6830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=6830)
      0.083333336 = coord(2/24)
    
    Source
    Sprache und Datenverarbeitung. 18(1994) H.1, S.3-15
  9. Schulzki-Haddouti, C.; Brückner, A.: ¬Die Suche nach dem Sinn : Automatische Inhaltsanalyse nicht nur für Geheimdienste (2001) 0.00
    0.0044941492 = product of:
      0.035953194 = sum of:
        0.011921758 = weight(_text_:und in 3133) [ClassicSimilarity], result of:
          0.011921758 = score(doc=3133,freq=2.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.24487628 = fieldWeight in 3133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=3133)
        0.012109677 = weight(_text_:der in 3133) [ClassicSimilarity], result of:
          0.012109677 = score(doc=3133,freq=2.0), product of:
            0.049067024 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021966046 = queryNorm
            0.2467987 = fieldWeight in 3133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.078125 = fieldNorm(doc=3133)
        0.011921758 = weight(_text_:und in 3133) [ClassicSimilarity], result of:
          0.011921758 = score(doc=3133,freq=2.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.24487628 = fieldWeight in 3133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=3133)
      0.125 = coord(3/24)
    
    Abstract
    Intelligence services face a flood of information that they cannot cope with using conventional means. New capabilities, developed in software projects with BND participation, are intended to remedy this deficit and make arbitrarily linked information accessible to analysis.
  10. Riesthuis, G.J.A.; Stuurman, P.: Tendenzen in de onderwerpsontsluiting : T.1: Inhoudsanalyse (1989) 0.00
    0.0033915555 = product of:
      0.040698666 = sum of:
        0.014641492 = product of:
          0.043924473 = sum of:
            0.043924473 = weight(_text_:p in 1841) [ClassicSimilarity], result of:
              0.043924473 = score(doc=1841,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.55615246 = fieldWeight in 1841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1841)
          0.33333334 = coord(1/3)
        0.026057174 = weight(_text_:des in 1841) [ClassicSimilarity], result of:
          0.026057174 = score(doc=1841,freq=2.0), product of:
            0.06083074 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.021966046 = queryNorm
            0.4283554 = fieldWeight in 1841, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.109375 = fieldNorm(doc=1841)
      0.083333336 = coord(2/24)
    
    Footnote
    Translated title: Trends in subject indexing: contents analysis
  11. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.00
    0.0031841837 = product of:
      0.02547347 = sum of:
        0.0071530542 = weight(_text_:und in 2203) [ClassicSimilarity], result of:
          0.0071530542 = score(doc=2203,freq=2.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.14692576 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.011167361 = weight(_text_:des in 2203) [ClassicSimilarity], result of:
          0.011167361 = score(doc=2203,freq=2.0), product of:
            0.06083074 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.021966046 = queryNorm
            0.18358089 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.0071530542 = weight(_text_:und in 2203) [ClassicSimilarity], result of:
          0.0071530542 = score(doc=2203,freq=2.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.14692576 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
      0.125 = coord(3/24)
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  12. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen (1996) 0.00
    0.0023843516 = product of:
      0.028612217 = sum of:
        0.0143061085 = weight(_text_:und in 7292) [ClassicSimilarity], result of:
          0.0143061085 = score(doc=7292,freq=2.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.29385152 = fieldWeight in 7292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
        0.0143061085 = weight(_text_:und in 7292) [ClassicSimilarity], result of:
          0.0143061085 = score(doc=7292,freq=2.0), product of:
            0.04868482 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021966046 = queryNorm
            0.29385152 = fieldWeight in 7292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
      0.083333336 = coord(2/24)
    
  13. Shatford, S.: Analyzing the subject of a picture : a theoretical approach (1986) 0.00
    Date
    7. 1.2007 13:00:29
    Footnote
    With a table on content analysis (in the translation by Lebrecht)
  14. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.00
    Abstract
    This paper, presented at the Ottawa Conference on the Conceptual Basis of the Classification of Knowledge in 1971, is one of Fairthorne's more perceptive works and deserves a wide audience, especially as it breaks new ground in classification theory. In discussing the notion of discourse, he makes a "distinction between what discourse mentions and what discourse is about" [emphasis added], considered as a "fundamental factor to the relativistic nature of bibliographic classification" (p. 360). A table of mathematical functions, for example, describes exactly something represented by a collection of digits, but, without a preface, this table does not fit into a broader context. Some indication of the author's intent is needed to fit the table into that context. This intent may appear in a title, chapter heading, class number or some other aid. Discourse on and discourse about something "cannot be determined solely from what it mentions" (p. 361). Some kind of background is needed. Fairthorne further develops the theme that knowledge about a subject comes from previous knowledge, thus adding a temporal factor to classification. "Some extra textual criteria are needed" in order to classify (p. 362). For example, "documents that mention the same things, but are on different topics, will have different ancestors, in the sense of preceding documents to which they are linked by various bibliographic characteristics ... [and] ... they will have different descendants" (p. 363). The classifier has to distinguish between documents that "mention exactly the same thing" but are not about the same thing. The classifier does this by classifying "sets of documents that form their histories, their bibliographic world lines" (p. 363). The practice of citation is one method of performing the linking and presents a "fan" of documents connected by a chain of citations to past work.
    The fan is seen as the effect of generations of documents, each generation connected to the previous one, and all ancestral to the present document. Thus, there are levels in temporal structure (that is, antecedent and successor documents), and these require that documents be identified in relation to other documents. This gives a set of documents an "irrevocable order," a loose order which Fairthorne calls "bibliographic time," and which is "generated by the fact of continual growth" (p. 364). He does not consider "bibliographic time" to be equivalent to physical time because bibliographic events, as part of communication, require delay. Sets of documents, as indicated above, rather than single works, are used in classification. While an event, a person, or a unique feature of the environment may create a class of one (such as the French Revolution, Napoleon, or Niagara Falls), revolutions, emperors, and waterfalls are sets which, as sets, will subsume individuals and make normal classes.
    The fan of past documents may be seen across time as a philosophical "wake," translated documents as a sideways relationship, and future documents as another fan spreading forward from a given document (p. 365). The "overlap of reading histories can be used to detect common interests among readers" (p. 365), and readers may be classified accordingly. Finally, Fairthorne rejects the notion of a "general" classification, which he regards as a mirage, to be replaced by a citation-type network to identify classes. An interesting feature of his work lies in his linkage between old and new documents via a bibliographic method (citations, authors' names, imprints, style, and vocabulary) rather than topical (subject) terms. This is an indirect method of creating classes. The subject (aboutness) is conceived as a finite, common sharing of knowledge over time (past, present, and future) as opposed to the more common hierarchy of topics in an infinite schema assumed to be universally useful. Fairthorne, a mathematician by training, is a prolific writer on the foundations of classification and information. His professional career includes work with the Royal Engineers Chemical Warfare Section and the Royal Aircraft Establishment (RAE). He was the founder of the Computing Unit which became the RAE Mathematics Department.
    Footnote
    Reprint of the original article with commentary by the editors
  15. Laffal, J.: ¬A concept analysis of Jonathan Swift's 'Tale of a tub' and 'Gulliver's travels' (1995) 0.00
    Date
    8. 3.1997 10:05:29
    Source
    Computers and the humanities. 29(1995) no.5, S.339-361
  16. Martindale, C.; McKenzie, D.: On the utility of content analysis in author attribution : 'The federalist' (1995) 0.00
    Date
    8. 3.1997 10:05:29
    Source
    Computers and the humanities. 29(1995) no.4, S.259-270
  17. Gardin, J.C.: Document analysis and linguistic theory (1973) 0.00
    Source
    Journal of documentation. 29(1973) no.2, S.137-168
  18. Wilson, P.: Subjects and the sense of position (1985) 0.00
    Footnote
    Reprint of the original article with commentary by the editors
    Original in: Wilson, P.: Two kinds of power: an essay on bibliographical control. Berkeley: Univ. of California Press 1968. S.69-92.
  19. Vieira, L.: Modèle d'analyse pour une classification du document iconographique (1999) 0.00
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
  20. Baxendale, P.: Content analysis, specification and control (1966) 0.00