Search (95 results, page 1 of 5)

  • × type_ss:"x"
  • × year_i:[2010 TO 2020}
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.34
    0.34479102 = product of:
      0.9851172 = sum of:
        0.091462016 = product of:
          0.27438605 = sum of:
            0.27438605 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.27438605 = score(doc=973,freq=2.0), product of:
                0.24410787 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.02879306 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
        0.27438605 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.27438605 = score(doc=973,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
        0.018752426 = weight(_text_:und in 973) [ClassicSimilarity], result of:
          0.018752426 = score(doc=973,freq=2.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.29385152 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
        0.27438605 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.27438605 = score(doc=973,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
        0.032992136 = weight(_text_:der in 973) [ClassicSimilarity], result of:
          0.032992136 = score(doc=973,freq=6.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.5129615 = fieldWeight in 973, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
        0.018752426 = weight(_text_:und in 973) [ClassicSimilarity], result of:
          0.018752426 = score(doc=973,freq=2.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.29385152 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
        0.27438605 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.27438605 = score(doc=973,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
      0.35 = coord(7/20)
    
    Content
    Vgl.: http%3A%2F%2Fcreativechoice.org%2Fdoc%2FHansJonas.pdf&usg=AOvVaw1TM3teaYKgABL5H9yoIifA&opi=89978449.
    Footnote
    Inauguraldissertation zur Erlangung der Doktorwürde Vorgelegt der Philosophischen Fakultät I der Humboldt-Universität zu Berlin.
  2. Siever, C.M.: Multimodale Kommunikation im Social Web : Forschungsansätze und Analysen zu Text-Bild-Relationen (2015) 0.25
    0.24765567 = product of:
      0.49531135 = sum of:
        0.19451343 = weight(_text_:15860 in 4056) [ClassicSimilarity], result of:
          0.19451343 = score(doc=4056,freq=4.0), product of:
            0.26774642 = queryWeight, product of:
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.02879306 = queryNorm
            0.72648376 = fieldWeight in 4056, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4056)
        0.06103005 = weight(_text_:medien in 4056) [ClassicSimilarity], result of:
          0.06103005 = score(doc=4056,freq=6.0), product of:
            0.1355183 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.02879306 = queryNorm
            0.45034546 = fieldWeight in 4056, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4056)
        0.013533398 = weight(_text_:und in 4056) [ClassicSimilarity], result of:
          0.013533398 = score(doc=4056,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.21206908 = fieldWeight in 4056, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4056)
        0.022467308 = product of:
          0.044934615 = sum of:
            0.044934615 = weight(_text_:kommunikationswissenschaften in 4056) [ClassicSimilarity], result of:
              0.044934615 = score(doc=4056,freq=2.0), product of:
                0.15303716 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.02879306 = queryNorm
                0.29361898 = fieldWeight in 4056, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4056)
          0.5 = coord(1/2)
        0.0194408 = weight(_text_:der in 4056) [ClassicSimilarity], result of:
          0.0194408 = score(doc=4056,freq=12.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.30226544 = fieldWeight in 4056, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4056)
        0.118864596 = weight(_text_:kommunikation in 4056) [ClassicSimilarity], result of:
          0.118864596 = score(doc=4056,freq=16.0), product of:
            0.14799947 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.02879306 = queryNorm
            0.8031421 = fieldWeight in 4056, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4056)
        0.013533398 = weight(_text_:und in 4056) [ClassicSimilarity], result of:
          0.013533398 = score(doc=4056,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.21206908 = fieldWeight in 4056, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4056)
        0.017251236 = weight(_text_:des in 4056) [ClassicSimilarity], result of:
          0.017251236 = score(doc=4056,freq=4.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.21635216 = fieldWeight in 4056, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4056)
        0.004162154 = weight(_text_:in in 4056) [ClassicSimilarity], result of:
          0.004162154 = score(doc=4056,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.10626988 = fieldWeight in 4056, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4056)
        0.030515024 = product of:
          0.06103005 = sum of:
            0.06103005 = weight(_text_:medien in 4056) [ClassicSimilarity], result of:
              0.06103005 = score(doc=4056,freq=6.0), product of:
                0.1355183 = queryWeight, product of:
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.02879306 = queryNorm
                0.45034546 = fieldWeight in 4056, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4056)
          0.5 = coord(1/2)
      0.5 = coord(10/20)
    
    Abstract
    Multimodalität ist ein typisches Merkmal der Kommunikation im Social Web. Der Fokus dieses Bandes liegt auf der Kommunikation in Foto-Communitys, insbesondere auf den beiden kommunikativen Praktiken des Social Taggings und des Verfassens von Notizen innerhalb von Bildern. Bei den Tags stehen semantische Text-Bild-Relationen im Vordergrund: Tags dienen der Wissensrepräsentation, eine adäquate Versprachlichung der Bilder ist folglich unabdingbar. Notizen-Bild-Relationen sind aus pragmatischer Perspektive von Interesse: Die Informationen eines Kommunikats werden komplementär auf Text und Bild verteilt, was sich in verschiedenen sprachlichen Phänomenen niederschlägt. Ein diachroner Vergleich mit der Postkartenkommunikation sowie ein Exkurs zur Kommunikation mit Emojis runden das Buch ab.
    BK
    05.38 Neue elektronische Medien Kommunikationswissenschaft
    Classification
    AP 15860
    05.38 Neue elektronische Medien Kommunikationswissenschaft
    Field
    Kommunikationswissenschaften
    RSWK
    Social Media / Multimodalität / Kommunikation / Social Tagging (DNB)
    Text / Bild / Computerunterstützte Kommunikation / Soziale Software (SBB)
    RVK
    AP 15860
    Series
    Sprache - Medien - Innovationen ; 8
    Subject
    Social Media / Multimodalität / Kommunikation / Social Tagging (DNB)
    Text / Bild / Computerunterstützte Kommunikation / Soziale Software (SBB)
  3. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.21
    0.20826851 = product of:
      0.46281892 = sum of:
        0.038109176 = product of:
          0.11432753 = sum of:
            0.11432753 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.11432753 = score(doc=4388,freq=2.0), product of:
                0.24410787 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.02879306 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.11432753 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.11432753 = score(doc=4388,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.013533398 = weight(_text_:und in 4388) [ClassicSimilarity], result of:
          0.013533398 = score(doc=4388,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.21206908 = fieldWeight in 4388, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.11432753 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.11432753 = score(doc=4388,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.026322968 = weight(_text_:der in 4388) [ClassicSimilarity], result of:
          0.026322968 = score(doc=4388,freq=22.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.40926933 = fieldWeight in 4388, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.013533398 = weight(_text_:und in 4388) [ClassicSimilarity], result of:
          0.013533398 = score(doc=4388,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.21206908 = fieldWeight in 4388, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.021128364 = weight(_text_:des in 4388) [ClassicSimilarity], result of:
          0.021128364 = score(doc=4388,freq=6.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.2649762 = fieldWeight in 4388, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.11432753 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.11432753 = score(doc=4388,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.0072090626 = weight(_text_:in in 4388) [ClassicSimilarity], result of:
          0.0072090626 = score(doc=4388,freq=12.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.18406484 = fieldWeight in 4388, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.45 = coord(9/20)
    
    Abstract
    Werden Maschinen mit Begriffen beschrieben, die ursprünglich der Beschreibung des Menschen dienen, so liegt zunächst der Verdacht nahe, dass jene Maschinen spezifischmenschliche Fähigkeiten oder Eigenschaften besitzen. Für körperliche Fähigkeiten, die mechanisch nachgeahmt werden, hat sich in der Alltagssprache eine anthropomorphisierende Sprechweise bereits etabliert. So wird kaum in Frage gestellt, dass bestimmte Maschinen weben, backen, sich bewegen oder arbeiten können. Bei nichtkörperlichen Eigenschaften, etwa kognitiver, sozialer oder moralischer Art sieht dies jedoch anders aus. Dass mittlerweile intelligente und rechnende Maschinen im alltäglichen Sprachgebrauch Eingang gefunden haben, wäre jedoch undenkbar ohne den langjährigen Diskurs über Künstliche Intelligenz, welcher insbesondere die zweite Hälfte des vergangenen Jahrhunderts geprägt hat. In jüngster Zeit ist es der Autonomiebegriff, welcher zunehmend Verwendung zur Beschreibung neuer Technologien findet, wie etwa "autonome mobile Roboter" oder "autonome Systeme". Dem Begriff nach rekurriert die "Autonomie" jener Technologien auf eine bestimmte Art technologischen Fortschritts, die von der Fähigkeit zur Selbstgesetzgebung herrührt. Dies wirft aus philosophischer Sicht jedoch die Frage auf, wie die Selbstgesetzgebung in diesem Fall definiert ist, zumal sich der Autonomiebegriff in der Philosophie auf die politische oder moralische Selbstgesetzgebung von Menschen oder Menschengruppen beziehungsweise ihre Handlungen bezieht. Im Handbuch Robotik hingegen führt der Autor geradezu beiläufig die Bezeichnung "autonom" ein, indem er prognostiziert, dass "[.] autonome Roboter in Zukunft sogar einen Großteil der Altenbetreuung übernehmen werden."
    Content
    Redigierte Version der Magisterarbeit, Karlsruhe, KIT, Inst. für Philosophie, 2012.
    Footnote
    Vgl. unter: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  4. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.11
    0.10779485 = product of:
      0.4311794 = sum of:
        0.13719302 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.13719302 = score(doc=563,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.13719302 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.13719302 = score(doc=563,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.13719302 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.13719302 = score(doc=563,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.007897133 = weight(_text_:in in 563) [ClassicSimilarity], result of:
          0.007897133 = score(doc=563,freq=10.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.20163295 = fieldWeight in 563, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.011703186 = product of:
          0.023406371 = sum of:
            0.023406371 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.023406371 = score(doc=563,freq=2.0), product of:
                0.10082839 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02879306 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.25 = coord(5/20)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
    A Thesis presented to The University of Guelph In partial fulfilment of requirements for the degree of Master of Science in Computer Science. Vgl. Unter: http://www.inf.ufrgs.br%2F~ceramisch%2Fdownload_files%2Fpublications%2F2009%2Fp01.pdf.
    Date
    10. 1.2013 19:22:47
  5. Sommer, M.: Automatische Generierung von DDC-Notationen für Hochschulveröffentlichungen (2012) 0.11
    0.10736195 = product of:
      0.26840487 = sum of:
        0.059796993 = weight(_text_:medien in 587) [ClassicSimilarity], result of:
          0.059796993 = score(doc=587,freq=4.0), product of:
            0.1355183 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.02879306 = queryNorm
            0.44124663 = fieldWeight in 587, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.046875 = fieldNorm(doc=587)
        0.026519936 = weight(_text_:und in 587) [ClassicSimilarity], result of:
          0.026519936 = score(doc=587,freq=16.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.41556883 = fieldWeight in 587, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=587)
        0.03011756 = weight(_text_:der in 587) [ClassicSimilarity], result of:
          0.03011756 = score(doc=587,freq=20.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.46826762 = fieldWeight in 587, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=587)
        0.07131875 = weight(_text_:kommunikation in 587) [ClassicSimilarity], result of:
          0.07131875 = score(doc=587,freq=4.0), product of:
            0.14799947 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.02879306 = queryNorm
            0.48188522 = fieldWeight in 587, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.046875 = fieldNorm(doc=587)
        0.026519936 = weight(_text_:und in 587) [ClassicSimilarity], result of:
          0.026519936 = score(doc=587,freq=16.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.41556883 = fieldWeight in 587, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=587)
        0.020701483 = weight(_text_:des in 587) [ClassicSimilarity], result of:
          0.020701483 = score(doc=587,freq=4.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.25962257 = fieldWeight in 587, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.046875 = fieldNorm(doc=587)
        0.003531705 = weight(_text_:in in 587) [ClassicSimilarity], result of:
          0.003531705 = score(doc=587,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.09017298 = fieldWeight in 587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=587)
        0.029898496 = product of:
          0.059796993 = sum of:
            0.059796993 = weight(_text_:medien in 587) [ClassicSimilarity], result of:
              0.059796993 = score(doc=587,freq=4.0), product of:
                0.1355183 = queryWeight, product of:
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.02879306 = queryNorm
                0.44124663 = fieldWeight in 587, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.046875 = fieldNorm(doc=587)
          0.5 = coord(1/2)
      0.4 = coord(8/20)
    
    Abstract
    Das Thema dieser Bachelorarbeit ist die automatische Generierung von Notationen der Dewey-Dezimalklassifikation für Metadaten. Die Metadaten sind im Dublin-Core-Format und stammen vom Server für wissenschaftliche Schriften der Hochschule Hannover. Zu Beginn erfolgt eine allgemeine Einführung über die Methoden und Hauptanwendungsbereiche des automatischen Klassifizierens. Danach werden die Dewey-Dezimalklassifikation und der Prozess der Metadatengewinnung beschrieben. Der theoretische Teil endet mit der Beschreibung von zwei Projekten. In dem ersten Projekt wurde ebenfalls versucht Metadaten mit Notationen der Dewey-Dezimalklassifikation anzureichern. Das Ergebnis des zweiten Projekts ist eine Konkordanz zwischen der Schlagwortnormdatei und der Dewey-Dezimalklassifikation. Diese Konkordanz wurde im praktischen Teil dieser Arbeit dazu benutzt um automatisch Notationen der Dewey-Dezimalklassifikation zu vergeben.
    Content
    Vgl. unter: http://opus.bsz-bw.de/fhhv/volltexte/2012/397/pdf/Bachelorarbeit_final_Korrektur01.pdf. Bachelorarbeit, Hochschule Hannover, Fakultät III - Medien, Information und Design, Abteilung Information und Kommunikation, Studiengang Informationsmanagement
    Imprint
    Hannover : Hochschule Hannover, Fakultät III - Medien, Information und Design, Abteilung Information und Kommunikation
  6. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.11
    0.106397815 = product of:
      0.42559126 = sum of:
        0.030487342 = product of:
          0.09146202 = sum of:
            0.09146202 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.09146202 = score(doc=5820,freq=2.0), product of:
                0.24410787 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.02879306 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.12934683 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.12934683 = score(doc=5820,freq=4.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.12934683 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.12934683 = score(doc=5820,freq=4.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.12934683 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.12934683 = score(doc=5820,freq=4.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.00706341 = weight(_text_:in in 5820) [ClassicSimilarity], result of:
          0.00706341 = score(doc=5820,freq=18.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.18034597 = fieldWeight in 5820, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.25 = coord(5/20)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https%3A%2F%2Fwww.cs.cmu.edu%2F~cx%2Fpapers%2Fknowledge_based_text_representation.pdf&usg=AOvVaw0SaTSvhWLTh__Uz_HtOtl3.
  7. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.10
    0.09735401 = product of:
      0.38941604 = sum of:
        0.038109176 = product of:
          0.11432753 = sum of:
            0.11432753 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.11432753 = score(doc=4997,freq=2.0), product of:
                0.24410787 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.02879306 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
        0.11432753 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.11432753 = score(doc=4997,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.11432753 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.11432753 = score(doc=4997,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.11432753 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.11432753 = score(doc=4997,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.008324308 = weight(_text_:in in 4997) [ClassicSimilarity], result of:
          0.008324308 = score(doc=4997,freq=16.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.21253976 = fieldWeight in 4997, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
      0.25 = coord(5/20)
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state of the art NLP tools used in generating formal lightweight ontologies from their informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, faceted lightweight ontology (FLO). FLO is a lightweight ontology in which terms, present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of the groups of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
    PhD Dissertation at International Doctorate School in Information and Communication Technology. Vgl.: https%3A%2F%2Fcore.ac.uk%2Fdownload%2Fpdf%2F150083013.pdf&usg=AOvVaw2n-qisNagpyT0lli_6QbAQ.
  8. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.10
    0.09631348 = product of:
      0.3852539 = sum of:
        0.038109176 = product of:
          0.11432753 = sum of:
            0.11432753 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.11432753 = score(doc=855,freq=2.0), product of:
                0.24410787 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.02879306 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.33333334 = coord(1/3)
        0.11432753 = weight(_text_:2f in 855) [ClassicSimilarity], result of:
          0.11432753 = score(doc=855,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.46834838 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
        0.11432753 = weight(_text_:2f in 855) [ClassicSimilarity], result of:
          0.11432753 = score(doc=855,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.46834838 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
        0.11432753 = weight(_text_:2f in 855) [ClassicSimilarity], result of:
          0.11432753 = score(doc=855,freq=2.0), product of:
            0.24410787 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02879306 = queryNorm
            0.46834838 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
        0.004162154 = weight(_text_:in in 855) [ClassicSimilarity], result of:
          0.004162154 = score(doc=855,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.10626988 = fieldWeight in 855, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
      0.25 = coord(5/20)
    
    Abstract
    Converting UDC numbers manually to a complex format such as the one mentioned above is an unrealistic expectation; supporting building these representations, as far as possible automatically, is a well-founded requirement. An additional advantage of this approach is that the existing records could also be processed and converted. In my dissertation I would like to prove also that it is possible to design and implement an algorithm that is able to convert pre-coordinated UDC numbers into the introduced format by identifying all their elements and revealing their whole syntactic structure as well. In my dissertation I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support building, retrieving, and analyzing UDC numbers without compromises, by taking the whole syntactic richness of the scheme by storing the UDC numbers reserving the meaning of pre-coordination. The research has also included the implementation of a software that parses UDC classmarks attended to prove that such solution can be applied automatically without any additional effort or even retrospectively on existing collections.
    Content
    Vgl. auch: New automatic interpreter for complex UDC numbers. Unter: <https%3A%2F%2Fudcc.org%2Ffiles%2FAttilaPiros_EC_36-37_2014-2015.pdf&usg=AOvVaw3kc9CwDDCWP7aArpfjrs5b>
  9. Holze, J.: Digitales Wissen : bildungsrelevante Relationen zwischen Strukturen digitaler Medien und Konzepten von Wissen (2017) 0.09
    0.090752214 = product of:
      0.25929204 = sum of:
        0.097648084 = weight(_text_:medien in 3984) [ClassicSimilarity], result of:
          0.097648084 = score(doc=3984,freq=6.0), product of:
            0.1355183 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.02879306 = queryNorm
            0.72055274 = fieldWeight in 3984, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0625 = fieldNorm(doc=3984)
        0.017679956 = weight(_text_:und in 3984) [ClassicSimilarity], result of:
          0.017679956 = score(doc=3984,freq=4.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.27704588 = fieldWeight in 3984, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=3984)
        0.03594769 = product of:
          0.07189538 = sum of:
            0.07189538 = weight(_text_:kommunikationswissenschaften in 3984) [ClassicSimilarity], result of:
              0.07189538 = score(doc=3984,freq=2.0), product of:
                0.15303716 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.02879306 = queryNorm
                0.46979034 = fieldWeight in 3984, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3984)
          0.5 = coord(1/2)
        0.021994755 = weight(_text_:der in 3984) [ClassicSimilarity], result of:
          0.021994755 = score(doc=3984,freq=6.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.34197432 = fieldWeight in 3984, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=3984)
        0.017679956 = weight(_text_:und in 3984) [ClassicSimilarity], result of:
          0.017679956 = score(doc=3984,freq=4.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.27704588 = fieldWeight in 3984, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=3984)
        0.019517547 = weight(_text_:des in 3984) [ClassicSimilarity], result of:
          0.019517547 = score(doc=3984,freq=2.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.24477452 = fieldWeight in 3984, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0625 = fieldNorm(doc=3984)
        0.048824042 = product of:
          0.097648084 = sum of:
            0.097648084 = weight(_text_:medien in 3984) [ClassicSimilarity], result of:
              0.097648084 = score(doc=3984,freq=6.0), product of:
                0.1355183 = queryWeight, product of:
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.02879306 = queryNorm
                0.72055274 = fieldWeight in 3984, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3984)
          0.5 = coord(1/2)
      0.35 = coord(7/20)
    
    Abstract
    Wie verändern sich Begriffe von Wissen und Wissensgenerierung im Zeitalter digitaler Medien? Welche Implikationen können daraus für den Begriff der (Strukturalen Medien-)Bildung innerhalb einer Disziplin der Medienpädagogik abgeleitet werden? Inwieweit findet eine Transformation hin zu einem qualitativ anderen Wissen, gar einem digitalen Wissen statt?
    Field
    Kommunikationswissenschaften
    Footnote
    Dissertation zur Erlangung des akademischen Grades Dr. phil., genehmigt durch die Fakultät für Humanwissenschaften der Otto-von-Guericke-Universität Magdeburg.
  10. Walther, T.: Erschließung historischer Bestände mit RDA (2015) 0.09
    0.08541897 = product of:
      0.21354742 = sum of:
        0.049830828 = weight(_text_:medien in 2437) [ClassicSimilarity], result of:
          0.049830828 = score(doc=2437,freq=4.0), product of:
            0.1355183 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.02879306 = queryNorm
            0.36770552 = fieldWeight in 2437, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2437)
        0.027066795 = weight(_text_:und in 2437) [ClassicSimilarity], result of:
          0.027066795 = score(doc=2437,freq=24.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.42413816 = fieldWeight in 2437, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2437)
        0.022448301 = weight(_text_:der in 2437) [ClassicSimilarity], result of:
          0.022448301 = score(doc=2437,freq=16.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.34902605 = fieldWeight in 2437, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2437)
        0.042024978 = weight(_text_:kommunikation in 2437) [ClassicSimilarity], result of:
          0.042024978 = score(doc=2437,freq=2.0), product of:
            0.14799947 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.02879306 = queryNorm
            0.28395358 = fieldWeight in 2437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2437)
        0.027066795 = weight(_text_:und in 2437) [ClassicSimilarity], result of:
          0.027066795 = score(doc=2437,freq=24.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.42413816 = fieldWeight in 2437, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2437)
        0.017251236 = weight(_text_:des in 2437) [ClassicSimilarity], result of:
          0.017251236 = score(doc=2437,freq=4.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.21635216 = fieldWeight in 2437, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2437)
        0.0029430876 = weight(_text_:in in 2437) [ClassicSimilarity], result of:
          0.0029430876 = score(doc=2437,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.07514416 = fieldWeight in 2437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2437)
        0.024915414 = product of:
          0.049830828 = sum of:
            0.049830828 = weight(_text_:medien in 2437) [ClassicSimilarity], result of:
              0.049830828 = score(doc=2437,freq=4.0), product of:
                0.1355183 = queryWeight, product of:
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.02879306 = queryNorm
                0.36770552 = fieldWeight in 2437, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2437)
          0.5 = coord(1/2)
      0.4 = coord(8/20)
    
    Abstract
    Die vorliegende Arbeit befasst sich mit der Erschließung historischer Bestände nach RDA. Inhaltlich begrenzt sie sich auf ausgewählte spezifische Merkmale alter Drucke und ihrer Formalerschließung mit RDA. Die Umstellung auf das neue Regelwerk, welches Anwen-dungsrichtlinien für alle Materialien vereinen soll, und einige in den Bibliotheksgremien diskutierte Aspekte wie z. B. "cataloger's judgement" haben die zentrale Frage dieser Arbeit veranlasst: Ist RDA für die Erschließung alter Drucke geeignet? Die Arbeit untersucht spezifische Merkmale alter Drucke. Sie betrachtet die bestehende Erschließungspraxis alter Drucke und geht auf die Grundlagen und wesentliche Inhalte von RDA ein. Zu wissenschaftlichen Methoden der Arbeit gehören der Regelwerkevergleich und das Experteninterview. Die Gegenüberstellung von RDA und den RAK-WB lässt erkennen, dass RDA sich prinzipiell für die Erschließung alter Drucke eignet und Elemente der bibliographischen Beschreibung ähnlich wie die RAK-WB abbildet. Wegen des allgemeinen Charakters sind einige Richtlinien von RDA noch zu konkretisieren. Mehrwert gegenüber den RAK-WB versprechen die normierten Sucheinstiege, die Erfassung von Werken und Beziehungen. Das Interview mit Christoph Boveland, dem Experten auf dem Gebiet Katalogisierung Alter Drucke, bringt neue Erkenntnisse über geplante Empfehlungen zur Erschließung alter Drucke mit RDA, Erweiterung des Standardelemente-Set usw. Basierend auf den Ergebnissen der Gegenüberstellung und der Meinung von Christoph Boveland wird eine Aussage zur Entwicklung der Lehrveranstaltung "Formalerschließung historischer Bestände" an der Hochschule Hannover getroffen.
    Content
    Bachelorarbeit, Hannover: Hochschule Hannover; Fakultät III: Medien, Information und Design, im Studiengang Informationsmanagement, 2015. Vgl.: http://serwiss.bib.hs-hannover.de/frontdoor/index/index/docId/673.
    Imprint
    Hannover : Hochschule Hannover; Fakultät III: Medien, Information und Design, Abteilung Information und Kommunikation
  11. Lamparter, A.: Kompetenzprofil von Information Professionals in Unternehmen (2015) 0.07
    0.07039151 = product of:
      0.2011186 = sum of:
        0.069763154 = weight(_text_:medien in 769) [ClassicSimilarity], result of:
          0.069763154 = score(doc=769,freq=4.0), product of:
            0.1355183 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.02879306 = queryNorm
            0.51478773 = fieldWeight in 769, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0546875 = fieldNorm(doc=769)
        0.026794761 = weight(_text_:und in 769) [ClassicSimilarity], result of:
          0.026794761 = score(doc=769,freq=12.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.41987535 = fieldWeight in 769, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=769)
        0.015713813 = weight(_text_:der in 769) [ClassicSimilarity], result of:
          0.015713813 = score(doc=769,freq=4.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.24431825 = fieldWeight in 769, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=769)
        0.026794761 = weight(_text_:und in 769) [ClassicSimilarity], result of:
          0.026794761 = score(doc=769,freq=12.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.41987535 = fieldWeight in 769, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=769)
        0.017077852 = weight(_text_:des in 769) [ClassicSimilarity], result of:
          0.017077852 = score(doc=769,freq=2.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.2141777 = fieldWeight in 769, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0546875 = fieldNorm(doc=769)
        0.010092689 = weight(_text_:in in 769) [ClassicSimilarity], result of:
          0.010092689 = score(doc=769,freq=12.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.2576908 = fieldWeight in 769, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=769)
        0.034881577 = product of:
          0.069763154 = sum of:
            0.069763154 = weight(_text_:medien in 769) [ClassicSimilarity], result of:
              0.069763154 = score(doc=769,freq=4.0), product of:
                0.1355183 = queryWeight, product of:
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.02879306 = queryNorm
                0.51478773 = fieldWeight in 769, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=769)
          0.5 = coord(1/2)
      0.35 = coord(7/20)
    
    Abstract
    Information Professionals sind in Unternehmen für den professionellen und strategischen Umgang mit Informationen verantwortlich. Da es keine allgemeingültige Definition für diese Berufsgruppe gibt, wird in der Masterarbeit eine Begriffsbestimmung unternommen. Mit Hilfe dreier Methoden, einer Auswertung von relevanter Fachliteratur, die Untersuchung von einschlägigen Stellenausschreibungen und das Führen von Experteninterviews, wird ein Kompetenzprofil für Information Professionals erstellt. 16 Kompetenzen in den Bereichen Fach-, Methoden-, Sozial- und persönliche Kompetenzen geben eine Orientierung über vorhandene Fähigkeiten dieser Berufsgruppe für Personalfachleute, Vorgesetzte und Information Professionals selbst.
    Content
    Masterarbeit an der Hochschule Hannover, Fakultät III - Medien, Information und Design. Trägerin des VFI-Förderpreises 2015, Vgl.: urn:nbn:de:bsz:960-opus4-5280. http://serwiss.bib.hs-hannover.de/frontdoor/index/index/docId/528. Vgl. auch: Knoll, A. (geb. Lamparter): Kompetenzprofil von Information Professionals in Unternehmen. In: Young information professionals. 1(2016) S.1-11.
    Imprint
    Hannover : Hochschule, Fakultät III - Medien, Information und Design
  12. Ruther, D.: Möglichkeit zur Realisierung des FRBR-Modells im Rahmen des relationalen Datenbankmodells (2015) 0.07
    0.070294045 = product of:
      0.20084013 = sum of:
        0.056377143 = weight(_text_:medien in 1747) [ClassicSimilarity], result of:
          0.056377143 = score(doc=1747,freq=2.0), product of:
            0.1355183 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.02879306 = queryNorm
            0.4160113 = fieldWeight in 1747, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0625 = fieldNorm(doc=1747)
        0.021653436 = weight(_text_:und in 1747) [ClassicSimilarity], result of:
          0.021653436 = score(doc=1747,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.33931053 = fieldWeight in 1747, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=1747)
        0.03594769 = product of:
          0.07189538 = sum of:
            0.07189538 = weight(_text_:kommunikationswissenschaften in 1747) [ClassicSimilarity], result of:
              0.07189538 = score(doc=1747,freq=2.0), product of:
                0.15303716 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.02879306 = queryNorm
                0.46979034 = fieldWeight in 1747, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1747)
          0.5 = coord(1/2)
        0.021653436 = weight(_text_:und in 1747) [ClassicSimilarity], result of:
          0.021653436 = score(doc=1747,freq=6.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.33931053 = fieldWeight in 1747, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=1747)
        0.027601978 = weight(_text_:des in 1747) [ClassicSimilarity], result of:
          0.027601978 = score(doc=1747,freq=4.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.34616345 = fieldWeight in 1747, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0625 = fieldNorm(doc=1747)
        0.00941788 = weight(_text_:in in 1747) [ClassicSimilarity], result of:
          0.00941788 = score(doc=1747,freq=8.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.24046129 = fieldWeight in 1747, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1747)
        0.028188571 = product of:
          0.056377143 = sum of:
            0.056377143 = weight(_text_:medien in 1747) [ClassicSimilarity], result of:
              0.056377143 = score(doc=1747,freq=2.0), product of:
                0.1355183 = queryWeight, product of:
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.02879306 = queryNorm
                0.4160113 = fieldWeight in 1747, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1747)
          0.5 = coord(1/2)
      0.35 = coord(7/20)
    
    Abstract
    "Functional Requirements for Bibliographic Records" bezeichnet ein Datenmodell, welches es ermöglicht bibliographische Datensätze hierarchisch darzustellen. Dazu werden Entitäten definiert, welche untereinander in Verbindung stehen und so die katalogisierten Medien beschreiben. In dieser Arbeit wird das FRBR-Modell in Form einer relationalen Datenbank realisiert. Dazu wird das Programm SQL-Server 2014 genutzt, um es später mit dem linearen Datenbanksystem "Midos6" in Hinblick auf Datenmodulation und daraus resultierende Darstellungsmöglichkeiten zu vergleichen.
    Content
    Bachelorarbeit, Studiengang Bibliothekswesen, Fakultät für Informations- und Kommunikationswissenschaften, Fachhochschule Köln
  13. Pollmeier, M.: Verlagsschlagwörter als Grundlage für den Einsatz eines maschinellen Verfahrens zur verbalen Erschließung der Kinder- und Jugendliteratur durch die Deutsche Nationalbibliothek : eine Datenanalyse (2019) 0.06
    0.06399135 = product of:
      0.18283243 = sum of:
        0.049830828 = weight(_text_:medien in 1081) [ClassicSimilarity], result of:
          0.049830828 = score(doc=1081,freq=4.0), product of:
            0.1355183 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.02879306 = queryNorm
            0.36770552 = fieldWeight in 1081, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1081)
        0.031254046 = weight(_text_:und in 1081) [ClassicSimilarity], result of:
          0.031254046 = score(doc=1081,freq=32.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.48975256 = fieldWeight in 1081, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1081)
        0.027493443 = weight(_text_:der in 1081) [ClassicSimilarity], result of:
          0.027493443 = score(doc=1081,freq=24.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.42746788 = fieldWeight in 1081, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1081)
        0.031254046 = weight(_text_:und in 1081) [ClassicSimilarity], result of:
          0.031254046 = score(doc=1081,freq=32.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.48975256 = fieldWeight in 1081, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1081)
        0.012198467 = weight(_text_:des in 1081) [ClassicSimilarity], result of:
          0.012198467 = score(doc=1081,freq=2.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.15298408 = fieldWeight in 1081, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1081)
        0.005886175 = weight(_text_:in in 1081) [ClassicSimilarity], result of:
          0.005886175 = score(doc=1081,freq=8.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.15028831 = fieldWeight in 1081, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1081)
        0.024915414 = product of:
          0.049830828 = sum of:
            0.049830828 = weight(_text_:medien in 1081) [ClassicSimilarity], result of:
              0.049830828 = score(doc=1081,freq=4.0), product of:
                0.1355183 = queryWeight, product of:
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.02879306 = queryNorm
                0.36770552 = fieldWeight in 1081, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1081)
          0.5 = coord(1/2)
      0.35 = coord(7/20)
    
    Abstract
    Die Inhaltserschließung durch Schlagwörter wird aktuell in vielen Öffentlichen Bibliotheken Deutschlands zurückgefahren. Aufgrund von Personalmangel und den vielfältigen anderen bibliothekarischen Dienstleistungen, die für die Benutzer zu leisten sind, kommt sie oft zu kurz. Die Deutsche Nationalbibliothek unterstützte diese Bibliotheken bisher als wichtigster Datenlieferant, jedoch stellte sie 2017 die intellektuelle Inhaltserschließung der Kinder- und Jugendliteratur und der Belletristik ein. Um diese problematische Situation zu verbessern, wird aktuell in der Deutschen Nationalbibliothek ein Verfahren erprobt, das aus Schlagwörtern von Verlagen maschinell bibliothekarische Schlagwörter aus der Gemeinsamen Normdatei generiert. Auf die Titel der Kinder- und Jugendliteratur aus den Jahren 2018 und 2019 wurde es bereits angewendet. In dieser Arbeit geht es um eine erste Analyse dieser Erschließungsergebnisse, um Aussagen über die Nützlichkeit der Verlagsschlagwörter und des automatischen Verfahrens zu treffen. Im theoretischen Teil werden einerseits die Inhaltserschließung im bibliothekarischen Bereich und deren aktuelle Entwicklungen hinsichtlich der Automatisierung beschrieben. Andererseits wird näher auf die Erschließungspraxis in der Deutschen Nationalbibliothek hinsichtlich der Automatisierung und der Kinder- und Jugendliteratur eingegangen. Im Analyseteil werden sowohl die Verlagsschlagwörter als auch die bibliothekarischen Schlagwörter nach festgelegten Kriterien untersucht und schließlich miteinander verglichen.
    Footnote
    Bachelorarbeit an der Hochschule für Technik, Wirtschaft und Kultur Leipzig Fakultät Informatik und Medien Studiengang Bibliotheks- und Informationswissenschaft.
    Imprint
    Leipzig : Hochschule für Technik, Wirtschaft und Kultur / Fakultät Informatik und Medien
  14. Bös, K.: Aspektorientierte Inhaltserschließung von Romanen und Bildern : ein Vergleich der Ansätze von Annelise Mark Pejtersen und Sara Shatford (2012) 0.06
    0.061835937 = product of:
      0.17667411 = sum of:
        0.049830828 = weight(_text_:medien in 400) [ClassicSimilarity], result of:
          0.049830828 = score(doc=400,freq=4.0), product of:
            0.1355183 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.02879306 = queryNorm
            0.36770552 = fieldWeight in 400, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0390625 = fieldNorm(doc=400)
        0.027066795 = weight(_text_:und in 400) [ClassicSimilarity], result of:
          0.027066795 = score(doc=400,freq=24.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.42413816 = fieldWeight in 400, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=400)
        0.022467308 = product of:
          0.044934615 = sum of:
            0.044934615 = weight(_text_:kommunikationswissenschaften in 400) [ClassicSimilarity], result of:
              0.044934615 = score(doc=400,freq=2.0), product of:
                0.15303716 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.02879306 = queryNorm
                0.29361898 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=400)
          0.5 = coord(1/2)
        0.0194408 = weight(_text_:der in 400) [ClassicSimilarity], result of:
          0.0194408 = score(doc=400,freq=12.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.30226544 = fieldWeight in 400, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=400)
        0.027066795 = weight(_text_:und in 400) [ClassicSimilarity], result of:
          0.027066795 = score(doc=400,freq=24.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.42413816 = fieldWeight in 400, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=400)
        0.005886175 = weight(_text_:in in 400) [ClassicSimilarity], result of:
          0.005886175 = score(doc=400,freq=8.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.15028831 = fieldWeight in 400, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=400)
        0.024915414 = product of:
          0.049830828 = sum of:
            0.049830828 = weight(_text_:medien in 400) [ClassicSimilarity], result of:
              0.049830828 = score(doc=400,freq=4.0), product of:
                0.1355183 = queryWeight, product of:
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.02879306 = queryNorm
                0.36770552 = fieldWeight in 400, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=400)
          0.5 = coord(1/2)
      0.35 = coord(7/20)
    
    Abstract
    Für die inhaltliche Erschließung von Sach- und Fachliteratur stehen heutzutage etablierte Verfahren und Standards zur Verfügung. Anders verhält es sich dagegen mit der Erschließung von Schöner Literatur und Bildern. Beide Medien sind sehr verschieden und haben doch eines gemeinsam. Sie lassen sich mit den Regeln für Sach- und Fachliteratur nicht zufriedenstellend inhaltlich erschließen. Dieses Problem erkannten in den 1970er und 80er Jahren beide Autoren, deren Methoden ich hier verglichen habe. Annelise Mark Pejtersen bemühte sich um eine Lösung für die Schöne Literatur und wählte dabei einen empirischen Ansatz. Sara Shatford versuchte durch theoretische Überlegungen eine Lösung für Bilder zu erarbeiten. Der empirische wie der theoretische Ansatz führten zu Methoden, die das jeweilige Medium unter verschiedenen Aspekten betrachten. Diese Aspekten basieren in beiden Fällen auf denselben Fragen. Dennoch unterscheiden sie sich stark voneinander sowohl im Hinblick auf die Inhalte, die sie aufnehmen können, als auch hinsichtlich ihrer Struktur. Eine Anwendung einer der Methoden auf das jeweils andere Medium erscheint daher nicht sinnvoll. In dieser Arbeit werden die Methoden von Pejtersen und Shatford zunächst einzeln erläutert. Im Anschluss werden die Aspekte beider Methoden vergleichend gegenübergestellt. Dazu werden ausgewählte Beispiele mit beiden Methoden erschlossen. Abschließend wird geprüft, ob die wechselseitige Erschließung, wie sie im Vergleich angewendet wurde, in der Praxis sinnvoll ist und ob es Medien gibt, deren Erschließung mit beiden Methoden interessant wäre.
    Imprint
    Köln : Fachhochschule / Fakultät für Informations- und Kommunikationswissenschaften
  15. Glaesener, L.: Automatisches Indexieren einer informationswissenschaftlichen Datenbank mit Mehrwortgruppen (2012) 0.06
    0.060873833 = product of:
      0.17392524 = sum of:
        0.027954467 = weight(_text_:und in 401) [ClassicSimilarity], result of:
          0.027954467 = score(doc=401,freq=10.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.438048 = fieldWeight in 401, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=401)
        0.05083771 = product of:
          0.10167542 = sum of:
            0.10167542 = weight(_text_:kommunikationswissenschaften in 401) [ClassicSimilarity], result of:
              0.10167542 = score(doc=401,freq=4.0), product of:
                0.15303716 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.02879306 = queryNorm
                0.6643839 = fieldWeight in 401, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0625 = fieldNorm(doc=401)
          0.5 = coord(1/2)
        0.025397355 = weight(_text_:der in 401) [ClassicSimilarity], result of:
          0.025397355 = score(doc=401,freq=8.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.3948779 = fieldWeight in 401, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=401)
        0.027954467 = weight(_text_:und in 401) [ClassicSimilarity], result of:
          0.027954467 = score(doc=401,freq=10.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.438048 = fieldWeight in 401, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=401)
        0.019517547 = weight(_text_:des in 401) [ClassicSimilarity], result of:
          0.019517547 = score(doc=401,freq=2.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.24477452 = fieldWeight in 401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0625 = fieldNorm(doc=401)
        0.006659447 = weight(_text_:in in 401) [ClassicSimilarity], result of:
          0.006659447 = score(doc=401,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.17003182 = fieldWeight in 401, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=401)
        0.015604248 = product of:
          0.031208497 = sum of:
            0.031208497 = weight(_text_:22 in 401) [ClassicSimilarity], result of:
              0.031208497 = score(doc=401,freq=2.0), product of:
                0.10082839 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02879306 = queryNorm
                0.30952093 = fieldWeight in 401, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=401)
          0.5 = coord(1/2)
      0.35 = coord(7/20)
    
    Abstract
    Ein Bericht über die Ergebnisse und die Prozessanalyse einer automatischen Indexierung mit Mehrwortgruppen. Diese Bachelorarbeit beschreibt, inwieweit der Inhalt informationswissenschaftlicher Fachtexte durch informationswissenschaftliches Fachvokabular erschlossen werden kann und sollte und dass in diesen wissenschaftlichen Texten ein Großteil der fachlichen Inhalte in Mehrwortgruppen vorkommt. Die Ergebnisse wurden durch eine automatische Indexierung mit Mehrwortgruppen mithilfe des Programme Lingo an einer informationswissenschaftlichen Datenbank ermittelt.
    Content
    Bachelorarbeit im Studiengang Bibliothekswesen der Fakultät für Informations- und Kommunikationswissenschaften an der Fachhochschule Köln.
    Date
    11. 9.2012 19:43:22
    Imprint
    Köln : Fachhochschule / Fakultät für Informations- und Kommunikationswissenschaften
  16. Waldhör, A.: Erstellung einer Konkordanz zwischen Basisklassifikation (BK) und Regensburger Verbundklassifikation (RVK) für den Fachbereich Recht (2012) 0.06
    0.057135247 = product of:
      0.16324356 = sum of:
        0.049830828 = weight(_text_:medien in 596) [ClassicSimilarity], result of:
          0.049830828 = score(doc=596,freq=4.0), product of:
            0.1355183 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.02879306 = queryNorm
            0.36770552 = fieldWeight in 596, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0390625 = fieldNorm(doc=596)
        0.019139115 = weight(_text_:und in 596) [ClassicSimilarity], result of:
          0.019139115 = score(doc=596,freq=12.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.29991096 = fieldWeight in 596, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=596)
        0.029696314 = weight(_text_:der in 596) [ClassicSimilarity], result of:
          0.029696314 = score(doc=596,freq=28.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.46171808 = fieldWeight in 596, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=596)
        0.019139115 = weight(_text_:und in 596) [ClassicSimilarity], result of:
          0.019139115 = score(doc=596,freq=12.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.29991096 = fieldWeight in 596, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=596)
        0.012198467 = weight(_text_:des in 596) [ClassicSimilarity], result of:
          0.012198467 = score(doc=596,freq=2.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.15298408 = fieldWeight in 596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0390625 = fieldNorm(doc=596)
        0.008324308 = weight(_text_:in in 596) [ClassicSimilarity], result of:
          0.008324308 = score(doc=596,freq=16.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.21253976 = fieldWeight in 596, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=596)
        0.024915414 = product of:
          0.049830828 = sum of:
            0.049830828 = weight(_text_:medien in 596) [ClassicSimilarity], result of:
              0.049830828 = score(doc=596,freq=4.0), product of:
                0.1355183 = queryWeight, product of:
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.02879306 = queryNorm
                0.36770552 = fieldWeight in 596, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=596)
          0.5 = coord(1/2)
      0.35 = coord(7/20)
    
    Abstract
    Ziel der vorliegenden Arbeit war die Erstellung einer Konkordanz zwischen der Regensburger Verbundklassifikation (RVK) und der Basisklassifikation (BK) für den Fachbereich Recht. Die Erstellung von Konkordanzen ist im bibliothekarischen Bereichmehrfach von Interesse, da einerseits Notationen verschiedener Klassifikationssysteme zusammengeführt werden und somit eine höhere Datendichte erreicht werden kann. Zum anderen können Konkordanzen in der Suchmaschinentechnologie Primo als "Werkzeug" bei der facettierten Suche eingesetzt werden. Die Arbeit gliedert sich in zwei Teile. Der erste (theoretische) Teil beschäftigt sich mit Klassifikationen als Hilfsmittel für die Freihandaufstellung und als Teil der klassifikatorischen Sacherschließung. Im Anschluss daran werden drei große Klassifikationssysteme, die im Rahmen der Sacherschließung in Österreich eine wesentliche Rolle spielen (Verbundklassifikationen des OBV), dargestellt. Die Basisklassifikation und die Regensburger Verbundklassifikation werden kurz beschrieben, es wird untersucht wie juristische Medien in diesen Klassifikationen abgebildet werden. In diesem Zusammenhang wird auch der aktuelle Stand der RVK Erweiterung betreffend österreichisches Recht erörtert. Die Dewey - Dezimal - Klassifikation (DDC) wird auf ihre generelle Eignung als Klassifikation für juristische Medien genauer, anhand mehrerer praktischer Beispiele, untersucht. Insbesondere wird die "Konkordanzfähigkeit" der DDC im Hinblick auf die beiden anderen Systeme betreffend den Fachbereich Recht ermittelt. Ein kurzer Ausblick auf Unterschiede zwischen der angloamerikanischen Rechtsordnung und dem europäischen Civil Law ergänzt die Ausführungen zur DDC. Der zweite (praktische) Teil beinhaltet die Konkordanztabelle in Form einer Microsoft Excel Tabelle mit einem ausführlichen Kommentar. Diese Tabelle liegt auch in einer verkürzten Form vor, die für die praktische Umsetzung in der Verbunddatenbank vorgesehen ist.
  17. Kleineberg, M.: ¬Die elementaren Formen der Klassifikation : ein strukturgenetischer Beitrag zur Informationsgeschichte (2012) 0.05
    0.054194473 = product of:
      0.18064824 = sum of:
        0.026794761 = weight(_text_:und in 208) [ClassicSimilarity], result of:
          0.026794761 = score(doc=208,freq=12.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.41987535 = fieldWeight in 208, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=208)
        0.081931174 = weight(_text_:formen in 208) [ClassicSimilarity], result of:
          0.081931174 = score(doc=208,freq=2.0), product of:
            0.17464934 = queryWeight, product of:
              6.0656753 = idf(docFreq=278, maxDocs=44218)
              0.02879306 = queryNorm
            0.46911815 = fieldWeight in 208, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.0656753 = idf(docFreq=278, maxDocs=44218)
              0.0546875 = fieldNorm(doc=208)
        0.022222685 = weight(_text_:der in 208) [ClassicSimilarity], result of:
          0.022222685 = score(doc=208,freq=8.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.34551817 = fieldWeight in 208, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=208)
        0.026794761 = weight(_text_:und in 208) [ClassicSimilarity], result of:
          0.026794761 = score(doc=208,freq=12.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.41987535 = fieldWeight in 208, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=208)
        0.017077852 = weight(_text_:des in 208) [ClassicSimilarity], result of:
          0.017077852 = score(doc=208,freq=2.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.2141777 = fieldWeight in 208, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0546875 = fieldNorm(doc=208)
        0.005827016 = weight(_text_:in in 208) [ClassicSimilarity], result of:
          0.005827016 = score(doc=208,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.14877784 = fieldWeight in 208, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=208)
      0.3 = coord(6/20)
    
    Abstract
    Die Kontextabhängigkeit von Klassifikationssystemen wird in kognitive, soziale, kulturelle und historische Aspekte differenziert und ein anthropologisches Grundverständnis innerhalb der Bibliotheks- und Informationswissenschaft nahegelegt. Die Ausgangsfrage von Emile Durkheim und Marcel Mauss nach einem entwicklungslogischen Zusammenhang historischer Ordnungsformen wird wieder aufgenommen und in Auseinandersetzung mit kulturrelativistischen Standpunkten ein nachklassischer Ansatz zur Strukturgenese des klassifikatorischen Denkens vorgestellt. Als methodologischer Beitrag zur Informationsgeschichte wird aufgezeigt, von welchem Bezugspunkt kulturvergleichende Forschungen zur Wissensorganisation ausgehen können.
    Imprint
    Berlin : Institut für Bibliotheks- und Informationswissenschaft der Humboldt-Universität zu Berlin
    Theme
    Geschichte der Klassifikationssysteme
  18. Maas, J.F.: SWD-Explorer : Design und Implementation eines Software-Tools zur erweiterten Suche und grafischen Navigation in der Schlagwortnormdatei (2010) 0.05
    0.04873369 = product of:
      0.13923912 = sum of:
        0.035235714 = weight(_text_:medien in 4035) [ClassicSimilarity], result of:
          0.035235714 = score(doc=4035,freq=2.0), product of:
            0.1355183 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.02879306 = queryNorm
            0.26000705 = fieldWeight in 4035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4035)
        0.019139115 = weight(_text_:und in 4035) [ClassicSimilarity], result of:
          0.019139115 = score(doc=4035,freq=12.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.29991096 = fieldWeight in 4035, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4035)
        0.031746693 = weight(_text_:der in 4035) [ClassicSimilarity], result of:
          0.031746693 = score(doc=4035,freq=32.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.4935974 = fieldWeight in 4035, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4035)
        0.019139115 = weight(_text_:und in 4035) [ClassicSimilarity], result of:
          0.019139115 = score(doc=4035,freq=12.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.29991096 = fieldWeight in 4035, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4035)
        0.012198467 = weight(_text_:des in 4035) [ClassicSimilarity], result of:
          0.012198467 = score(doc=4035,freq=2.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.15298408 = fieldWeight in 4035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4035)
        0.004162154 = weight(_text_:in in 4035) [ClassicSimilarity], result of:
          0.004162154 = score(doc=4035,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.10626988 = fieldWeight in 4035, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4035)
        0.017617857 = product of:
          0.035235714 = sum of:
            0.035235714 = weight(_text_:medien in 4035) [ClassicSimilarity], result of:
              0.035235714 = score(doc=4035,freq=2.0), product of:
                0.1355183 = queryWeight, product of:
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.02879306 = queryNorm
                0.26000705 = fieldWeight in 4035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7066307 = idf(docFreq=1085, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4035)
          0.5 = coord(1/2)
      0.35 = coord(7/20)
    
    Abstract
    Die Schlagwortnormdatei (SWD) stellt als kooperativ erstelltes, kontrolliertes Vokabular ein aus dem deutschsprachigen Raum nicht mehr wegzudenkendes Mittel zur Verschlagwortung von Medien dar. Die SWD dient primär der Vereinheitlichung der Verschlagwortung. Darüber hinaus sind in der Struktur der SWD Relationen zwischen Schlagwörtern definiert, die eine gut vorbereitete Suche stark erleichtern können. Beispiel für solche Relationen sind die Unterbegriff-/Oberbegriffrelationen (Hyponym/Hyperonym) oder die Relation der Ähnlichkeit von Begriffen. Diese Arbeit unternimmt den Versuch, durch die Erstellung eines Such- und Visualisierungstools den Umgang mit der SWD zu erleichtern. Im Fokus der Arbeit steht dabei zum einen die Aufgabe des Fachreferenten, ein Medium geeignet zu verschlagworten. Diese Aufgabe soll durch die Optimierung der technischen Suchmöglichkeiten mit Hilfe von Schlagwörtern geschehen, z.B. durch die Suche mit Hilfe Regulärer Ausdrücke oder durch die Suche entlang der hierarchischen Relationen. Zum anderen sind die beschriebenen Relationen innerhalb der SWD oft unsauber spezifiziert, was ein negativer Seiteneffekt der interdisziplinären und kooperativen Erstellung der SWD ist. Es wird gezeigt, dass durch geeignete Visualisierung viele Fehler schnell auffindbar und korrigierbar sind, was die Aufgabe der Datenpflege um ein Vielfaches vereinfacht. Diese Veröffentlichung geht zurück auf eine Master-Arbeit im postgradualen Fernstudiengang Master of Arts (Library and Information Science) an der Humboldt-Universität zu Berlin.
    Imprint
    Berlin : Institut für Bibliotheks- und Informationswissenschaft der Humboldt-Universität zu Berlin
  19. Rohner, M.: Betrachtung der Data Visualization Literacy in der angestrebten Schweizer Informationsgesellschaft (2018) 0.05
    0.046004195 = product of:
      0.15334731 = sum of:
        0.022966936 = weight(_text_:und in 4585) [ClassicSimilarity], result of:
          0.022966936 = score(doc=4585,freq=12.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.35989314 = fieldWeight in 4585, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4585)
        0.026937963 = weight(_text_:der in 4585) [ClassicSimilarity], result of:
          0.026937963 = score(doc=4585,freq=16.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.4188313 = fieldWeight in 4585, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=4585)
        0.050429977 = weight(_text_:kommunikation in 4585) [ClassicSimilarity], result of:
          0.050429977 = score(doc=4585,freq=2.0), product of:
            0.14799947 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.02879306 = queryNorm
            0.34074432 = fieldWeight in 4585, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.046875 = fieldNorm(doc=4585)
        0.022966936 = weight(_text_:und in 4585) [ClassicSimilarity], result of:
          0.022966936 = score(doc=4585,freq=12.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.35989314 = fieldWeight in 4585, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4585)
        0.020701483 = weight(_text_:des in 4585) [ClassicSimilarity], result of:
          0.020701483 = score(doc=4585,freq=4.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.25962257 = fieldWeight in 4585, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.046875 = fieldNorm(doc=4585)
        0.009344013 = weight(_text_:in in 4585) [ClassicSimilarity], result of:
          0.009344013 = score(doc=4585,freq=14.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.23857531 = fieldWeight in 4585, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4585)
      0.3 = coord(6/20)
    
    Abstract
    Datenvisualisierungen sind ein wichtiges Werkzeug, um Inhalte und Muster in Datensätzen zu erkennen und ermöglichen so auch Laien den Zugang zu der Information, die in Datensätzen steckt. Data Visualization Literacy ist die Kompetenz, Datenvisualisierungen zu lesen, zu verstehen, zu hinterfragen und herzustellen. Data Visulaization Literacy ist daher eine wichtige Kompetenz der Informationsgesellschaft. Im Auftrag des Bundesrates hat das Bundesamt für Kommunikation BAKOM die Strategie "Digitale Schweiz" entwickelt. Die Strategie zeigt auf, wie die fortschreitende Digitalisierung genutzt und die Schweiz zu einer Informationsgesellschaft entwickelt werden soll. In der vorliegenden Arbeit wird untersucht, inwiefern die Strategie "Digitale Schweiz" die Förderung von Data Visualization Literacy in der Bevölkerung unterstützt. Dazu werden die Kompetenzen der Data Visualization Literacy ermittelt, Kompetenzstellen innerhalb des Bildungssystems benannt und die Massnahmen der Strategie in Bezug auf Data Visualization Literacy überprüft.
    Content
    Diese Publikation entstand im Rahmen einer Thesis zum Master of Science FHO in Business Administration, Major Information and Data Management.
    Imprint
    Chur : Hochschule für Technik und Wirtschaft / Arbeitsbereich Informationswissenschaft
  20. Renker, L.: Exploration von Textkorpora : Topic Models als Grundlage der Interaktion (2015) 0.05
    0.045804754 = product of:
      0.15268251 = sum of:
        0.020672604 = weight(_text_:und in 2380) [ClassicSimilarity], result of:
          0.020672604 = score(doc=2380,freq=14.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.32394084 = fieldWeight in 2380, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2380)
        0.017746942 = weight(_text_:der in 2380) [ClassicSimilarity], result of:
          0.017746942 = score(doc=2380,freq=10.0), product of:
            0.06431698 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02879306 = queryNorm
            0.27592933 = fieldWeight in 2380, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2380)
        0.059432298 = weight(_text_:kommunikation in 2380) [ClassicSimilarity], result of:
          0.059432298 = score(doc=2380,freq=4.0), product of:
            0.14799947 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.02879306 = queryNorm
            0.40157104 = fieldWeight in 2380, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2380)
        0.020672604 = weight(_text_:und in 2380) [ClassicSimilarity], result of:
          0.020672604 = score(doc=2380,freq=14.0), product of:
            0.06381599 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02879306 = queryNorm
            0.32394084 = fieldWeight in 2380, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2380)
        0.024396934 = weight(_text_:des in 2380) [ClassicSimilarity], result of:
          0.024396934 = score(doc=2380,freq=8.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.30596817 = fieldWeight in 2380, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2380)
        0.009761117 = weight(_text_:in in 2380) [ClassicSimilarity], result of:
          0.009761117 = score(doc=2380,freq=22.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.24922498 = fieldWeight in 2380, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2380)
      0.3 = coord(6/20)
    
    Abstract
    Das Internet birgt schier endlose Informationen. Ein zentrales Problem besteht heutzutage darin diese auch zugänglich zu machen. Es ist ein fundamentales Domänenwissen erforderlich, um in einer Volltextsuche die korrekten Suchanfragen zu formulieren. Das ist jedoch oftmals nicht vorhanden, so dass viel Zeit aufgewandt werden muss, um einen Überblick des behandelten Themas zu erhalten. In solchen Situationen findet sich ein Nutzer in einem explorativen Suchvorgang, in dem er sich schrittweise an ein Thema heranarbeiten muss. Für die Organisation von Daten werden mittlerweile ganz selbstverständlich Verfahren des Machine Learnings verwendet. In den meisten Fällen bleiben sie allerdings für den Anwender unsichtbar. Die interaktive Verwendung in explorativen Suchprozessen könnte die menschliche Urteilskraft enger mit der maschinellen Verarbeitung großer Datenmengen verbinden. Topic Models sind ebensolche Verfahren. Sie finden in einem Textkorpus verborgene Themen, die sich relativ gut von Menschen interpretieren lassen und sind daher vielversprechend für die Anwendung in explorativen Suchprozessen. Nutzer können damit beim Verstehen unbekannter Quellen unterstützt werden. Bei der Betrachtung entsprechender Forschungsarbeiten fiel auf, dass Topic Models vorwiegend zur Erzeugung statischer Visualisierungen verwendet werden. Das Sensemaking ist ein wesentlicher Bestandteil der explorativen Suche und wird dennoch nur in sehr geringem Umfang genutzt, um algorithmische Neuerungen zu begründen und in einen umfassenden Kontext zu setzen. Daraus leitet sich die Vermutung ab, dass die Verwendung von Modellen des Sensemakings und die nutzerzentrierte Konzeption von explorativen Suchen, neue Funktionen für die Interaktion mit Topic Models hervorbringen und einen Kontext für entsprechende Forschungsarbeiten bieten können.
    Footnote
    Masterthesis zur Erlangung des akademischen Grades Master of Science (M.Sc.) vorgelegt an der Fachhochschule Köln / Fakultät für Informatik und Ingenieurswissenschaften im Studiengang Medieninformatik.
    Imprint
    Gummersbach : Fakultät für Informatik und Ingenieurswissenschaften
    RSWK
    Mensch-Maschine-Kommunikation
    Subject
    Mensch-Maschine-Kommunikation
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval

Languages

  • d 69
  • e 21
  • a 1
  • f 1
  • hu 1
  • pt 1
  • More… Less…

Types