Search (40173 results, page 2 of 2009)

  1. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.11
    0.11146114 = coord(8/30) × 0.41797924, the sum of eight ClassicSimilarity clause scores for doc 5895: weight(_text_:3a) = 0.07116579 (×1/3 coord), five identical weight(_text_:2f) clauses = 0.07116579 each, weight(_text_:medien) = 0.03798955 (freq=6.0), and weight(_text_:a) = 0.0013163678 (×1/3 coord). Each clause is computed as in the worked example below.
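    Each clause above is a standard Lucene ClassicSimilarity tf-idf weight. As a worked example, using the figures of the _text_:medien clause and assuming Lucene's classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)), which reproduce the numbers shown:

      tf = \sqrt{6.0} \approx 2.4494898
      idf = 1 + \ln\frac{44218}{1085 + 1} \approx 4.7066307
      queryWeight = idf \cdot queryNorm = 4.7066307 \times 0.017922899 \approx 0.084356464
      fieldWeight = tf \cdot idf \cdot fieldNorm = 2.4494898 \times 4.7066307 \times 0.0390625 \approx 0.45034546
      score = queryWeight \cdot fieldWeight \approx 0.03798955

    The document total is then coord × clause sum: 0.26666668 × 0.41797924 ≈ 0.11146114. The score summaries of the following entries are computed in the same way.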
    
    Abstract
    The problem of the perception and presentation of truth by the media leads to four central questions: How much truth is there in the world that journalists have to report on? How does one ascertain or investigate this truth? How does one separate the wheat from the chaff? And how does one, as a journalist, deal with what one has recognized, or believes to have recognized, as truth? Here there is quite obviously a parallel between journalists and scientists. Both need, first, hypotheses; second, suitable tests of those hypotheses; third, a good criterion of demarcation; and fourth, procedures for representing the facts they have established in a way suitable for communicating them to others, that is, for presenting them. There are two major differences between journalists and scientists: journalists generally aim at statements that are limited in space and time, scientists generally at laws that are not. But these differences are fluid, because scientists need spatio-temporally limited statements in order to test their universal statements, and journalists venture ever more often into the field of general law-like statements, or at least offer causal interpretations of social phenomena. The second difference is that science is largely professionalized (at least this holds without qualification for the natural sciences and medicine), which has given it relatively clear criteria of demarcation and quality. These are largely absent in journalism.
    Content
    The contribution is based on a lecture given at the 9th Ethiktag "Wissenschaft und Medien" (Science and the Media) at the Zentrum für Ethik und Recht in der Medizin of the Universitätsklinikum Freiburg in February 2001.
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
    Type
    a
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    0.1090321 = coord(7/30) × 0.46728045: weight(_text_:3a) = 0.08539894 (×1/3 coord), five identical weight(_text_:2f) clauses = 0.08539894 each, plus weight(_text_:a) = 0.003159283 and weight(_text_:22) = 0.014569832 (×2/3 coord); same ClassicSimilarity scheme as in entry 1.
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for the actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvement of the results.
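    To make the setup concrete, here is a minimal sketch of term-plus-concept boosting. This is not the authors' implementation: the toy concept_map stands in for background knowledge such as WordNet, and scikit-learn's AdaBoost over decision stumps stands in for the boosting learner.

      # Sketch: boosting weak learners over combined term + concept features.
      from scipy.sparse import hstack
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.feature_extraction.text import CountVectorizer

      docs = ["the bank approved the loan",
              "the river bank was flooded",
              "interest rates rose at the bank",
              "fish swim near the river bank"]
      labels = [1, 0, 1, 0]  # 1 = finance, 0 = nature

      # Toy background knowledge: word -> concept (hypothetical labels).
      concept_map = {"loan": "concept_credit", "interest": "concept_credit",
                     "river": "concept_waterbody", "fish": "concept_animal"}

      def concept_tokens(text):
          # One pseudo-token per concept matched in the document.
          return " ".join(c for w, c in concept_map.items() if w in text.split())

      term_features = CountVectorizer().fit_transform(docs)
      concept_features = CountVectorizer(
          vocabulary=sorted(set(concept_map.values()))
      ).fit_transform(concept_tokens(d) for d in docs)

      # Terms and concepts side by side, as in the enhanced representation.
      X = hstack([term_features, concept_features])
      clf = AdaBoostClassifier(n_estimators=50)  # decision stumps by default
      clf.fit(X, labels)
      print(clf.predict(X))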
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
    Type
    a
  3. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.11
    0.10671722 = coord(7/30) × 0.45735952: weight(_text_:3a) = 0.08539894 (×1/3 coord), five identical weight(_text_:2f) clauses = 0.08539894 each, and weight(_text_:a) = 0.0056954776 (×1/3 coord).
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values being a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with a specific facet, "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
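    The conflict-resolution step can be illustrated with a short sketch. This is an assumption-laden miniature, not the paper's algorithm: candidate links are assumed to come pre-scored, and a link is simply dropped whenever it would close a cycle.

      # Sketch: grow a hierarchy from candidate parent -> child links,
      # rejecting any link that would create a cycle. Candidate extraction
      # (synonyms, siblings, ancestor-descendant pairs) is assumed done.
      def reachable(graph, start, goal):
          # Depth-first search: is `goal` reachable from `start`?
          stack, seen = [start], set()
          while stack:
              node = stack.pop()
              if node == goal:
                  return True
              if node not in seen:
                  seen.add(node)
                  stack.extend(graph.get(node, ()))
          return False

      def grow_hierarchy(candidates):
          # candidates: iterable of (score, parent, child), best first.
          graph = {}
          for _, parent, child in sorted(candidates, reverse=True):
              if reachable(graph, child, parent):
                  continue  # adding parent -> child would close a cycle
              graph.setdefault(parent, []).append(child)
          return graph

      links = [(0.9, "classification", "svm"), (0.8, "classification", "knn"),
               (0.7, "svm", "classification"),  # conflicting link, dropped
               (0.6, "model", "svm")]
      print(grow_hierarchy(links))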
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
    Type
    a
  4. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.11
    0.106548965 = coord(7/30) × 0.45663843: weight(_text_:3a) = 0.08539894 (×1/3 coord), five identical weight(_text_:2f) clauses = 0.08539894 each, and weight(_text_:a) = 0.003532186 (×1/3 coord).
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
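    The paper's exact metric set is not reproduced in this abstract; as a flavour of what a simple grammatical/readability score looks like, here is the classic Flesch reading-ease formula as a stand-in (the syllable counter is a crude heuristic):

      # Sketch: Flesch reading ease, a stand-in for the kind of readability
      # metric the paper computes; the syllable count is a rough heuristic.
      import re

      def count_syllables(word):
          # Heuristic: count groups of consecutive vowels.
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def flesch_reading_ease(text):
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          words = re.findall(r"[A-Za-z']+", text)
          syllables = sum(count_syllables(w) for w in words)
          n = max(1, len(words))
          return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

      print(round(flesch_reading_ease("I propose to consider the question, "
                                      "'Can machines think?'"), 1))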
    Source
    https://arxiv.org/abs/2212.06721
    Type
    a
  5. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.10
    0.10271613 = coord(8/30) × 0.38518548: weight(_text_:3a) = 0.07116579 (×1/3 coord), five identical weight(_text_:2f) clauses = 0.07116579 each, weight(_text_:online) = 0.009119529 (×1/2 coord), and weight(_text_:a) = 0.0032244297 (×1/3 coord).
    
    Abstract
    Converting UDC numbers manually to a complex format such as the one mentioned above is an unrealistic expectation; supporting the building of these representations, as far as possible automatically, is a well-founded requirement. An additional advantage of this approach is that existing records could also be processed and converted. In my dissertation I also aim to prove that it is possible to design and implement an algorithm that can convert pre-coordinated UDC numbers into the introduced format, identifying all their elements and revealing their complete syntactic structure. I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support building, retrieving, and analyzing UDC numbers without compromise, capturing the scheme's full syntactic richness and storing UDC numbers in a way that preserves the meaning of their pre-coordination. The research has also included the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without additional effort, and even retrospectively to existing collections.
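    As a toy illustration of such parsing (a sketch only: it handles just the '+', '/' and ':' connecting signs, whereas the dissertation's schema covers the full UDC syntax including special auxiliaries):

      # Sketch: parse a pre-coordinated UDC classmark into an XML tree.
      # Only '+', '/' and ':' connectors are handled; real UDC syntax
      # (special auxiliaries etc.) is far richer.
      import xml.etree.ElementTree as ET

      CONNECTORS = {"+": "addition", "/": "span", ":": "relation"}

      def parse_udc(classmark):
          root = ET.Element("udc", number=classmark)
          for symbol, name in CONNECTORS.items():
              if symbol in classmark:
                  node = ET.SubElement(root, name)
                  for part in classmark.split(symbol):
                      ET.SubElement(node, "number").text = part
                  return root
          ET.SubElement(root, "number").text = classmark
          return root

      print(ET.tostring(parse_udc("94(410):929"), encoding="unicode"))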
    Content
    See also: New automatic interpreter for complex UDC numbers. At: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf
  6. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.10
    0.09862116 = coord(7/30) × 0.4226621: weight(_text_:3a) = 0.056932632 (×1/3 coord), five identical weight(_text_:2f) clauses = 0.0805149 each (freq=4.0), and weight(_text_:a) = 0.0033301765 (×1/3 coord).
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities: we propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitation of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
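    In miniature, ranking in the entity space replaces word overlap with overlap of entity annotations. The sketch below assumes entity linking has already produced the annotations; it is not the thesis's ranking model.

      # Sketch: rank documents in entity space. Entity annotations are
      # assumed given (in the thesis they come from automatic entity
      # linking against a knowledge base).
      from collections import Counter

      def rank_by_entities(query_entities, docs):
          # docs: mapping doc_id -> list of entity ids annotated in the doc.
          q = Counter(query_entities)
          scores = {}
          for doc_id, entities in docs.items():
              d = Counter(entities)
              # Coordinate match: sum of min-frequencies of shared entities.
              scores[doc_id] = sum(min(q[e], d[e]) for e in q)
          return sorted(scores.items(), key=lambda kv: -kv[1])

      docs = {"d1": ["Barack_Obama", "White_House", "Barack_Obama"],
              "d2": ["Obama_(Fukui)", "Japan"]}
      print(rank_by_entities(["Barack_Obama"], docs))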
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf
  7. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.10
    0.09611896 = coord(8/30) × 0.36044607: weight(_text_:3a) = 0.056932632 (×1/3 coord), weight(_text_:wirtschaftswissenschaften) = 0.055316057 (freq=6.0), five identical weight(_text_:2f) clauses = 0.056932632 each, and weight(_text_:a) = 0.0044679004 (×1/3 coord).
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation), which makes the results of a retrieval process of very low usefulness for a user's task at hand. In the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query. This makes it necessary to include the user more actively in the retrieval process, in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from query evaluation into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner, and to interpret the retrieval results accordingly, is a key issue for realizing much more meaningful information retrieval systems.
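    The refinement idea can be caricatured in a few lines. This is only a sketch: the toy taxonomy stands in for the domain ontology, and the ranking of candidate refinements by user preferences, central to the Librarian Agent, is omitted.

      # Sketch: ontology-driven query refinement over a toy taxonomy.
      ontology = {"vehicle": ["car", "bicycle"],
                  "car": ["electric car", "sports car"]}

      def refine(concept):
          # Offer the concept's subconcepts as candidate refinements.
          return ontology.get(concept, [])

      # The system anticipates the information need behind a broad query
      # by proposing more specific query concepts.
      for query in ("vehicle", "car"):
          print(query, "->", refine(query))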
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
    Footnote
    Dissertation accepted for the award of the academic degree of Doktor der Wirtschaftswissenschaften (Dr. rer. pol.) by the Fakultaet fuer Wirtschaftswissenschaften of the Universitaet Fridericiana zu Karlsruhe.
    Imprint
    Karlsruhe : Fakultaet fuer Wirtschaftswissenschaften der Universitaet Fridericiana zu Karlsruhe
  8. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.09
    0.08962582 = coord(7/30) × 0.38411066: weight(_text_:3a) = 0.07116579 (×1/3 coord), five identical weight(_text_:2f) clauses = 0.07116579 each, and weight(_text_:online) = 0.009119529 (×1/2 coord).
    
    Abstract
    The thesis presents the construction of a thematically ordered thesaurus based on the subject headings of the Gemeinsame Normdatei (GND), using the DDC notations they contain. The DDC subject groups of the Deutsche Nationalbibliothek form the top level of order of this thesaurus. The thesaurus is constructed in a rule-based fashion, applying Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scholarly publications by means of a computational-linguistic extractor, which processes digital full texts: it identifies keywords by comparing character strings against the labels in the thesaurus, ranks the hits by their relevance in the text, and returns the assigned subject groups in ranked order. The basic assumption is that the sought subject group is returned among the top ranks. The performance of the method is validated in a three-stage procedure. First, based on metadata and the findings of a brief inspection, a gold standard is compiled from documents retrievable in the online catalogue of the DNB. The documents are distributed over 14 of the subject groups, with a lot size of 50 documents each. All documents are indexed with the extractor and the categorization results are documented. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorization and for a ranked return of the subject groups.
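    The core matching step of the described extractor can be sketched as follows. The thesaurus labels and DDC subject groups below are toy stand-ins; the real system matches GND-derived terms against digital full texts and validates the ranking against the DNB gold standard.

      # Sketch: thesaurus-based DDC subject-group assignment by string
      # matching. The label -> group pairs are toy stand-ins for the
      # GND-derived thesaurus.
      from collections import Counter

      thesaurus = {"thesaurus": "020", "bibliothek": "020",
                   "sparql": "004", "linked data": "004"}

      def assign_groups(fulltext):
          text = fulltext.lower()
          scores = Counter()
          for label, group in thesaurus.items():
              hits = text.count(label)  # plain string comparison
              if hits:
                  scores[group] += hits
          return scores.most_common()  # subject groups, ranked

      print(assign_groups("Linked Data und SPARQL im Bibliothekskatalog"))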
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. For the accompanying presentation see: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2
  9. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.09
    0.08891655 = coord(7/30) × 0.3810709: weight(_text_:3a) = 0.07116579 (×1/3 coord), five identical weight(_text_:2f) clauses = 0.07116579 each, and weight(_text_:a) = 0.0045600324 (×1/3 coord).
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure, the ontology, which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitations of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies: the faceted lightweight ontology (FLO). An FLO is a lightweight ontology in which the terms present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of a group of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
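    As a data-structure sketch of the faceted background knowledge (toy facets and concepts; the thesis draws the BK from sources such as WordNet):

      # Sketch: background knowledge organized as facets, and a coverage
      # check for node labels of a faceted lightweight ontology.
      facets = {"discipline": {"medicine", "biology"},
                "location": {"italy", "trentino"}}

      def facet_of(term):
          # The facet whose concept group contains the term, if any.
          return next((f for f, group in facets.items() if term in group), None)

      def covered(node_label):
          # A label is covered if every term maps to a concept in the BK.
          return all(facet_of(t) is not None
                     for t in node_label.lower().split())

      print(facet_of("trentino"))       # -> location
      print(covered("medicine italy"))  # -> True
      print(covered("quantum"))         # -> False (term missing from BK)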
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. See: https://core.ac.uk/download/pdf/150083013.pdf.
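    The score breakdowns attached to each hit follow Lucene's classic TF-IDF similarity. As a check on the figures, here is a minimal Python sketch that reproduces the numbers reported for doc 4997 above; the function names are ours, only the constants are taken from the breakdown:

      import math

      # Lucene ClassicSimilarity, as reflected in the explain output above:
      # idf = 1 + ln(maxDocs / (docFreq + 1)), tf = sqrt(freq)
      def idf(doc_freq, max_docs):
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def tf(freq):
          return math.sqrt(freq)

      query_norm = 0.017922899        # queryNorm from the breakdown
      max_docs, doc_freq = 44218, 24  # collection size, docFreq of "2f"
      field_norm = 0.0390625          # fieldNorm(doc=4997)

      term_idf = idf(doc_freq, max_docs)              # -> 8.478011
      query_weight = term_idf * query_norm            # -> 0.15195054
      field_weight = tf(2.0) * term_idf * field_norm  # -> 0.46834838
      clause_score = query_weight * field_weight      # -> 0.07116579

      # Clause scores are then summed and scaled by the coordination factor,
      # e.g. coord(7/30) = 0.23333333 for this hit.
      print(term_idf, query_weight, field_weight, clause_score)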
  10. Malsburg, C. von der: ¬The correlation theory of brain function (1981) 0.09
    0.08886903 = product of:
      0.38086727 = sum of:
        0.023721932 = product of:
          0.07116579 = sum of:
            0.07116579 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.07116579 = score(doc=76,freq=2.0), product of:
                0.15195054 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.017922899 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.33333334 = coord(1/3)
        0.07116579 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.07116579 = score(doc=76,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.07116579 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.07116579 = score(doc=76,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.07116579 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.07116579 = score(doc=76,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.07116579 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.07116579 = score(doc=76,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.07116579 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.07116579 = score(doc=76,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.0013163679 = product of:
          0.0039491034 = sum of:
            0.0039491034 = weight(_text_:a in 76) [ClassicSimilarity], result of:
              0.0039491034 = score(doc=76,freq=18.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.19109234 = fieldWeight in 76, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.33333334 = coord(1/3)
      0.23333333 = coord(7/30)
    
    Abstract
    A summary of brain theory is given so far as it is contained within the framework of Localization Theory. Difficulties of this "conventional theory" are traced back to a specific deficiency: there is no way to express relations between active cells (for instance, that they represent parts of the same object). A new theory is proposed to cure this deficiency. It introduces a new kind of dynamical control, termed synaptic modulation, according to which synapses switch between a conducting and a non-conducting state. The dynamics of this variable are controlled on a fast time scale by correlations in the temporal fine structure of cellular signals. Furthermore, conventional synaptic plasticity is replaced by a refined version. Synaptic modulation and plasticity form the basis for short-term and long-term memory, respectively. Signal correlations, shaped by the variable network, express structure and relationships within objects. In particular, the figure-ground problem may be solved in this way. Synaptic modulation introduces flexibility into cerebral networks, which is necessary to solve the invariance problem. Since momentarily useless connections are deactivated, interference between different memory traces can be reduced, and memory capacity increased, in comparison with conventional associative memory
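    The switching rule can be caricatured numerically. The following toy sketch is our simplification, not the paper's formalism: a synapse's conducting state follows the short-term correlation of pre- and postsynaptic signals, while slow plasticity would adapt a separate weight.

      import random

      def correlation(xs, ys):
          # Pearson correlation of two equal-length signals
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
          vx = sum((x - mx) ** 2 for x in xs) / n
          vy = sum((y - my) ** 2 for y in ys) / n
          return cov / ((vx * vy) ** 0.5 + 1e-9)

      random.seed(0)
      pre = [random.random() for _ in range(50)]             # presynaptic signal
      post = [0.8 * x + 0.2 * random.random() for x in pre]  # correlated response

      # fast switch: the synapse conducts only while the signals are correlated
      conducting = correlation(pre, post) > 0.5
      print("synapse conducting:", conducting)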
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
    Type
    a
  11. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    0.08770639 = product of:
      0.43853194 = sum of:
        0.08539894 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.08539894 = score(doc=563,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.08539894 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.08539894 = score(doc=563,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.08539894 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.08539894 = score(doc=563,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.08539894 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.08539894 = score(doc=563,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.08539894 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.08539894 = score(doc=563,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.011537234 = product of:
          0.017305851 = sum of:
            0.0027360192 = weight(_text_:a in 563) [ClassicSimilarity], result of:
              0.0027360192 = score(doc=563,freq=6.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.13239266 = fieldWeight in 563, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
            0.014569832 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.014569832 = score(doc=563,freq=2.0), product of:
                0.06276294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.017922899 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.6666667 = coord(2/3)
      0.2 = coord(6/30)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to tasks such as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the alignment process from a training set and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
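    The LocalMaxs selection step can be sketched as follows (one common formulation; the thesis pairs it with its own association measures, which we abstract here as a precomputed "glue" score per n-gram): an n-gram is kept when its glue is not smaller than that of the (n-1)-grams it contains and strictly greater than that of the (n+1)-grams containing it.

      # glue: dict mapping an n-gram (tuple of words) to its association score
      def local_maxs(glue):
          def subgrams(w):
              return [w[:-1], w[1:]] if len(w) > 2 else []
          def supergrams(w):
              return [v for v in glue
                      if len(v) == len(w) + 1 and (v[:-1] == w or v[1:] == w)]
          selected = []
          for w, g in glue.items():
              if len(w) < 2:
                  continue
              ok_below = all(g >= glue.get(s, 0.0) for s in subgrams(w))
              ok_above = all(g > glue.get(s, 0.0) for s in supergrams(w))
              if ok_below and ok_above:
                  selected.append(w)
          return selected

      scores = {("information", "retrieval"): 0.9,
                ("information", "retrieval", "system"): 0.4}
      print(local_maxs(scores))  # -> [('information', 'retrieval')]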
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. See: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  12. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.08
    0.07591018 = product of:
      0.3795509 = sum of:
        0.023721932 = product of:
          0.07116579 = sum of:
            0.07116579 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.07116579 = score(doc=4388,freq=2.0), product of:
                0.15195054 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.017922899 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.07116579 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.07116579 = score(doc=4388,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.07116579 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.07116579 = score(doc=4388,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.07116579 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.07116579 = score(doc=4388,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.07116579 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.07116579 = score(doc=4388,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.07116579 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.07116579 = score(doc=4388,freq=2.0), product of:
            0.15195054 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.017922899 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.2 = coord(6/30)
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  13. Neunzert, H.: Mathematische Modellierung : ein "curriculum vitae" (2012) 0.06
    0.056564875 = product of:
      0.56564873 = sum of:
        0.15976287 = product of:
          0.31952575 = sum of:
            0.31952575 = weight(_text_:c3 in 2255) [ClassicSimilarity], result of:
              0.31952575 = score(doc=2255,freq=4.0), product of:
                0.17476578 = queryWeight, product of:
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.017922899 = queryNorm
                1.8283083 = fieldWeight in 2255, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2255)
          0.5 = coord(1/2)
        0.40483278 = product of:
          0.60724914 = sum of:
            0.31952575 = weight(_text_:c3 in 2255) [ClassicSimilarity], result of:
              0.31952575 = score(doc=2255,freq=4.0), product of:
                0.17476578 = queryWeight, product of:
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.017922899 = queryNorm
                1.8283083 = fieldWeight in 2255, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2255)
            0.28772342 = weight(_text_:a4t in 2255) [ClassicSimilarity], result of:
              0.28772342 = score(doc=2255,freq=2.0), product of:
                0.19721892 = queryWeight, product of:
                  11.00374 = idf(docFreq=1, maxDocs=44218)
                  0.017922899 = queryNorm
                1.4589037 = fieldWeight in 2255, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  11.00374 = idf(docFreq=1, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2255)
          0.6666667 = coord(2/3)
        0.0010530944 = product of:
          0.003159283 = sum of:
            0.003159283 = weight(_text_:a in 2255) [ClassicSimilarity], result of:
              0.003159283 = score(doc=2255,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.15287387 = fieldWeight in 2255, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2255)
          0.33333334 = coord(1/3)
      0.1 = coord(3/30)
    
    Content
    Lecture at the conference "Geschichte und Modellierung", Jena, 3 February 2012. See: http://www.fmi.uni-jena.de/Fakult%C3%A4t/Institute+und+Abteilungen/Abteilung+f%C3%BCr+Didaktik/Kolloquien.html?highlight=neunzert.
    Type
    a
  14. Gödert, W.; Hubrich, J.; Boteram, F.: Thematische Recherche und Interoperabilität : Wege zur Optimierung des Zugriffs auf heterogen erschlossene Dokumente (2009) 0.05
    0.053139266 = product of:
      0.26569632 = sum of:
        0.03429938 = weight(_text_:einzelne in 193) [ClassicSimilarity], result of:
          0.03429938 = score(doc=193,freq=2.0), product of:
            0.10548963 = queryWeight, product of:
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.017922899 = queryNorm
            0.3251446 = fieldWeight in 193, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.0390625 = fieldNorm(doc=193)
        0.055606253 = product of:
          0.11121251 = sum of:
            0.11121251 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
              0.11121251 = score(doc=193,freq=2.0), product of:
                0.1899518 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.017922899 = queryNorm
                0.5854775 = fieldWeight in 193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=193)
          0.5 = coord(1/2)
        0.055606253 = product of:
          0.11121251 = sum of:
            0.11121251 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
              0.11121251 = score(doc=193,freq=2.0), product of:
                0.1899518 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.017922899 = queryNorm
                0.5854775 = fieldWeight in 193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=193)
          0.5 = coord(1/2)
        0.055606253 = product of:
          0.11121251 = sum of:
            0.11121251 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
              0.11121251 = score(doc=193,freq=2.0), product of:
                0.1899518 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.017922899 = queryNorm
                0.5854775 = fieldWeight in 193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=193)
          0.5 = coord(1/2)
        0.055606253 = product of:
          0.11121251 = sum of:
            0.11121251 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
              0.11121251 = score(doc=193,freq=2.0), product of:
                0.1899518 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.017922899 = queryNorm
                0.5854775 = fieldWeight in 193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=193)
          0.5 = coord(1/2)
        0.0089719305 = product of:
          0.013457895 = sum of:
            0.0013163678 = weight(_text_:a in 193) [ClassicSimilarity], result of:
              0.0013163678 = score(doc=193,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.06369744 = fieldWeight in 193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=193)
            0.012141528 = weight(_text_:22 in 193) [ClassicSimilarity], result of:
              0.012141528 = score(doc=193,freq=2.0), product of:
                0.06276294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.017922899 = queryNorm
                0.19345059 = fieldWeight in 193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=193)
          0.6666667 = coord(2/3)
      0.2 = coord(6/30)
    
    Abstract
    The use of knowledge organization instruments to describe information resources can substantially increase the efficiency and effectiveness of subject searches: standardized terms support the recall and precision of term-based searches, and the explicit representation of relations provides the basis for exploratory search processes. Retrieval functionality can be increased further by differentiating and specifying the semantic information contained in authority data beyond the basic relation types (equivalence, hierarchy, association) common in thesauri and classification schemes. In modern information spaces, however, where data from different institutions are made accessible via a single platform independently of time and place, individual knowledge systems can support information retrieval only insufficiently: the indexing data relevant to subject queries are too heterogeneous. An improvement can be achieved by establishing interoperability between the various documentation languages. The talk sets out how the semantic information contained in knowledge systems can be optimized to support subject searches, and to what extent interoperability can be established between systems so as to guarantee equivalent functionality in heterogeneous information spaces. In this context, current mapping projects are also discussed, such as the DFG-funded project CrissCross and the RESEDA project, which deals with the possibilities of semantically enriching existing documentation languages.
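    A minimal sketch of what such interoperability looks like operationally: a query term from one documentation language is expanded into another via typed mappings, in the spirit of CrissCross (which maps SWD subject headings to DDC notations). The entries and notations below are invented for illustration:

      # typed crosswalk between two documentation languages
      MAPPINGS = {
          # (source vocabulary, term) -> [(target vocabulary, term, mapping type)]
          ("SWD", "Suchmaschine"): [
              ("DDC", "025.04252", "exactMatch"),   # notation invented
              ("DDC", "025.04", "broadMatch"),      # notation invented
          ],
      }

      def expand(vocab, term, allowed=("exactMatch", "closeMatch")):
          # return equivalent target-vocabulary entries for a query term
          return [(tv, tt) for tv, tt, rel in MAPPINGS.get((vocab, term), ())
                  if rel in allowed]

      print(expand("SWD", "Suchmaschine"))  # -> [('DDC', '025.04252')]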
    Source
    https://opus4.kobv.de/opus4-bib-info/frontdoor/index/index/searchtype/authorsearch/author/%22Hubrich%2C+Jessica%22/docId/703/start/0/rows/20
    Type
    a
  15. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.05
    0.04938139 = product of:
      0.24690695 = sum of:
        0.023721932 = product of:
          0.07116579 = sum of:
            0.07116579 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
              0.07116579 = score(doc=692,freq=2.0), product of:
                0.15195054 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.017922899 = queryNorm
                0.46834838 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.33333334 = coord(1/3)
        0.055606253 = product of:
          0.11121251 = sum of:
            0.11121251 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
              0.11121251 = score(doc=692,freq=2.0), product of:
                0.1899518 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.017922899 = queryNorm
                0.5854775 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(1/2)
        0.055606253 = product of:
          0.11121251 = sum of:
            0.11121251 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
              0.11121251 = score(doc=692,freq=2.0), product of:
                0.1899518 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.017922899 = queryNorm
                0.5854775 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(1/2)
        0.055606253 = product of:
          0.11121251 = sum of:
            0.11121251 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
              0.11121251 = score(doc=692,freq=2.0), product of:
                0.1899518 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.017922899 = queryNorm
                0.5854775 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(1/2)
        0.055606253 = product of:
          0.11121251 = sum of:
            0.11121251 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
              0.11121251 = score(doc=692,freq=2.0), product of:
                0.1899518 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.017922899 = queryNorm
                0.5854775 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(1/2)
        7.6000544E-4 = product of:
          0.0022800162 = sum of:
            0.0022800162 = weight(_text_:a in 692) [ClassicSimilarity], result of:
              0.0022800162 = score(doc=692,freq=6.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.11032722 = fieldWeight in 692, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.33333334 = coord(1/3)
      0.2 = coord(6/30)
    
    Abstract
    What is the difference between Piaget's constructivism and Papert's "constructionism"? Beyond the mere play on words, I think the distinction holds, and that integrating both views can enrich our understanding of how people learn and grow. Piaget's constructivism offers a window into what children are interested in, and able to achieve, at different stages of their development. The theory describes how children's ways of doing and thinking evolve over time, and under which circumstances children are more likely to let go of - or hold onto - their currently held views. Piaget suggests that children have very good reasons not to abandon their worldviews just because someone else, be it an expert, tells them they're wrong. Papert's constructionism, in contrast, focuses more on the art of learning, or 'learning to learn', and on the significance of making things in learning. Papert is interested in how learners engage in a conversation with [their own or other people's] artifacts, and how these conversations boost self-directed learning, and ultimately facilitate the construction of new knowledge. He stresses the importance of tools, media, and context in human development. Integrating both perspectives illuminates the processes by which individuals come to make sense of their experience, gradually optimizing their interactions with the world.
    Content
    See: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Below it, further pointers to related contributions. Also in: Learning Group Publication 5(2001) no.3, p.438.
    Type
    a
  16. Lintörfer, U.: Beim Anklicken der Werbefläche öffnet sich eine neue Seite : Werbung und Kleinanzeigen im Internet (1996) 0.04
    0.03800072 = product of:
      0.22800432 = sum of:
        0.06679867 = weight(_text_:post in 3972) [ClassicSimilarity], result of:
          0.06679867 = score(doc=3972,freq=2.0), product of:
            0.10409636 = queryWeight, product of:
              5.808009 = idf(docFreq=360, maxDocs=44218)
              0.017922899 = queryNorm
            0.6417004 = fieldWeight in 3972, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.808009 = idf(docFreq=360, maxDocs=44218)
              0.078125 = fieldNorm(doc=3972)
        0.03287024 = weight(_text_:neue in 3972) [ClassicSimilarity], result of:
          0.03287024 = score(doc=3972,freq=2.0), product of:
            0.07302189 = queryWeight, product of:
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.017922899 = queryNorm
            0.4501423 = fieldWeight in 3972, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.078125 = fieldNorm(doc=3972)
        0.10622595 = sum of:
          0.036478117 = weight(_text_:online in 3972) [ClassicSimilarity], result of:
            0.036478117 = score(doc=3972,freq=8.0), product of:
              0.05439423 = queryWeight, product of:
                3.0349014 = idf(docFreq=5778, maxDocs=44218)
                0.017922899 = queryNorm
              0.67062473 = fieldWeight in 3972, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.0349014 = idf(docFreq=5778, maxDocs=44218)
                0.078125 = fieldNorm(doc=3972)
          0.069747835 = weight(_text_:dienste in 3972) [ClassicSimilarity], result of:
            0.069747835 = score(doc=3972,freq=2.0), product of:
              0.106369466 = queryWeight, product of:
                5.934836 = idf(docFreq=317, maxDocs=44218)
                0.017922899 = queryNorm
              0.65571296 = fieldWeight in 3972, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.934836 = idf(docFreq=317, maxDocs=44218)
                0.078125 = fieldNorm(doc=3972)
        0.021231882 = weight(_text_:u in 3972) [ClassicSimilarity], result of:
          0.021231882 = score(doc=3972,freq=2.0), product of:
            0.058687534 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.017922899 = queryNorm
            0.3617784 = fieldWeight in 3972, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=3972)
        8.775785E-4 = product of:
          0.0026327355 = sum of:
            0.0026327355 = weight(_text_:a in 3972) [ClassicSimilarity], result of:
              0.0026327355 = score(doc=3972,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.12739488 = fieldWeight in 3972, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3972)
          0.33333334 = coord(1/3)
      0.16666667 = coord(5/30)
    
    Abstract
    How newspapers and magazines are searching for approaches to online services (examples: Rheinische Post; tv today online; Stern online; Focus; Spiegel; Hamburger Morgenpost online)
    Type
    a
  17. Siegenheim, V.; Kaumanns, R.: ¬Die Google-Ökonomie : Wie Google die Wirtschaft verändert (2007) 0.04
    0.03628476 = product of:
      0.21770856 = sum of:
        0.096361436 = weight(_text_:760 in 2414) [ClassicSimilarity], result of:
          0.096361436 = score(doc=2414,freq=4.0), product of:
            0.1486828 = queryWeight, product of:
              8.29569 = idf(docFreq=29, maxDocs=44218)
              0.017922899 = queryNorm
            0.64810073 = fieldWeight in 2414, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.29569 = idf(docFreq=29, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2414)
        0.05645671 = weight(_text_:wirtschaftswissenschaften in 2414) [ClassicSimilarity], result of:
          0.05645671 = score(doc=2414,freq=4.0), product of:
            0.11380646 = queryWeight, product of:
              6.3497796 = idf(docFreq=209, maxDocs=44218)
              0.017922899 = queryNorm
            0.49607652 = fieldWeight in 2414, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.3497796 = idf(docFreq=209, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2414)
        0.01643512 = weight(_text_:neue in 2414) [ClassicSimilarity], result of:
          0.01643512 = score(doc=2414,freq=2.0), product of:
            0.07302189 = queryWeight, product of:
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.017922899 = queryNorm
            0.22507115 = fieldWeight in 2414, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2414)
        0.031018337 = weight(_text_:medien in 2414) [ClassicSimilarity], result of:
          0.031018337 = score(doc=2414,freq=4.0), product of:
            0.084356464 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.017922899 = queryNorm
            0.36770552 = fieldWeight in 2414, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2414)
        0.017436959 = product of:
          0.034873918 = sum of:
            0.034873918 = weight(_text_:dienste in 2414) [ClassicSimilarity], result of:
              0.034873918 = score(doc=2414,freq=2.0), product of:
                0.106369466 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.017922899 = queryNorm
                0.32785648 = fieldWeight in 2414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2414)
          0.5 = coord(1/2)
      0.16666667 = coord(5/30)
    
    Abstract
    Hardly any company has written as rapid a success story in recent years as Google. Launched about ten years ago as an Internet search engine with a handful of employees, Google has grown into a successful corporation with more than 10,000 employees and billions in revenue. In the process, Google has changed fundamentally. Google is now more than a search engine. Google is a news service. Google is a multimedia archive. Google is a payment system. Google is a geographic information system. Google is an advertising broker. Google is a software vendor. Google may soon also be a telephone company ... and much more! Google invests billions in expanding its search technology and extending its advertising network into the classical advertising media (TV, radio, print). Google is working on entirely new offerings, such as making knowledge available by digitizing gigantic libraries. Google is pushing into new segments such as mobile platforms, geo-information, and office communication applications, is engaged in building new communication networks, and is developing services that could lastingly change our (digital) lives. How does all this fit together? A closer look at Google's individual activities yields an intriguing overall picture and raises the question of which goals and which strategy Google is pursuing. This overall picture - under the heading of the Google economy - is analyzed, and the possible effects on various industries are assessed.
    Classification
    AP 18420 Allgemeines / Medien- und Kommunikationswissenschaften, Kommunikationsdesign / Arten des Nachrichtenwesens, Medientechnik / Internet
    QP 380 Wirtschaftswissenschaften / Allgemeine Betriebswirtschaftslehre / Unternehmensführung / Unternehmertum. Unternehmerbiographien
    SR 760
    RVK
    AP 18420 Allgemeines / Medien- und Kommunikationswissenschaften, Kommunikationsdesign / Arten des Nachrichtenwesens, Medientechnik / Internet
    QP 380 Wirtschaftswissenschaften / Allgemeine Betriebswirtschaftslehre / Unternehmensführung / Unternehmertum. Unternehmerbiographien
    SR 760
  18. Sieglerschmidt, J.: Wissensordnungen im analogen und im digitalen Zeitalter (2017) 0.03
    0.034538776 = product of:
      0.2590408 = sum of:
        0.15062587 = product of:
          0.30125174 = sum of:
            0.30125174 = weight(_text_:c3 in 4026) [ClassicSimilarity], result of:
              0.30125174 = score(doc=4026,freq=8.0), product of:
                0.17476578 = queryWeight, product of:
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.017922899 = queryNorm
                1.7237456 = fieldWeight in 4026, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4026)
          0.5 = coord(1/2)
        0.10041725 = product of:
          0.30125174 = sum of:
            0.30125174 = weight(_text_:c3 in 4026) [ClassicSimilarity], result of:
              0.30125174 = score(doc=4026,freq=8.0), product of:
                0.17476578 = queryWeight, product of:
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.017922899 = queryNorm
                1.7237456 = fieldWeight in 4026, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4026)
          0.33333334 = coord(1/3)
        0.0072956234 = product of:
          0.014591247 = sum of:
            0.014591247 = weight(_text_:online in 4026) [ClassicSimilarity], result of:
              0.014591247 = score(doc=4026,freq=2.0), product of:
                0.05439423 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.017922899 = queryNorm
                0.2682499 = fieldWeight in 4026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4026)
          0.5 = coord(1/2)
        7.0206285E-4 = product of:
          0.0021061886 = sum of:
            0.0021061886 = weight(_text_:a in 4026) [ClassicSimilarity], result of:
              0.0021061886 = score(doc=4026,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.10191591 = fieldWeight in 4026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4026)
          0.33333334 = coord(1/3)
      0.13333334 = coord(4/30)
    
    Content
    See: https://books.google.de/books?hl=de&lr=&id=0rtGDwAAQBAJ&oi=fnd&pg=PA35&dq=inhaltserschlie%C3%9Fung+OR+sacherschlie%C3%9Fung&ots=5u0TwCbFqE&sig=GGw3Coc21CINkone-6Lx8LaSAjY#v=onepage&q=inhaltserschlie%C3%9Fung%20OR%20sacherschlie%C3%9Fung&f=false.
    Footnote
    Reprinted from: Handbuch Kulturportale: Online-Angebote aus Kultur und Wissenschaft. Ed.: Ellen Euler et al. Berlin 2015.
    Type
    a
  19. Mandl, H.; Reinmann-Rothmeier, G.: Lernen mit neuen Medien (2001) 0.03
    0.0305068 = product of:
      0.1830408 = sum of:
        0.048019137 = weight(_text_:einzelne in 6741) [ClassicSimilarity], result of:
          0.048019137 = score(doc=6741,freq=2.0), product of:
            0.10548963 = queryWeight, product of:
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.017922899 = queryNorm
            0.45520246 = fieldWeight in 6741, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6741)
        0.008576217 = product of:
          0.017152434 = sum of:
            0.017152434 = weight(_text_:29 in 6741) [ClassicSimilarity], result of:
              0.017152434 = score(doc=6741,freq=2.0), product of:
                0.063047156 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.017922899 = queryNorm
                0.27205724 = fieldWeight in 6741, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6741)
          0.5 = coord(1/2)
        0.032539878 = weight(_text_:neue in 6741) [ClassicSimilarity], result of:
          0.032539878 = score(doc=6741,freq=4.0), product of:
            0.07302189 = queryWeight, product of:
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.017922899 = queryNorm
            0.44561815 = fieldWeight in 6741, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.074223 = idf(docFreq=2043, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6741)
        0.081241995 = weight(_text_:medien in 6741) [ClassicSimilarity], result of:
          0.081241995 = score(doc=6741,freq=14.0), product of:
            0.084356464 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.017922899 = queryNorm
            0.9630797 = fieldWeight in 6741, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6741)
        0.012663566 = product of:
          0.018995348 = sum of:
            0.001842915 = weight(_text_:a in 6741) [ClassicSimilarity], result of:
              0.001842915 = score(doc=6741,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.089176424 = fieldWeight in 6741, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6741)
            0.017152434 = weight(_text_:29 in 6741) [ClassicSimilarity], result of:
              0.017152434 = score(doc=6741,freq=2.0), product of:
                0.063047156 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.017922899 = queryNorm
                0.27205724 = fieldWeight in 6741, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6741)
          0.6666667 = coord(2/3)
      0.16666667 = coord(5/30)
    
    Abstract
    The enormous advances in new information and communication technologies provide an excellent basis for improving the quality and efficiency of education and training - though as a necessary, not a sufficient, condition. Besides technical innovation, what is urgently needed are new pedagogical and didactic concepts for designing multimedia teaching and learning environments that will truly outlast individual fads
    Content
    Contains the sections: Pedagogical foundations - Potentials of new media for education and training - Self-directed learning with new media - Problem-oriented learning with new media - Cooperative learning with new media - Instructional support with new media - Changed role distributions and structural changes through new media
    Date
    29. 1.1997 18:49:05
    Type
    a
  20. Wersig, G.: Informationsindustrie, Informationsgewerbe oder Informationsinitiative : was verändert die Informationslandschaft? (1983) 0.03
    0.028714372 = product of:
      0.2871437 = sum of:
        0.24466415 = weight(_text_:informationsgewerbe in 440) [ClassicSimilarity], result of:
          0.24466415 = score(doc=440,freq=2.0), product of:
            0.16837312 = queryWeight, product of:
              9.394302 = idf(docFreq=9, maxDocs=44218)
              0.017922899 = queryNorm
            1.4531069 = fieldWeight in 440, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              9.394302 = idf(docFreq=9, maxDocs=44218)
              0.109375 = fieldNorm(doc=440)
        0.017152434 = product of:
          0.03430487 = sum of:
            0.03430487 = weight(_text_:29 in 440) [ClassicSimilarity], result of:
              0.03430487 = score(doc=440,freq=2.0), product of:
                0.063047156 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.017922899 = queryNorm
                0.5441145 = fieldWeight in 440, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=440)
          0.5 = coord(1/2)
        0.025327131 = product of:
          0.037990697 = sum of:
            0.00368583 = weight(_text_:a in 440) [ClassicSimilarity], result of:
              0.00368583 = score(doc=440,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.17835285 = fieldWeight in 440, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=440)
            0.03430487 = weight(_text_:29 in 440) [ClassicSimilarity], result of:
              0.03430487 = score(doc=440,freq=2.0), product of:
                0.063047156 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.017922899 = queryNorm
                0.5441145 = fieldWeight in 440, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=440)
          0.6666667 = coord(2/3)
      0.1 = coord(3/30)
    
    Source
    Deutscher Dokumentartag 1982, Lübeck-Travemünde, 29.-30.9.1982: Fachinformation im Zeitalter der Informationsindustrie. Bearb.: H. Strohl-Goebel
    Type
    a
