Search (349 results, page 1 of 18)

  • Active filter: type_ss:"x"
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas [Freedom and responsibility in Hans Jonas] (2011) 0.60
    
    Content
     Cf.: http://creativechoice.org/doc/HansJonas.pdf.
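The relevance figures shown with each result (0.60, 0.47, ...) come from Lucene's ClassicSimilarity, i.e. tf-idf with query and length normalization. A minimal sketch of how a single term's contribution is computed, assuming the standard ClassicSimilarity formulas; the constants (docFreq=24, maxDocs=44218, queryNorm=0.031400457, fieldNorm=0.09375) are taken from this result list:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm;
    # the term's contribution is their product
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm
    field_weight = math.sqrt(freq) * i * field_norm
    return query_weight * field_weight
```

Per-document scores are then summed over the matching query terms and scaled by a coordination factor (the coord(n/16) lines), which is how the headline values above arise.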
  2. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.47
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
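The LocalMaxs selection step mentioned in the abstract can be sketched as follows: an n-gram is kept as a multi-word term when its "glue" is higher than the glue of every (n+1)-gram containing it. This is a simplified illustration restricted to bigrams, using a frequency-based SCP glue with fair dispersion, not the thesis's actual model; the toy corpus and the min_freq filter are assumptions:

```python
from collections import Counter

def ngram_counts(tokens, max_n=3):
    # Count all n-grams up to max_n as tuples of tokens
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

def glue(counts, w):
    # Frequency-based SCP with fair dispersion: f(w)^2 divided by the average
    # product of frequencies over all binary splits of w
    if len(w) == 1:
        return float(counts[w])
    avp = sum(counts[w[:i]] * counts[w[i:]] for i in range(1, len(w))) / (len(w) - 1)
    return counts[w] ** 2 / avp

def localmaxs_bigrams(tokens, min_freq=2):
    # Keep a bigram when its glue strictly exceeds the glue of every trigram
    # that contains it (LocalMaxs condition for n = 2)
    counts = ngram_counts(tokens, max_n=3)
    selected = set()
    for w in (g for g in counts if len(g) == 2 and counts[g] >= min_freq):
        supers = [s for s in counts if len(s) == 3 and (s[:2] == w or s[1:] == w)]
        if all(glue(counts, w) > glue(counts, s) for s in supers):
            selected.add(" ".join(w))
    return selected
```

On a toy corpus, repeated cohesive pairs such as "information retrieval" survive the test while incidental neighbors are filtered out; the full method generalizes this to arbitrary n and adds the comparison against (n-1)-gram subgrams.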
    Content
     A thesis presented to The University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
     10.1.2013 19:22:47
  3. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus [Assignment of DDC subject groups by means of a subject-headings thesaurus] (2021) 0.39
    
    Abstract
     Presented is the construction of a thematically ordered thesaurus based on the subject headings of the Gemeinsame Normdatei (GND), using the DDC notations contained therein. The top level of this thesaurus is formed by the DDC subject groups of the German National Library. The thesaurus is constructed rule-based, applying Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scientific publications by means of a computational-linguistic extractor, which processes digital full texts. The extractor identifies subject headings by comparing character strings against the labels in the thesaurus, ranks the hits by relevance in the text, and returns the assigned subject groups in rank order. The underlying assumption is that the sought subject group is returned among the top ranks. The performance of the method is validated in a three-stage procedure. First, a gold standard is created from documents retrievable in the DNB online catalogue, based on metadata and the findings of a brief inspection. The documents are distributed over 14 of the subject groups, with a lot size of 50 documents each. All documents are processed with the extractor and the categorization results are documented. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorization and for a ranked return of the subject groups.
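The matching-and-ranking step described in the abstract can be sketched in a few lines; the mini-thesaurus, its labels, and the DDC group codes below are invented for illustration (the real thesaurus is built from GND subject headings):

```python
from collections import Counter

# Hypothetical mini-thesaurus: label -> DDC subject group (DNB Sachgruppe code)
THESAURUS = {
    "thesaurus": "020",
    "klassifikation": "020",
    "linked data": "004",
}

def assign_groups(text):
    # Count label matches in the full text (case-insensitive string comparison),
    # accumulate hit mass per subject group, and return groups in rank order
    t = text.lower()
    hits = Counter()
    for label, group in THESAURUS.items():
        n = t.count(label)
        if n:
            hits[group] += n
    return [group for group, _ in hits.most_common()]
```

The validation described above then checks whether the correct subject group appears among the top-ranked entries of this list.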
    Content
     Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf.
    Imprint
     Wien : Universität / Library and Information Studies
  4. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.39
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word based and entity based representations together with their uncertainties considered. At last, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
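The bag-of-entities representation described above can be sketched compactly: documents are vectors of entity-annotation counts, and ranking is performed in the entity space. A dot-product scorer is used below; the entity IDs and the linker output are hypothetical, and the dissertation's actual ranking model is richer than this:

```python
from collections import Counter

def bag_of_entities(entity_annotations):
    # entity_annotations: entity IDs a (hypothetical) entity linker produced for a text
    return Counter(entity_annotations)

def rank_by_entities(query_entities, doc_vectors):
    # Score each document by the dot product between the query's and the
    # document's bag-of-entities vectors, then sort descending
    q = bag_of_entities(query_entities)
    scores = {d: sum(q[e] * v[e] for e in q) for d, v in doc_vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Documents that are annotated with more of the query's entities, more often, rise to the top, which is the intuition behind performing ranking in the entity space rather than the word space.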
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  5. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.35
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state of the art NLP tools used in generating formal lightweight ontologies from their informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, faceted lightweight ontology (FLO). FLO is a lightweight ontology in which terms, present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of the groups of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
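The backbone-taxonomy property of a lightweight ontology (each child node's concept is more specific than its parent's) can be sketched as a parent map plus an ancestor test; the node labels below are invented for illustration:

```python
# A lightweight ontology as a backbone taxonomy: child label -> parent label
# (hypothetical nodes; in a formal LO each edge denotes concept subsumption)
PARENT = {
    "faceted search": "search",
    "search": "information access",
    "ontology": "knowledge representation",
}

def subsumes(general, specific):
    # True when `general` is an ancestor of `specific`, i.e. the parent node's
    # concept subsumes the child node's more specific concept
    node = specific
    while node in PARENT:
        node = PARENT[node]
        if node == general:
            return True
    return False
```

Document classification and semantic search over a formal lightweight ontology reduce to such subsumption checks, which is why the disambiguation quality of the node labels matters so much.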
    Content
     PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
    Imprint
     Trento : University / Department of Information Engineering and Computer Science
  6. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései [Questions of the automatic interpretation and analysis of UDC numbers] (2018) 0.35
    Abstract
    Converting UDC numbers manually to a complex format such as the one mentioned above is an unrealistic expectation; supporting the building of these representations, as far as possible automatically, is a well-founded requirement. An additional advantage of this approach is that existing records could also be processed and converted. In my dissertation I also aim to prove that it is possible to design and implement an algorithm that converts pre-coordinated UDC numbers into the introduced format by identifying all their elements and revealing their whole syntactic structure. I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support building, retrieving, and analyzing UDC numbers without compromises, taking into account the whole syntactic richness of the scheme and storing UDC numbers in a way that preserves the meaning of pre-coordination. The research has also included the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without additional effort, and even retrospectively to existing collections.
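    The kind of syntactic decomposition described above can be illustrated with a deliberately simplified toy parser. It only splits a pre-coordinated classmark at the common connecting signs (+, /, :) and peels off a trailing special auxiliary introduced by "-"; the real grammar handled by the dissertation's interpreter (common auxiliaries in parentheses, brackets, quotation marks, language and form auxiliaries, etc.) is far richer:

```python
import re

# Toy sketch only: not the dissertation's actual algorithm or XML schema.
def split_udc(classmark):
    """Split a simple pre-coordinated UDC classmark into typed elements."""
    elements = []
    # re.split with a capturing group keeps the connecting signs as tokens.
    for part in re.split(r"([+/:])", classmark):
        if part in {"+", "/", ":"}:
            elements.append(("relation", part))
            continue
        # Main number, optionally followed by one special auxiliary ("-...").
        m = re.match(r"([\d.]+)(-[\d.]+)?$", part)
        if not m:
            raise ValueError(f"unparsed element: {part!r}")
        elements.append(("number", m.group(1)))
        if m.group(2):
            elements.append(("special_auxiliary", m.group(2)))
    return elements

print(split_udc("821.161.1-31:82.09"))
```

Emitting such typed elements (rather than a flat string) is exactly what makes an XML representation of the classmark's structure possible.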
    Content
    Cf. also: New automatic interpreter for complex UDC numbers. At: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf
  7. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.34
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, not just its representation). This leads to very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query. This implies the necessity of including the user more actively in the retrieval process, in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively.
    Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner, and to interpret the retrieval results accordingly, is the key issue in realizing much more meaningful information retrieval systems.
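    One ingredient of ontology-driven query refinement of the kind described here can be sketched as query expansion over a domain taxonomy: a keyword is broadened with its more specific concepts so that documents indexed under subconcepts are also matched. The taxonomy and terms below are invented for illustration and are not the thesis's Librarian Agent:

```python
# Hedged sketch: expand a query term with all descendants in a toy taxonomy.
TAXONOMY = {
    "vehicle": ["car", "bicycle"],
    "car": ["convertible"],
}

def expand(term, taxonomy):
    """Return the term plus every more specific concept below it."""
    result, stack = [term], [term]
    while stack:
        for child in taxonomy.get(stack.pop(), []):
            result.append(child)
            stack.append(child)
    return result

print(expand("vehicle", TAXONOMY))
```

A real system would additionally rank and filter the candidate refinements using the user's preferences and the ambiguity of the original query, as the abstract describes.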
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
    Theme
    Semantic Web
  8. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.30
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  9. Toussi, M.: Information Retrieval am Beispiel der Wide Area Information Server (WAIS) und dem World Wide Web (WWW) (1996) 0.06
  10. Ebeid, N.: Kataloganreicherung / user-created content : oder: Wieso funktioniert mein OPAC nicht wie Amazon? (2009) 0.05
    Abstract
    The emergence of Web 2.0 in 2004 signalled a shift in internet use. Web 2.0 puts users and their information needs at the centre. This also poses a challenge for libraries. The online catalogue, as a central service, is particularly affected by these changes. Library software vendors and libraries are confronted with the fact that more and more people use the internet for their information needs and bypass library catalogues. Library websites can often not compete with interfaces such as those of Google or Amazon in terms of usability. In order not to fall further behind in users' perception, libraries must thoroughly analyse their concepts and offerings. The aim of this thesis is to point out the weaknesses and problems of conventional online catalogues and the possibilities that exist for making the OPAC more attractive to users. First, information on the topic is collected from the relevant literature, followed by a summary and evaluation of that literature. Then several university libraries are presented that already offer, or are planning, Web 2.0 applications in their OPACs. In addition, qualitative interviews are conducted with librarians responsible for an OPAC 2.0; the statements made in these interviews are intended to establish the current state of OPAC development. A key finding of this thesis is that there is no single "OPAC 2.0", and that every library has specific requirements for an online catalogue. It is therefore suggested that libraries first carefully analyse themselves and their working environment, in particular their target groups.
    Given that there are many possible attributes of an OPAC 2.0, decision-makers in libraries should weigh carefully which instruments and applications are necessary and useful.
    Imprint
    Eisenstadt : Fachhochschule; Fachbereich Information und Wissensmanagement
  11. Artemenko, O.; Shramko, M.: Entwicklung eines Werkzeugs zur Sprachidentifikation in mono- und multilingualen Texten (2005) 0.04
    Abstract
    With the spread of the internet, the number of documents available on the World Wide Web keeps growing. Guaranteeing internet users efficient access to the information they want is becoming a major challenge for the modern information society. A variety of tools is already in use to help users orient themselves in the growing flood of information. However, the enormous amount of unstructured and distributed information is not the only difficulty to be overcome in developing such tools. The increasing multilingualism of web content results in a need for language-identification software that identifies the language(s) of electronic documents for targeted further processing. Such language identifiers can, for example, be used effectively in multilingual information retrieval, since processes of automatic index building such as stemming, stop-word extraction etc. build on the language-identification results. This thesis presents the new system "LangIdent" for language identification of electronic text documents, intended primarily for teaching and research at the University of Hildesheim. "LangIdent" contains a selection of common algorithms for monolingual language identification, which the user can select and configure interactively. In addition, a new algorithm was implemented in the system that makes it possible to identify the languages in which a multilingual document is written. The identification is not limited to a list of detected languages; rather, the text is split into monolingual sections, each annotated with the identified language.
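    One classic monolingual identification technique of the kind such systems typically include is character-n-gram frequency profiling: a profile of the input text is compared against per-language reference profiles. The tiny "training" snippets below are invented for illustration and are not LangIdent's actual models or algorithm:

```python
from collections import Counter

def profile(text, n=3):
    """Character n-gram frequency profile (padded with spaces)."""
    text = f" {text.lower()} "
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

# Toy reference profiles; a real system trains on large corpora.
TRAINING = {
    "de": "der die das und ist nicht mit von",
    "en": "the and is not with of to in that",
}
PROFILES = {lang: profile(txt) for lang, txt in TRAINING.items()}

def identify(text):
    """Return the language whose profile overlaps most with the text's."""
    p = profile(text)
    def overlap(lang):
        return sum(min(c, PROFILES[lang][g]) for g, c in p.items())
    return max(PROFILES, key=overlap)

print(identify("das ist nicht von der"))
```

The multilingual case described in the abstract goes further: instead of one global decision, the text is segmented and each section is assigned its own language.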
  12. Steiner, E.S.: OPAC 2.0 : Mit Web 2.0-Technologie zum Bibliothekskatalog der Zukunft? (2007) 0.04
    Abstract
    This thesis essentially consists of two parts: the first part introduces the term Web 2.0 and the general conditions of the associated technical developments. It also presents exemplary techniques that can be attributed to Web 2.0, along with some sample applications. The second part takes up the discussion around Library 2.0 and then looks more closely at Web 2.0 techniques in library catalogues, i.e. the OPAC 2.0. Various techniques that can be applied in an OPAC 2.0 are discussed, and finally some exemplary OPACs are presented.
  13. Kacmaz, E.: Konzeption und Erstellung eines Online-Nachschlagewerks für den Bereich Web Usability/Accessibility (2004)
    
    Abstract
    The lexicographic process of creating an online reference work is presented step by step, using a web-based content management system whose intended users are the students of the Library and Information Management programme at the Hamburg University of Applied Sciences. Articles on the topics of accessibility and web usability are written by the author herself.
    Imprint
    Hamburg : Hochschule für Angewandte Wissenschaften, FB Bibliothek und Information
  14. Tzitzikas, Y.: Collaborative ontology-based information indexing and retrieval (2002)
    
    Abstract
    An information system like the Web is a continuously evolving system consisting of multiple heterogeneous information sources, covering a wide domain of discourse, and a huge number of users (human or software) with diverse characteristics and needs, that produce and consume information. The challenge nowadays is to build a scalable information infrastructure enabling the effective, accurate, content-based retrieval of information, in a way that adapts to the characteristics and interests of the users. The aim of this work is to propose formally sound methods for building such an information network based on ontologies which are widely used and are easy to grasp by ordinary Web users. The main results of this work are:
    - A novel scheme for indexing and retrieving objects according to multiple aspects or facets. The proposed scheme is a faceted scheme enriched with a method for specifying the combinations of terms that are valid. We give a model-theoretic interpretation to this model and we provide mechanisms for inferring the valid combinations of terms. This inference service can be exploited for preventing errors during the indexing process, which is very important especially in the case where the indexing is done collaboratively by many users, and for deriving "complete" navigation trees suitable for browsing through the Web. The proposed scheme has several advantages over the hierarchical classification schemes currently employed by Web catalogs, namely, conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (the update operations can be formulated more easily and be performed more efficiently).
    - A flexible and efficient model for building mediators over ontology-based information sources. The proposed mediators support several modes of query translation and evaluation which can accommodate various application needs and levels of answer quality.
The proposed model can be used for providing users with customized views of Web catalogs. It can also complement the techniques for building mediators over relational sources so as to support approximate translation of partially ordered domain values.
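The first result listed in the abstract, a faceted scheme enriched with a specification of which term combinations are valid, can be pictured as a guard applied at indexing time. The facets, terms, and the set of invalid combinations below are illustrative stand-ins, not the thesis's formal model.

```python
# Hypothetical faceted scheme: each facet is a small set of terms.
FACETS = {
    "Sports": {"SeaSki", "Windsurfing"},
    "Location": {"Crete", "Alps"},
}

# Validity is specified over combinations of terms from different facets:
# sea sports cannot be combined with a mountain location.
INVALID = {("SeaSki", "Alps"), ("Windsurfing", "Alps")}

def is_valid(description: tuple) -> bool:
    """Accept only descriptions whose facet terms form a known valid combination."""
    sport, location = description
    assert sport in FACETS["Sports"] and location in FACETS["Location"]
    return (sport, location) not in INVALID

def index(catalog: dict, obj: str, description: tuple) -> None:
    """Add an object under a compound description, rejecting invalid
    combinations at entry time (useful for collaborative indexing)."""
    if not is_valid(description):
        raise ValueError(f"invalid combination: {description}")
    catalog.setdefault(description, set()).add(obj)
```

In the thesis the valid combinations are not enumerated by hand as here but inferred from a compact declarative specification; the point of the sketch is only where the check sits in the indexing process.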
  15. Glockner, M.: Semantik Web : Die nächste Generation des World Wide Web (2004)
    
    Imprint
    Potsdam : Fachhochschule, Institut für Information und Dokumentation
  16. Külper, U.; Will, G.: ¬Das Projekt Bücherschatz : interdisziplinäre und partizipative Entwicklung eines kindgerechten Bibliotheks-Online-Kataloges (1996)
    
    Abstract
    In 1995 the prototype Bücherschatz, a library online catalogue for children, was created in interdisciplinary cooperation. The participants were students and a professor of the FH Hamburg, Department of Library and Information, a designer, and two computer scientists of the University of Hamburg. This report describes both the product Bücherschatz and the process of its development. One focus is the discussion of theoretical models of software engineering, here STEPS and prototyping, and their adaptation to concrete project requirements. Questions of designing child-appropriate software, of organising a large project team, and of the form of user participation are addressed as well. The overall project is placed in a scientific context of computer science, and central experiences and insights regarding interdisciplinary and participatory software development are summarised.
  17. Hüsken, P.: Information Retrieval im Semantic Web (2006)
    
    Abstract
    The Semantic Web denotes an extended World Wide Web (WWW) that models the meaning of presented content in new standardised languages such as RDF Schema and OWL. This thesis deals with the information retrieval aspect, i.e. it investigates to what extent methods of information search can be transferred to modelled knowledge. The characteristic features of IR systems, such as vague queries and support for uncertain knowledge, are treated in the context of the Semantic Web. The focus is on searching for facts within a knowledge domain that are either explicitly modelled or can be derived implicitly by applying inference. Building on the retrieval engine PIRE, developed at the University of Duisburg-Essen, the application of uncertain inference with probabilistic predicate logic (pDatalog) is implemented.
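pDatalog-style uncertain inference attaches probabilities to facts and rules. As a rough illustration only (not PIRE's implementation; the rule, weight, and facts are invented), the probability of a derived fact can be computed as the product along its derivation under an independence assumption:

```python
# Toy probabilistic inference in the spirit of pDatalog.
# Facts carry probabilities; a derived fact's probability is the product
# of the rule weight and the probabilities of the facts it is derived from.
facts = {
    ("about", "doc1", "semantic_web"): 0.8,   # uncertain indexing decision
    ("broader", "semantic_web", "web"): 1.0,  # certain thesaurus link
}

# Confidence of the rule: about(D, T) & broader(T, U) -> about(D, U)
RULE_WEIGHT = 0.7

def derive(facts: dict) -> dict:
    """Apply the single rule once, keeping the best derivation per fact."""
    derived = dict(facts)
    for (p1, d, t), pr1 in facts.items():
        for (p2, t2, u), pr2 in facts.items():
            if p1 == "about" and p2 == "broader" and t == t2:
                key = ("about", d, u)
                prob = RULE_WEIGHT * pr1 * pr2
                derived[key] = max(derived.get(key, 0.0), prob)
    return derived
```

A real pDatalog engine evaluates arbitrary recursive rule sets and handles dependent derivations; the sketch only shows how uncertainty propagates from indexing facts to retrievable answers.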
    Theme
    Semantic Web
  18. Wessel, S.: ¬Die Retrieval-Software von Online Computer Systems : vergleichende Darstellung ausgewählter CD-ROM-Bibliographien (1992)
    
  19. Siever, C.M.: Multimodale Kommunikation im Social Web : Forschungsansätze und Analysen zu Text-Bild-Relationen (2015)
    
    Abstract
    Multimodality is a typical feature of communication in the Social Web. This volume focuses on communication in photo communities, in particular on the two communicative practices of social tagging and of writing notes within images. For tags, semantic text-image relations are in the foreground: tags serve knowledge representation, so an adequate verbalisation of the images is indispensable. Note-image relations are of interest from a pragmatic perspective: the information of a communicative act is distributed complementarily across text and image, which is reflected in various linguistic phenomena. A diachronic comparison with postcard communication and an excursus on communication with emojis round off the book.
    RSWK
    Text / Bild / Computerunterstützte Kommunikation / Soziale Software (SBB)
    Subject
    Text / Bild / Computerunterstützte Kommunikation / Soziale Software (SBB)
  20. Pfeiffer, S.: Entwicklung einer Ontologie für die wissensbasierte Erschließung des ISDC-Repository und die Visualisierung kontextrelevanter semantischer Zusammenhänge (2010)
    
    Abstract
    Today, information of every kind is accessible to a broad section of the population via the World Wide Web (WWW). It is difficult, however, to prepare the existing documents in such a way that their contents can be interpreted by machines. The Semantic Web, a further development of the WWW, aims to change this by offering Web content in machine-understandable formats. This allows automation processes to be used for optimising search queries and for interlinking knowledge bases. The Web Ontology Language (OWL) is one possible language in which knowledge can be described and stored (see chapter 4, OWL). The software product Protégé supports the OWL standard, which is why a large part of the modelling work was carried out in Protégé. At present, users searching for information on the Internet are in most cases supported only by the keyword indexing of document contents performed by search engine operators, i.e. documents can only be searched for a particular word or phrase. The result list then has to be inspected and ranked by relevance by the users themselves, which can be a very time-consuming and labour-intensive process. This is exactly where the Semantic Web can make a substantial contribution to preparing information for the user, since the search results are already subject to semantic checking and linking before they are output. Irrelevant information sources are therefore excluded from the outset, which speeds up finding the desired documents and information in a given knowledge domain.
Various approaches are being pursued to improve the interlinking of data, information, and knowledge in the WWW. Besides the Semantic Web with its various manifestations, there are other ideas and concepts that support the linking of knowledge. Forums, social networks, and wikis are one way of exchanging knowledge. In wikis, knowledge is bundled in the form of articles in order to make it available to a broad audience; information offered there should, however, be questioned critically, since in most cases the authors of the articles do not have to take responsibility for the published content. Another way of linking knowledge is the Web of Linked Data, in which structured data of the WWW are connected to one another by references to other data sources; in the course of a search, users are thus referred to thematically related and linked data sources. The geoscientific metadata with their contents and mutual relations, which at the GFZ are stored among other places in the Information System and Data Center (ISDC), are modelled in this thesis as an ontology using the language constructs of OWL. This ontology is intended to decisively improve the representation and retrieval of ISDC-specific domain knowledge through the semantic interlinking of persistent ISDC metadata. The modelling possibilities presented in this thesis, first with the Extensible Markup Language (XML) and later with OWL, map the existing metadata holdings on a semantic level (see figure 2). Through the defined use of the semantics available in OWL, machines can extract added value from the metadata and make it available to the user. Geoscientific information, data, and knowledge can be placed in semantic contexts and represented comprehensibly. Supporting information can also easily be integrated into the ontology.
This includes, for example, images of the instruments, platforms, or persons stored in the ISDC. Search queries concerning geoscientific phenomena can be posed and answered even without expert knowledge of the relevant concepts and their relations. Information retrieval and preparation gain in quality and make full use of the existing resources.
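The kind of OWL modelling described above can be pictured with a small fragment in Turtle syntax. The class, property, and individual names below are hypothetical stand-ins, not the actual ISDC ontology:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/isdc#> .

# Hypothetical classes for ISDC-style metadata
ex:Platform   a owl:Class .
ex:Instrument a owl:Class .

# An instrument is mounted on a platform; declaring the inverse property
# lets a reasoner answer queries in both directions without storing both.
ex:mountedOn  a owl:ObjectProperty ;
    rdfs:domain ex:Instrument ;
    rdfs:range  ex:Platform .
ex:carries    a owl:ObjectProperty ;
    owl:inverseOf ex:mountedOn .

# An invented individual for illustration
ex:ExamplePlatform a ex:Platform .
```

It is this explicitly declared semantics (domains, ranges, inverses, class membership) that allows the semantic checking and inference over search results described in the abstract.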
