Search (334 results, page 1 of 17)

  • Active filter: type_ss:"x"
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.45
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
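The relevance figure shown beside each result is built from Lucene's ClassicSimilarity (tf-idf) term weights. As a minimal sketch, the per-term figures reported in the explain output for this result (freq=2.0, docFreq=24, maxDocs=44218, queryNorm=0.030480823, fieldNorm=0.09375) recombine into the 0.29046974 term weight as follows; the formulas are standard ClassicSimilarity, the constants are taken from that output:

```python
import math

def tf(freq: float) -> float:
    # term-frequency factor: square root of the raw in-field count
    return math.sqrt(freq)

def idf(doc_freq: int, max_docs: int) -> float:
    # inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm;
    # the term's contribution is their product
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

# values taken from the explain output for result 1
print(term_score(2.0, 24, 44218, 0.030480823, 0.09375))
```

The per-term weights are then summed and scaled by a coordination factor (the fraction of query terms that matched) to give the document score.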
  2. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.38
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
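The thesis's own association measures are not reproduced here; as a hedged illustration of the general idea (scoring how strongly adjacent words attract each other, so that strong pairs become multi-word term candidates), a standard measure such as pointwise mutual information over bigram counts can serve:

```python
import math
from collections import Counter

def pmi(bigram, unigrams: Counter, bigrams: Counter, n_tokens: int) -> float:
    # PMI = log2( P(w1,w2) / (P(w1) * P(w2)) ); positive values mean the
    # pair co-occurs more often than independence would predict
    w1, w2 = bigram
    p_xy = bigrams[bigram] / n_tokens
    p_x = unigrams[w1] / n_tokens
    p_y = unigrams[w2] / n_tokens
    return math.log2(p_xy / (p_x * p_y))

# toy corpus for illustration only
tokens = ("information retrieval and information extraction "
          "use information retrieval models").split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

# "information retrieval" scores above zero: a multi-word term candidate
print(pmi(("information", "retrieval"), unigrams, bigrams, len(tokens)))
```

In the LocalMaxs setting, a candidate n-gram is kept when its association score is a local maximum relative to its shorter and longer neighbours.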
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10.1.2013 19:22:47
  3. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.36
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
    Imprint
    Wien / Library and Information Studies : Universität
  4. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.31
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word based and entity based representations together with their uncertainties considered. At last, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
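The bag-of-entities representation described above can be caricatured in a few lines: documents and queries are represented by their entity annotations (entity linking is assumed to have happened upstream), and a simple ranking signal is the frequency-weighted overlap in entity space. This is a hedged sketch of the idea, not the dissertation's actual ranking model:

```python
from collections import Counter

def entity_score(query_entities: list[str], doc_entities: list[str]) -> float:
    # sum, over query entities, of how often each is annotated in the document
    doc_bag = Counter(doc_entities)
    return sum(doc_bag[e] for e in query_entities)

# toy annotated collection; entity names are illustrative only
docs = {
    "d1": ["Carnegie_Mellon_University", "Information_retrieval", "Knowledge_base"],
    "d2": ["Ontology_(information_science)", "Semantic_Web"],
}
query = ["Information_retrieval", "Knowledge_base"]

ranked = sorted(docs, key=lambda d: entity_score(query, docs[d]), reverse=True)
print(ranked)  # → ['d1', 'd2']
```

The thesis goes further by learning which entities best represent the query and by combining entity-space evidence with word-based features.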
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https%3A%2F%2Fwww.cs.cmu.edu%2F~cx%2Fpapers%2Fknowledge_based_text_representation.pdf&usg=AOvVaw0SaTSvhWLTh__Uz_HtOtl3.
  5. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.28
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem reaches a level of complexity which can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, and not its representation). This leads to a very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm to a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, due to unfamiliarity with the underlying repository and/or query syntax, merely approximates his information need in a query, implies a necessity to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of a user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, whereas the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between a user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact to a multidimensional context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the possibilities to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly are key issues for realizing much more meaningful information retrieval systems.
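The ontology-driven query interpretation described above can be sketched in miniature: a domain taxonomy (here a toy dict, not the thesis's actual ontology machinery) maps each concept to its narrower concepts, and a query concept is expanded transitively so that retrieval can match documents at the level of meaning rather than keyword strings:

```python
# toy taxonomy: concept -> list of narrower concepts (illustrative names)
TAXONOMY = {
    "vehicle": ["car", "bicycle"],
    "car": ["sports_car"],
}

def expand(concept: str) -> set[str]:
    # collect the concept plus all of its narrower concepts, transitively
    result = {concept}
    for child in TAXONOMY.get(concept, []):
        result |= expand(child)
    return result

print(sorted(expand("vehicle")))  # → ['bicycle', 'car', 'sports_car', 'vehicle']
```

An interactive refinement loop, as in the Librarian Agent process, would instead present such candidate narrower concepts to the user and let the choice sharpen the query.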
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
    Theme
    Semantic Web
  6. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.28
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal counterparts. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the limited disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limited scope for ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies, namely the faceted lightweight ontology (FLO). A FLO is a lightweight ontology in which the terms present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of a group of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
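    The backbone-taxonomy property described in the abstract - every child node's concept is more specific than its parent's - can be sketched as follows. This is a minimal illustration with hypothetical node labels, not the dissertation's implementation:

```python
# Minimal sketch of a lightweight ontology (LO): a backbone taxonomy in which
# each child concept is subsumed by its parent concept. Node labels are
# hypothetical examples.

# parent relation of the backbone taxonomy: child -> parent
taxonomy = {
    "medieval history": "history",
    "modern history": "history",
    "history": "humanities",
    "humanities": "root",
}

def ancestors(node):
    """Return the chain of increasingly general concepts above `node`."""
    chain = []
    while node in taxonomy:
        node = taxonomy[node]
        chain.append(node)
    return chain

def subsumes(general, specific):
    """True if `general` is the same as, or an ancestor of, `specific`."""
    return general == specific or general in ancestors(specific)

# Document classification against the LO: a document indexed under a specific
# node is also retrievable under every more general ancestor node.
assert subsumes("history", "medieval history")
assert subsumes("humanities", "modern history")
assert not subsumes("medieval history", "history")
```

    Formal reasoning over such a structure reduces to subsumption checks of this kind, which is what makes lightweight ontologies tractable for classification and semantic search.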
    Content
    PhD Dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
    Imprint
    Trento : University / Department of information engineering and computer science
  7. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014)
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  8. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018)
    Content
    See also: New automatic interpreter for complex UDC numbers. At: <https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf>
  9. Toussi, M.: Information Retrieval am Beispiel der Wide Area Information Server (WAIS) und dem World Wide Web (WWW) (1996)
  10. Knitel, M.: The application of linked data principles to library data : opportunities and challenges (2012)
    Abstract
    Over the last few years, Linked Data has become a dominant topic in library science. As a standard for recording and exchanging data, it has numerous points of contact with traditional library techniques. The first part of this thesis introduces the fundamental technologies of this new paradigm and then examines their application to library data. Following the central principles of the Linked Data initiative, it takes a closer look at the addressing of entities through URIs, the application of the RDF data model, and the linking of heterogeneous data sets. Particular attention is paid to the challenges that emerge in the process: ensuring high-quality information, permanently addressing content on the World Wide Web, and problems of interoperability between metadata standards. The final part of the thesis outlines a program that represents a possible extension of the search engine of the Austrian library network. Its prototypical implementation allows a realistic assessment of the current possibilities of Linked Data and underscores many of the topics developed theoretically beforehand. It becomes apparent that many hurdles must still be overcome before Linked Data can be used in full production; in particular, many projects are currently still at an early stage of maturity. On the other hand, the possibilities that would result from a consistent use of RDF are promising. RDF thus qualifies as a candidate to replace expiring bibliographic data formats such as MAB or MARC.
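    The three Linked Data principles the thesis examines - URI-addressed entities, the RDF triple model, and links across heterogeneous data sets - can be illustrated with a minimal sketch. The example.org URIs below are hypothetical; the predicate URIs are the standard Dublin Core and OWL terms:

```python
# A minimal sketch of Linked Data principles: entities are addressed by URIs,
# statements form RDF-style (subject, predicate, object) triples, and
# heterogeneous data sets are linked via owl:sameAs. The example.org URIs are
# illustrative only.

triples = [
    ("http://example.org/bib/work1", "http://purl.org/dc/terms/title", "Faust"),
    ("http://example.org/bib/work1", "http://purl.org/dc/terms/creator",
     "http://example.org/auth/goethe"),
    # link into an external data set (cross-dataset linking)
    ("http://example.org/auth/goethe", "http://www.w3.org/2002/07/owl#sameAs",
     "http://dbpedia.org/resource/Johann_Wolfgang_von_Goethe"),
]

def objects(subject, predicate):
    """All objects of triples matching the pattern (subject, predicate, ?)."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("http://example.org/bib/work1",
              "http://purl.org/dc/terms/title"))  # prints ['Faust']
```

    Because every node is a globally unique URI, a consumer can follow the owl:sameAs link to merge the local authority record with the external description - the mechanism by which RDF could replace closed formats such as MAB or MARC.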
    Footnote
    Wien, Univ., Lehrgang Library and Information Studies, Master-Thesis, 2012.
    Imprint
    Wien : Universität / ÖNB
  11. Glockner, M.: Semantik Web : Die nächste Generation des World Wide Web (2004)
    Imprint
    Potsdam : Fachhochschule, Institut für Information und Dokumentation
  12. Hüsken, P.: Information Retrieval im Semantic Web (2006)
    Abstract
    The Semantic Web denotes an extended World Wide Web (WWW) that models the meaning of presented content in new standardized languages such as RDF Schema and OWL. This thesis deals with the information retrieval aspect, i.e. it examines to what extent methods of information search can be transferred to modelled knowledge. The characteristic features of IR systems, such as vague queries and support for uncertain knowledge, are treated in the context of the Semantic Web. The focus is on searching for facts within a knowledge domain that are either modelled explicitly or can be derived implicitly by applying inference. Building on the retrieval engine PIRE developed at the University of Duisburg-Essen, the application of uncertain inference with probabilistic predicate logic (pDatalog) is implemented.
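    The core idea of uncertain inference with pDatalog - facts carry probabilities, and a rule combines them multiplicatively under an independence assumption - can be sketched in a few lines. This is a hand-rolled toy with made-up predicates and probabilities, not the PIRE engine, and it combines multiple derivations with a simple maximum rather than full probabilistic disjunction:

```python
# Toy illustration of pDatalog-style uncertain inference: probabilistic facts
# about(document, topic) and subtopic(child, parent), plus the rule
#   about(D, T) :- about(D, S) & subtopic(S, T)
# evaluated under an independence assumption. All values are hypothetical.

about = {("d1", "semantic_web"): 0.8}
subtopic = {("semantic_web", "www"): 0.9}

def about_inferred(doc, topic):
    """P(about(doc, topic)): explicit fact, or derived via one rule step,
    multiplying fact probabilities (independence assumption); multiple
    derivations are combined with max as a simplification."""
    p = about.get((doc, topic), 0.0)
    for (d, s), p_fact in about.items():
        if d == doc:
            p = max(p, p_fact * subtopic.get((s, topic), 0.0))
    return p

print(round(about_inferred("d1", "www"), 2))  # prints 0.72
```

    This is exactly the vague, graded matching the abstract contrasts with crisp description-logic reasoning: the derived fact about(d1, www) is retrieved with a score rather than a boolean truth value.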
    Theme
    Semantic Web
  13. Jarmuth, B.: Klassifikation Weiterbildung : Erstellung eines Datenbanken geeigneten Erschließungs- und Retrieval-Systems (1992)
    Imprint
    Hamburg : Fb Bibliothek und Information
  14. Woitas, K.: Bibliografische Daten, Normdaten und Metadaten im Semantic Web : Konzepte der bibliografischen Kontrolle im Wandel (2010)
    Abstract
    Bibliographic data, authority data and metadata in the Semantic Web - concepts of bibliographic control in transition. The title of this thesis points to an essential field of library and information science: bibliographic control. The second central concept is the Semantic Web, a term significant in the further development of the World Wide Web (WWW). At first glance, this is an unequal contest. On one side stands bibliographic control, which comprises the methods and means for the indexing of library objects and traditionally takes the form of formal and subject surrogates in catalogues. On the other side stands the buzzword Semantic Web, with its lofty connotations of a web that, through self-referentiality, "carries meaning" or is even "intelligent". How, then, did a research librarian and a member of the World Wide Web Consortium come, in 2007, to publish a paper together claiming that the semantic web would be a "more library-like" web? To approach this question, the historical development of the two information spheres, the library and the WWW, is first considered jointly. For as often - and quite rightly - as the informational revolution brought about by the Internet is invoked, the analogy of a worldwide virtual library also appears again and again. More precisely, the theoretical considerations that would later lead to the development of the Internet took their starting point (alongside cybernetics and emerging computer technology) from the concept of the library as an information store.
    Theme
    Semantic Web
  15. Liebwald, D.: Evaluierung juristischer Datenbanken (2003)
    Footnote
    Rez. in Mitt. VÖB 57(2004) H.2, S.71-73 (J. Pauser): "The work reviewed here is a legal dissertation at the University of Vienna, printed at the end of 2003. The author aims 'to show the foundations, origins, various approaches and development trends of legal information retrieval [...] in order finally to be able to measure the quality of the most important Austrian legal databases on the basis of the insights gained'. The chosen topic is compelling and of relevance to every information scientist and practising lawyer. Electronic databases containing legal information, whether online or offline, have for some time been revolutionizing legal work, and not only in Austria. Searching with these new 'tools' is already a standard part of every lawyer's basic training. Knowledge of the comprehensive possibilities of these new legal information sources massively influences the quality and, above all, the speed of legal work. Against this background, it is immensely important that legal databases meet users' needs as effectively as possible. In the first part of her work, Doris Liebwald defines the term 'information retrieval' as 'the representation, storage and organization of information and access to information' and then attempts to establish evaluation criteria for legal databases. With regard to the data (statutes, court decisions, legal literature) she demands 'completeness', 'currency' and 'authenticity'; from a technical point of view, the evaluation criteria are 'subject indexing', 'search functions and options', and the 'user-friendliness of the system interface', for instance through ease of use, comprehensibility, enrichment with help functions, and so on. Finally, from a practical-economic point of view, 'costs' and 'support' are also included."
    Imprint
    Wien : Verl. Österreich
  16. Kaluza, H.: Methoden und Verfahren bei der Archivierung von Internetressourcen : "The Internet Archive" und PANDORA (2002)
    Content
    "Die vorliegende Arbeit befasst sich mit den Methoden und Verfahren bei der Archivierung von Internetressourcen. Ziel ist es, anhand einer vergleichenden Beschreibung zweier zur Zeit aktiver, bzw. im Aufbau befindlicher Projekte, die Grundprobleme dieser speziellen Art der Archivierung darzustellen und deren unterschiedliche Vorgehensweisen beim Aufbau des Archivs zu beschreiben und zu vergleichen. Daraus erfolgt eine Diskussion über grundsätzliche Fragestellungen zu diesem Thema. Hierzu ist es vonnöten, zuerst auf das besondere Medium Internet, insbesondere auf das World Wide Web (WWW), einzugehen, sowie dessen Geschichte und Entstehung zu betrachten. Weiterhin soll ein besonderes Augenmerk auf die Datenmenge, die Datenstruktur und die Datentypen (hier vor allem im World Wide Web) gelegt werden. Da die daraus entstehenden Probleme für Erschließung und Retrieval, die Qualität und die Fluktuation der Angebote im Web eine wichtige Rolle im Rahmen der Archivierung von Internetressourcen darstellen, werden diese gesondert mittels kurzer Beschreibungen bestimmter Instrumente und Projekte zur Lösung derselben beschrieben. Hier finden insbesondere Suchmaschinen und Webkataloge, deren Arbeitsweise und Aufbau besondere Beachtung. Weiterhin sollen die "Virtuelle Bibliothek" und das "Dublin Core"- Projekt erläutert werden. Auf dieser Basis wird dann speziell auf das allgemeine Thema der Archivierung von Internetressourcen eingegangen. Ihre Grundgedanken und ihre Ziele sollen beschrieben und erste Diskussionsfragen und Diskrepanzen aufgezeigt werden. Ein besonderes Augenmerk gilt hier vor allem den technischen und rechtlichen Problemen, sowie Fragen des Jugendschutzes und der Zugänglichkeit zu mittlerweile verbotenen Inhalten. Einzelne Methoden der Archivierung, die vor allem im folgenden Teil anhand von Beispielen Beachtung finden, werden kurz vorgestellt. Im darauf folgenden Teil werden zwei Archivierungsprojekte detailliert beschrieben und analysiert. 
Einem einführenden Überblick über das jeweilige Projekt, folgen detaillierte Beschreibungen zu Projektverlauf, Philosophie und Vorgehensweise. Die Datenbasis und das Angebot, sowie die Funktionalitäten werden einer genauen Untersuchung unterzogen. Stärken und Schwächen werden genannt, und wenn möglich, untereinander verglichen. Hier ist vor allem auch die Frage von Bedeutung, ob das Angebot a) den Ansprüchen und Zielsetzungen des Anbieters genügt, und ob es b) den allgemeinen Grundfragen der Archivierung von Internetressourcen gleichkommt, die in Kapitel 3 genannt worden sind. Auf Basis aller Teile soll dann abschließend der derzeitige Stand im Themengebiet diskutiert werden. Die Arbeit schließt mit einer endgültigen Bewertung und alternativen Lösungen."
  17. Haslhofer, B.: ¬A Web-based mapping technique for establishing metadata interoperability (2008)
    Abstract
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, which is a qualitative property of metadata information objects that is not given by default. The technique of metadata mapping allows domain experts to establish metadata interoperability in a certain integration scenario. Mapping solutions, as a technical manifestation of this technique, are already available for the intensively studied domain of database system interoperability, but they rarely exist for the Web. If we consider the amount of steadily increasing structured metadata and corresponding metadata schemes on the Web, we can observe a clear need for a mapping solution that can operate in a Web-based environment. To achieve that, we first need to build its technical core, which is a mapping model that provides the language primitives to define mapping relationships. Existing Semantic Web languages such as RDFS and OWL define some basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of semantic and structural heterogeneities that can occur among distinct, incompatible metadata information objects. Furthermore, it is still unclear how to process defined mapping relationships during run-time in order to deliver metadata to the client in a uniform way. As the main contribution of this thesis, we present an abstract mapping model, which reflects the mapping problem on a generic level and provides the means for reconciling incompatible metadata. Instance transformation functions and URIs take a central role in that model. The former cover a broad spectrum of possible structural and semantic heterogeneities, while the latter bind the complete mapping model to the architecture of the World Wide Web. 
On the concrete, language-specific level we present a binding of the abstract mapping model for the RDF Vocabulary Description Language (RDFS), which allows us to create mapping specifications among incompatible metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic process that categorises the requirements a mapping solution should fulfil into four subsequent phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. In this thesis, we mainly focus on mapping representation and on the transformation of mapping specifications into executable SPARQL queries. For mapping discovery support, the model provides an interface for plugging in schema and ontology matching algorithms. For mapping maintenance we introduce the concept of a simple, but effective mapping registry. Based on the mapping model, we propose a Web-based mediator-wrapper architecture that allows domain experts to set up mediation endpoints that provide a uniform SPARQL query interface to a set of distributed metadata sources. The involved data sources are encapsulated by wrapper components that expose the contained metadata and the schema definitions on the Web and provide a SPARQL query interface to these metadata. In this thesis, we present the OAI2LOD Server, a wrapper component for integrating metadata that are accessible via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In a case study, we demonstrate how mappings can be created in a Web environment and how our mediator-wrapper architecture can easily be configured in order to integrate metadata from various heterogeneous data sources without the need to install any mapping solution or metadata integration solution in a local system environment.
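The interplay of property mappings and instance transformation functions described in the abstract can be illustrated with a minimal, purely hypothetical Python sketch (the field names, mapping rules, and helper functions below are illustrative assumptions, not the thesis's actual model or code):

```python
# Hypothetical sketch: reconciling a record from one metadata scheme
# into another via a mapping specification in which some rules are a
# simple property rename and others apply an instance transformation
# function to bridge a structural heterogeneity.

def split_name(value):
    # Structural transformation: one source field "Last, First"
    # becomes two target fields.
    last, first = [p.strip() for p in value.split(",", 1)]
    return {"familyName": last, "givenName": first}

# Each rule: source field -> (target field, optional transformation).
MAPPING = {
    "title":   ("dcterms:title", None),   # plain property mapping
    "creator": (None, split_name),        # structural transformation
}

def apply_mapping(record):
    target = {}
    for field, value in record.items():
        if field not in MAPPING:
            continue  # unmapped fields are dropped
        target_field, transform = MAPPING[field]
        if transform:
            target.update(transform(value))
        else:
            target[target_field] = value
    return target

source = {"title": "A Web-based mapping technique",
          "creator": "Haslhofer, Bernhard"}
print(apply_mapping(source))
# -> {'dcterms:title': 'A Web-based mapping technique',
#     'familyName': 'Haslhofer', 'givenName': 'Bernhard'}
```

In the thesis itself such mapping specifications are expressed against RDFS schemes and executed as SPARQL queries rather than as Python dictionaries; the sketch only shows the shape of the reconciliation step.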
    Content
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, a property that is not given by default. Metadata mapping techniques enable domain experts to establish metadata interoperability in a given integration context, and mapping solutions are meant to provide the necessary support. While such solutions already exist for the established field of interoperable databases, this is not the case for Web environments. Considering the steadily growing volume of structured metadata and metadata schemes on the Web, a need for Web-based mapping solutions becomes apparent. The core of such a solution is a mapping model that defines the language constructs required to specify mappings. Existing Semantic Web languages such as RDFS or OWL do offer basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but they do not address the full spectrum of semantic and structural heterogeneities that can occur between distinct, incompatible metadata objects. Moreover, technical approaches for turning previously defined mappings into executable queries are lacking. As the central scientific contribution of this dissertation, an abstract mapping model is presented that reflects the mapping problem on a generic level and offers approaches for reconciling incompatible schemes. Instance transformation functions and URIs take a central role in this model: the former bridge a broad spectrum of possible semantic and structural heterogeneities, while the latter embed the mapping model in the architecture of the World Wide Web. 
On a concrete, language-specific level, the binding of the abstract model to the RDF Vocabulary Description Language (RDFS) is presented, which makes it possible to map between different metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic mapping process that categorises the requirements for mapping solutions into four consecutive phases: mapping discovery, mapping representation, mapping execution and mapping maintenance. This dissertation focuses mainly on the representation phase and on the transformation of mapping specifications into executable SPARQL queries. To support the discovery phase, the mapping model offers an interface for integrating schema- or ontology-matching algorithms. For the maintenance phase, a simple but fit-for-purpose mapping-registry concept is presented. Based on the mapping model, a Web-based mediator-wrapper architecture is introduced that gives domain experts the ability to define SPARQL mediation interfaces. The data sources to be integrated must be encapsulated by wrapper components that expose the contained metadata on the Web and provide SPARQL access to them. As an exemplary wrapper component, the OAI2LOD Server is presented, with whose help data sources can be integrated that expose their metadata via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). A case study shows how mappings can be created in Web environments and how, after a few simple configuration steps, the mediator-wrapper architecture can integrate metadata from various heterogeneous data sources without the need to install a mapping or metadata-integration solution in a local system environment.
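The OAI-PMH harvesting step that the OAI2LOD Server wraps can be sketched with the Python standard library. The verb and metadataPrefix parameters come from the OAI-PMH 2.0 protocol itself; the endpoint URL and the canned response fragment are illustrative assumptions, and a real harvester would fetch the URL over HTTP and handle resumption tokens:

```python
import urllib.parse
import xml.etree.ElementTree as ET

# Hypothetical OAI-PMH endpoint (illustrative only).
BASE_URL = "http://example.org/oai"

def list_records_url(base_url, metadata_prefix="oai_dc"):
    # Build an OAI-PMH ListRecords request URL.
    query = urllib.parse.urlencode({
        "verb": "ListRecords",
        "metadataPrefix": metadata_prefix,
    })
    return f"{base_url}?{query}"

# A canned response fragment stands in for a live HTTP call.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:1</identifier></header>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

def harvest_identifiers(xml_text):
    # Extract record identifiers from a ListRecords response.
    root = ET.fromstring(xml_text)
    return [el.text for el in root.findall(".//oai:identifier", NS)]

print(list_records_url(BASE_URL))
# -> http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
print(harvest_identifiers(SAMPLE))   # -> ['oai:example.org:1']
```

In the architecture described above, the harvested metadata would additionally be exposed on the Web and made queryable via SPARQL by the wrapper component.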
    Footnote
    Doctoral dissertation (Doktor der technischen Wissenschaften), Universität Wien.
    Imprint
    Wien : Universität
  18. Amon, H.: Optimierung von Webseiten für Suchmaschinen und Kataloge : Empfehlungen zur Optimierung der Web-Seiten der Bibliothek und Dokumentation der Deutschen Gesellschaft für Auswärtige Politik (DGAP) (2004)
    Imprint
    Potsdam : Fachhochschule, Institut für Information und Dokumentation
  19. Nimz, B.: ¬Die Erschließung im Archiv- und Bibliothekswesen unter besonderer Berücksichtigung elektronischer Informationsträger : ein Vergleich im Interesse der Professionalisierung und Harmonisierung (2001)
    Abstract
    This study serves the professionalisation and harmonisation of description methods in archives and libraries. Description is the core of archival and library work and the basis for use by the interested public. Here, figuratively speaking, what is sown is later harvested in use, and the more conscientious the sowing, the richer the harvest. The field of documentation is included where this seems necessary for examining the integrative aspects of the information sciences. The main focus, however, is on archival science and on the relations between archival and library science. The study primarily treats national structures of the archive and library sectors as well as selected projects and tendencies. An exhaustive survey of all training programmes in the information sector is dispensed with, since the aim of this study is only to examine the integrative concepts in training. The aim of the publication is to offer contributions both to basic research and to applied research on the harmonisation and professionalisation of description in the information sciences. It can serve as a basis for interdisciplinary professional exchange, which further studies must follow. An attempt is made to accumulate and comment on knowledge from the fields of archives and libraries. Completeness is not claimed; emphasis is placed on representative examples, especially since rapid technological change inevitably causes technical details about electronic information carriers to become quickly outdated. What endures, however, are the theoretical considerations and abstract reflections, as well as the statements made on the addition, integration and separation of the information sciences. 
In the chapter "The Information Society", the study primarily examines the effects of the information society on archives and libraries, starting from a clarification of the terms "information" and "information society" and of information policy in the EU and in the Federal Republic of Germany.
    Footnote
    Rez. in: Bibliothek: Forschung und Praxis 28(2004) H.1, S.132-135 (H. Flachmann)
    Form
    Elektronische Dokumente
    RSWK
    Archiv / Bestandserschließung / Elektronische Medien (BVB)
    Bibliothek / Bestandserschließung / Elektronische Medien (BVB)
    Deutschland / Archivbestand / Bibliotheksbestand / Bestandserschließung / Harmonisierung / Elektronische Medien (BVB)
    Subject
    Archiv / Bestandserschließung / Elektronische Medien (BVB)
    Bibliothek / Bestandserschließung / Elektronische Medien (BVB)
    Deutschland / Archivbestand / Bibliotheksbestand / Bestandserschließung / Harmonisierung / Elektronische Medien (BVB)
  20. Tzitzikas, Y.: Collaborative ontology-based information indexing and retrieval (2002)
    Abstract
    An information system like the Web is a continuously evolving system consisting of multiple heterogeneous information sources, covering a wide domain of discourse, and a huge number of users (human or software) with diverse characteristics and needs, that produce and consume information. The challenge nowadays is to build a scalable information infrastructure enabling the effective, accurate, content-based retrieval of information, in a way that adapts to the characteristics and interests of the users. The aim of this work is to propose formally sound methods for building such an information network based on ontologies which are widely used and are easy to grasp by ordinary Web users. The main results of this work are: - A novel scheme for indexing and retrieving objects according to multiple aspects or facets. The proposed scheme is a faceted scheme enriched with a method for specifying the combinations of terms that are valid. We give a model-theoretic interpretation to this model and we provide mechanisms for inferring the valid combinations of terms. This inference service can be exploited for preventing errors during the indexing process, which is very important especially in the case where the indexing is done collaboratively by many users, and for deriving "complete" navigation trees suitable for browsing through the Web. The proposed scheme has several advantages over the hierarchical classification schemes currently employed by Web catalogs, namely, conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (the update operations can be formulated more easily and be performed more efficiently). - A flexible and efficient model for building mediators over ontology-based information sources. The proposed mediators support several modes of query translation and evaluation which can accommodate various application needs and levels of answer quality. 
The proposed model can be used for providing users with customized views of Web catalogs. It can also complement the techniques for building mediators over relational sources so as to support approximate translation of partially ordered domain values.
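The idea of a faceted scheme enriched with a specification of valid term combinations can be sketched in a few lines of Python. This is a hypothetical illustration, not the thesis's formal model: the facets, the example terms, and the convention of declaring *invalid* combinations (with everything else assumed valid) are all assumptions made for the sketch:

```python
# Hypothetical faceted indexing scheme with explicit validity control.
# An indexer describes an object with one term per facet; declared
# invalid combinations are rejected, preventing indexing errors.

FACETS = {
    "Sport":    {"Hiking", "Diving"},
    "Location": {"Alps", "Crete"},
}

# Declared-invalid compound terms; all other combinations of known
# terms are assumed valid.
INVALID = {frozenset({"Hiking", "Crete"})}

def is_valid(description):
    terms = set(description)
    known = set().union(*FACETS.values())
    if not terms <= known:
        return False  # unknown term: reject outright
    # Reject any description containing a declared-invalid combination.
    return not any(bad <= terms for bad in INVALID)

print(is_valid({"Hiking", "Alps"}))   # -> True
print(is_valid({"Hiking", "Crete"}))  # -> False
```

A check like this could run at indexing time, so that collaborating indexers cannot attach a meaningless compound term to an object; the navigation tree offered for browsing would likewise only expand valid combinations.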
