Search (538 results, page 1 of 27)

  • type_ss:"x"
  1. Hartwieg, U.: Die nationalbibliographische Situation im 18. Jahrhundert : Vorüberlegungen zur Verzeichnung der deutschen Drucke in einem VD18 (1999) 0.05
    0.051014483 = product of:
      0.08502413 = sum of:
        0.005572672 = weight(_text_:s in 3813) [ClassicSimilarity], result of:
          0.005572672 = score(doc=3813,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.14414869 = fieldWeight in 3813, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.09375 = fieldNorm(doc=3813)
        0.050546348 = weight(_text_:u in 3813) [ClassicSimilarity], result of:
          0.050546348 = score(doc=3813,freq=2.0), product of:
            0.116430275 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.035557263 = queryNorm
            0.43413407 = fieldWeight in 3813, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.09375 = fieldNorm(doc=3813)
        0.028905109 = product of:
          0.057810217 = sum of:
            0.057810217 = weight(_text_:22 in 3813) [ClassicSimilarity], result of:
              0.057810217 = score(doc=3813,freq=2.0), product of:
                0.124515474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035557263 = queryNorm
                0.46428138 = fieldWeight in 3813, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3813)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Date
    18. 6.1999 9:22:36
    Pages
    xxx S
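Each score breakdown above is Lucene's explain() output for ClassicSimilarity (TF-IDF). As a cross-check, here is a minimal Python sketch that reproduces result 1's numbers from the primitives printed in the tree: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, and coord factors for partially matched clauses. All constants are taken directly from the listing; nothing else is assumed.

```python
import math

# Lucene ClassicSimilarity primitives, as used in the explain trees above.
def tf(freq):
    return math.sqrt(freq)                        # 1.4142135 for freq=2.0

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

QUERY_NORM = 0.035557263                          # constant for this query

def term_score(doc_freq, freq, field_norm, max_docs=44218):
    """queryWeight * fieldWeight for one term in one document."""
    i = idf(doc_freq, max_docs)
    query_weight = i * QUERY_NORM
    field_weight = tf(freq) * i * field_norm
    return query_weight * field_weight

# Result 1 (doc 3813): terms "s", "u", and "22"; the "22" clause sits
# inside a sub-query where only 1 of 2 clauses matched (coord(1/2)),
# and the whole sum is scaled by the top-level coord(3/5) = 0.6.
w_s  = term_score(doc_freq=40523, freq=2.0, field_norm=0.09375)
w_u  = term_score(doc_freq=4547,  freq=2.0, field_norm=0.09375)
w_22 = term_score(doc_freq=3622,  freq=2.0, field_norm=0.09375) * 0.5
score = (w_s + w_u + w_22) * 0.6                  # ≈ 0.051014483
```

Every intermediate value (idf ≈ 3.2744443 for "u", fieldWeight ≈ 0.43413407, term weight ≈ 0.050546348) matches the corresponding line of the explain tree to float precision.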
  2. Babiak, U.: Sacherschließungsmethoden im Internet (1994) 0.03
    0.029930148 = product of:
      0.07482537 = sum of:
        0.00743023 = weight(_text_:s in 8126) [ClassicSimilarity], result of:
          0.00743023 = score(doc=8126,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.19219826 = fieldWeight in 8126, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.125 = fieldNorm(doc=8126)
        0.067395136 = weight(_text_:u in 8126) [ClassicSimilarity], result of:
          0.067395136 = score(doc=8126,freq=2.0), product of:
            0.116430275 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.035557263 = queryNorm
            0.57884544 = fieldWeight in 8126, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.125 = fieldNorm(doc=8126)
      0.4 = coord(2/5)
    
    Pages
    40 S
  3. Steierwald, U.: Wissen und System : zu Gottfried Wilhelm Leibniz' Theorie einer Universalbibliothek (1994) 0.03
    0.029930148 = product of:
      0.07482537 = sum of:
        0.00743023 = weight(_text_:s in 8794) [ClassicSimilarity], result of:
          0.00743023 = score(doc=8794,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.19219826 = fieldWeight in 8794, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.125 = fieldNorm(doc=8794)
        0.067395136 = weight(_text_:u in 8794) [ClassicSimilarity], result of:
          0.067395136 = score(doc=8794,freq=2.0), product of:
            0.116430275 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.035557263 = queryNorm
            0.57884544 = fieldWeight in 8794, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.125 = fieldNorm(doc=8794)
      0.4 = coord(2/5)
    
    Pages
    116 S
  4. Ribbert, U.: Terminologiekontrolle in der Schlagwortnormdatei (1989) 0.03
    0.026188878 = product of:
      0.06547219 = sum of:
        0.0065014507 = weight(_text_:s in 642) [ClassicSimilarity], result of:
          0.0065014507 = score(doc=642,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.16817348 = fieldWeight in 642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.109375 = fieldNorm(doc=642)
        0.058970742 = weight(_text_:u in 642) [ClassicSimilarity], result of:
          0.058970742 = score(doc=642,freq=2.0), product of:
            0.116430275 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.035557263 = queryNorm
            0.50648975 = fieldWeight in 642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.109375 = fieldNorm(doc=642)
      0.4 = coord(2/5)
    
    Footnote
    Hausarbeit. - Zusammenfassung erschienen in: Bibliothek: Forschung und Praxis 16(1992) S.9-25.
  5. Mühlschlegel, U.: Elektronische Wörterbücher in Bibliotheken : eine Alternative zu den Druckausgaben? (2001) 0.03
    0.026188878 = product of:
      0.06547219 = sum of:
        0.0065014507 = weight(_text_:s in 5733) [ClassicSimilarity], result of:
          0.0065014507 = score(doc=5733,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.16817348 = fieldWeight in 5733, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.109375 = fieldNorm(doc=5733)
        0.058970742 = weight(_text_:u in 5733) [ClassicSimilarity], result of:
          0.058970742 = score(doc=5733,freq=2.0), product of:
            0.116430275 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.035557263 = queryNorm
            0.50648975 = fieldWeight in 5733, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.109375 = fieldNorm(doc=5733)
      0.4 = coord(2/5)
    
    Pages
    xxx S
  6. Roll, U.: Informationsangebot und Informationsnachfrage zu ausgewählten Gebieten der Ökologie und des Umweltschutzes (2004) 0.03
    0.026188878 = product of:
      0.06547219 = sum of:
        0.0065014507 = weight(_text_:s in 4631) [ClassicSimilarity], result of:
          0.0065014507 = score(doc=4631,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.16817348 = fieldWeight in 4631, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.109375 = fieldNorm(doc=4631)
        0.058970742 = weight(_text_:u in 4631) [ClassicSimilarity], result of:
          0.058970742 = score(doc=4631,freq=2.0), product of:
            0.116430275 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.035557263 = queryNorm
            0.50648975 = fieldWeight in 4631, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.109375 = fieldNorm(doc=4631)
      0.4 = coord(2/5)
    
    Pages
    VI, 78 S
  7. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.02
    0.024818813 = product of:
      0.06204703 = sum of:
        0.056474358 = product of:
          0.33884615 = sum of:
            0.33884615 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.33884615 = score(doc=973,freq=2.0), product of:
                0.30145487 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.035557263 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.16666667 = coord(1/6)
        0.005572672 = weight(_text_:s in 973) [ClassicSimilarity], result of:
          0.005572672 = score(doc=973,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.14414869 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
      0.4 = coord(2/5)
    
    Content
    Vgl.: http://creativechoice.org/doc/HansJonas.pdf.
    Pages
    222 S
  8. Scholle, U.: Zu benutzerorientierten Informationsdienstleistungen an Bibliotheken : unter Berücksichtigung des Projekts 'Gewalt an Schulen' der Stadtbibliothek Köln als Beispiel einer aktiven Literaturvermittlung (1997) 0.02
    0.022447608 = product of:
      0.05611902 = sum of:
        0.005572672 = weight(_text_:s in 782) [ClassicSimilarity], result of:
          0.005572672 = score(doc=782,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.14414869 = fieldWeight in 782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.09375 = fieldNorm(doc=782)
        0.050546348 = weight(_text_:u in 782) [ClassicSimilarity], result of:
          0.050546348 = score(doc=782,freq=2.0), product of:
            0.116430275 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.035557263 = queryNorm
            0.43413407 = fieldWeight in 782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.09375 = fieldNorm(doc=782)
      0.4 = coord(2/5)
    
    Pages
    xxx S
  9. Gordon, T.J.; Helmer-Hirschberg, O.: Report on a long-range forecasting study (1964) 0.02
    0.022125818 = product of:
      0.03687636 = sum of:
        0.003715115 = weight(_text_:s in 4204) [ClassicSimilarity], result of:
          0.003715115 = score(doc=4204,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.09609913 = fieldWeight in 4204, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=4204)
        0.005909249 = weight(_text_:a in 4204) [ClassicSimilarity], result of:
          0.005909249 = score(doc=4204,freq=4.0), product of:
            0.040999193 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.035557263 = queryNorm
            0.14413087 = fieldWeight in 4204, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4204)
        0.027251998 = product of:
          0.054503996 = sum of:
            0.054503996 = weight(_text_:22 in 4204) [ClassicSimilarity], result of:
              0.054503996 = score(doc=4204,freq=4.0), product of:
                0.124515474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035557263 = queryNorm
                0.4377287 = fieldWeight in 4204, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4204)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Description of an experimental trend-predicting exercise covering a time period as far as 50 years into the future. The Delphi technique is used in soliciting the opinions of experts in six areas: scientific breakthroughs, population growth, automation, space progress, probability and prevention of war, and future weapon systems. Possible objections to the approach are also discussed.
    Date
    22. 6.2018 13:24:08
    22. 6.2018 13:54:52
    Pages
    xi, 65 S
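Result 9 is the first hit where a term matches more than twice (freq=4.0 for "22"), which makes the sub-linear tf damping visible: the weight grows with sqrt(freq), not with the raw frequency. A sketch of that single-term contribution, using only constants from the listing above:

```python
import math

# The "22" term in result 9 (doc 4204) matches four times, so
# tf = sqrt(4.0) = 2.0 rather than sqrt(2.0) = 1.4142135:
# doubling the matches raises tf by sqrt(2), not by 2.
idf_22 = 1.0 + math.log(44218 / (3622 + 1))       # ≈ 3.5018296
query_weight = idf_22 * 0.035557263               # ≈ 0.124515474
field_weight = math.sqrt(4.0) * idf_22 * 0.0625   # tf * idf * fieldNorm ≈ 0.4377287
w_22 = query_weight * field_weight * 0.5          # inner coord(1/2) ≈ 0.027251998
```

The product matches the 0.027251998 line in the tree; the document total then multiplies the three term weights' sum by the top-level coord(3/5).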
  10. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.02
    0.020939754 = product of:
      0.03489959 = sum of:
        0.023530986 = product of:
          0.14118591 = sum of:
            0.14118591 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.14118591 = score(doc=4997,freq=2.0), product of:
                0.30145487 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.035557263 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.16666667 = coord(1/6)
        0.002321947 = weight(_text_:s in 4997) [ClassicSimilarity], result of:
          0.002321947 = score(doc=4997,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.060061958 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.009046654 = weight(_text_:a in 4997) [ClassicSimilarity], result of:
          0.009046654 = score(doc=4997,freq=24.0), product of:
            0.040999193 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.035557263 = queryNorm
            0.22065444 = fieldWeight in 4997, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
      0.6 = coord(3/5)
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state of the art NLP tools used in generating formal lightweight ontologies from their informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, faceted lightweight ontology (FLO). FLO is a lightweight ontology in which terms, present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of the groups of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
    PhD Dissertation at International Doctorate School in Information and Communication Technology. Vgl.: https://core.ac.uk/download/pdf/150083013.pdf.
    Pages
    XIV, 140 S
  11. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.02
    0.01934993 = product of:
      0.032249883 = sum of:
        0.023530986 = product of:
          0.14118591 = sum of:
            0.14118591 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.14118591 = score(doc=855,freq=2.0), product of:
                0.30145487 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.035557263 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.16666667 = coord(1/6)
        0.002321947 = weight(_text_:s in 855) [ClassicSimilarity], result of:
          0.002321947 = score(doc=855,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.060061958 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
        0.00639695 = weight(_text_:a in 855) [ClassicSimilarity], result of:
          0.00639695 = score(doc=855,freq=12.0), product of:
            0.040999193 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.035557263 = queryNorm
            0.15602624 = fieldWeight in 855, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
      0.6 = coord(3/5)
    
    Abstract
    Converting UDC numbers manually to a complex format such as the one mentioned above is an unrealistic expectation; supporting the building of these representations, as far as possible automatically, is a well-founded requirement. An additional advantage of this approach is that existing records could also be processed and converted. In my dissertation I would like to prove also that it is possible to design and implement an algorithm that is able to convert pre-coordinated UDC numbers into the introduced format by identifying all their elements and revealing their whole syntactic structure as well. In my dissertation I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support building, retrieving, and analyzing UDC numbers without compromises, taking in the whole syntactic richness of the scheme and storing the UDC numbers in a way that preserves the meaning of pre-coordination. The research has also included the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without additional effort, and even retrospectively on existing collections.
    Content
    Vgl. auch: New automatic interpreter for complex UDC numbers. Unter: <https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf>
    Pages
    48 S
  12. Krömmelbein, U.: Schlagwort-Syntax : linguistische und fachwissenschaftliche Gesichtspunkte. Eine vergleichende Untersuchung der Regeln für die Schlagwortvergabe der Deutschen Bibliothek, RSWK, Voll-PRECIS und Kurz-PRECIS (1983) 0.02
    0.018706342 = product of:
      0.046765853 = sum of:
        0.004643894 = weight(_text_:s in 2566) [ClassicSimilarity], result of:
          0.004643894 = score(doc=2566,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.120123915 = fieldWeight in 2566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.078125 = fieldNorm(doc=2566)
        0.042121958 = weight(_text_:u in 2566) [ClassicSimilarity], result of:
          0.042121958 = score(doc=2566,freq=2.0), product of:
            0.116430275 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.035557263 = queryNorm
            0.3617784 = fieldWeight in 2566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=2566)
      0.4 = coord(2/5)
    
    Footnote
    Examensarbeit Höherer Dienst an der FHBD in Köln. - Auch veröffentlicht in: Bibliothek: Forschung und Praxis 8(1984) S.159-203
  13. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.017727733 = product of:
      0.02954622 = sum of:
        0.018824788 = product of:
          0.11294872 = sum of:
            0.11294872 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.11294872 = score(doc=701,freq=2.0), product of:
                0.30145487 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.035557263 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.16666667 = coord(1/6)
        0.0018575575 = weight(_text_:s in 701) [ClassicSimilarity], result of:
          0.0018575575 = score(doc=701,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.048049565 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.008863874 = weight(_text_:a in 701) [ClassicSimilarity], result of:
          0.008863874 = score(doc=701,freq=36.0), product of:
            0.040999193 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.035557263 = queryNorm
            0.2161963 = fieldWeight in 701, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.6 = coord(3/5)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information-overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, and not its representation). This leads to very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, due to unfamiliarity with the underlying repository and/or query syntax, merely approximates his information need in a query, implies the necessity to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of a user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, whereas the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query-evaluation process into a highly interactive cooperation between a user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the possibilities to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly are key issues for realizing much more meaningful information retrieval systems.
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
    Pages
    249 S
  14. Engel, F.: Expertensuche in semantisch integrierten Datenbeständen (2015) 0.02
    0.017183978 = product of:
      0.028639961 = sum of:
        0.0018575575 = weight(_text_:s in 2283) [ClassicSimilarity], result of:
          0.0018575575 = score(doc=2283,freq=2.0), product of:
            0.038659193 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.035557263 = queryNorm
            0.048049565 = fieldWeight in 2283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=2283)
        0.023827778 = weight(_text_:u in 2283) [ClassicSimilarity], result of:
          0.023827778 = score(doc=2283,freq=4.0), product of:
            0.116430275 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.035557263 = queryNorm
            0.20465277 = fieldWeight in 2283, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03125 = fieldNorm(doc=2283)
        0.0029546246 = weight(_text_:a in 2283) [ClassicSimilarity], result of:
          0.0029546246 = score(doc=2283,freq=4.0), product of:
            0.040999193 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.035557263 = queryNorm
            0.072065435 = fieldWeight in 2283, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2283)
      0.6 = coord(3/5)
    
    Abstract
    Wissen ist das intellektuelle Kapital eines Unternehmens und der effektive Zugriff darauf entscheidend für die Anpassungsfähigkeit und Innovationskraft. Eine häufig angewandte Lösung für den erfolgreichen Zugriff auf diese Wissensressourcen ist die Umsetzung der Expertensuche in den Daten der verteilten Informationssysteme des Unternehmens. Aktuelle Expertensuchverfahren berücksichtigen zur Berechnung der Relevanz eines Kandidaten zumeist nur die Information aus Datenquellen (u. a. E-Mails oder Publikationen eines Kandidaten), über die eine Verbindung zwischen dem Thema der Frage und einem Kandidaten hergestellt werden kann. Die aus den Datenquellen gewonnene Information fließt dann gewichtet in die Relevanzbewertung ein. Analysen aus dem Fachbereich Wissensmanagement zeigen jedoch, dass neben dem Themenbezug auch noch weitere Kriterien Einfluss auf die Auswahl eines Experten in einer Expertensuche haben können (u. a. der Bekanntheitsgrad zwischen dem Suchenden und Kandidat). Um eine optimale Gewichtung der unterschiedlichen Bestandteile und Quellen, aus denen sich die Berechnung der Relevanz speist, zu finden, werden in aktuellen Anwendungen zur Suche nach Dokumenten oder zur Suche im Web verschiedene Verfahren aus dem Umfeld des maschinellen Lernens eingesetzt. Jedoch existieren derzeit nur sehr wenige Arbeiten zur Beantwortung der Frage, wie gut sich diese Verfahren eignen, um auch in der Expertensuche verschiedene Bestandteile der Relevanzbestimmung optimal zusammenzuführen. Informationssysteme eines Unternehmens können komplex sein und auf einer verteilten Datenhaltung basieren. Zunehmend finden Technologien aus dem Umfeld des Semantic Web Akzeptanz in Unternehmen, um eine einheitliche Zugriffsschnittstelle auf den verteilten Datenbestand zu gewährleisten. Der Zugriff auf eine derartige Zugriffsschnittstelle erfolgt dabei über Abfragesprachen, welche lediglich eine alphanumerische Sortierung der Rückgabe erlauben, jedoch keinen Rückschluss auf die Relevanz der gefundenen Objekte zulassen. Für die Suche nach Experten in einem derartig aufbereiteten Datenbestand bedarf es zusätzlicher Berechnungsverfahren, die einen Rückschluss auf den Relevanzwert eines Kandidaten ermöglichen. In dieser Arbeit soll zum einen ein Beitrag geleistet werden, der die Anwendbarkeit lernender Verfahren zur effektiven Aggregation unterschiedlicher Kriterien in der Suche nach Experten zeigt. Zum anderen soll in dieser Arbeit nach Möglichkeiten geforscht werden, wie die Relevanz eines Kandidaten über Zugriffsschnittstellen berechnet werden kann, die auf Technologien aus dem Umfeld des Semantic Web basieren.
    Pages
    XVI, 189 S
  15. Schneider, A.: ¬Die Verzeichnung und sachliche Erschließung der Belletristik in Kaysers Bücherlexikon und im Schlagwortkatalog Georg/Ost (1980) 0.02
    Date
    5. 8.2006 13:07:22
  16. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.02
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words offers only a shallow understanding of text; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and the internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word based and entity based representations together, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Pages
    iii, 82 S
  17. Thielemann, A.: Sacherschließung für die Kunstgeschichte : Möglichkeiten und Grenzen von DDC 700: The Arts (2007) 0.02
    Abstract
    Following the publication of a German translation of the Dewey Decimal Classification 22 in October 2005 and its use for subject indexing in the Deutsche Nationalbibliographie since January 2006, the German art-historical research libraries face the question of whether the DDC could be adopted and whether it is suitable in general for the subject indexing of art-historical publications. This question is discussed against the background of the existing library structures for art history, as well as with regard to the subject's particular content, its research methodology, and its publishing traditions.
    Pages
    78 S
  18. Menges, T.: Möglichkeiten und Grenzen der Übertragbarkeit eines Buches auf Hypertext am Beispiel einer französischen Grundgrammatik (Klein; Kleineidam) (1997) 0.02
    Date
    22. 7.1998 18:23:25
    Pages
    II, 51 S
  19. Sperling, R.: Anlage von Literaturreferenzen für Onlineressourcen auf einer virtuellen Lernplattform (2004) 0.02
    Date
    26.11.2005 18:39:22
    Pages
    49 S
  20. Reinke, U.: ¬Der Austausch terminologischer Daten (1993) 0.01
    Pages
    148 S

Languages

  • d 483
  • e 45
  • f 3
  • a 1
  • hu 1
  • pt 1

Types

  • el 29
  • m 22
  • r 2
  • a 1
