Search (69 results, page 1 of 4)

  • type_ss:"x"
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.39
    0.3917228 = product of:
      1.0968238 = sum of:
        0.095375985 = product of:
          0.28612795 = sum of:
            0.28612795 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.28612795 = score(doc=973,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
        0.28612795 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.28612795 = score(doc=973,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
        0.14306398 = product of:
          0.28612795 = sum of:
            0.28612795 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.28612795 = score(doc=973,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.5 = coord(1/2)
        0.28612795 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.28612795 = score(doc=973,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
        0.28612795 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.28612795 = score(doc=973,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
      0.35714287 = coord(5/14)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
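    The score breakdowns above are Lucene ClassicSimilarity explains: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and a coord factor scaling the sum of the matching clauses. A minimal Python sketch, using the figures from result 1 (doc 973), reproduces one leaf score and the top-level score; the explain output simply shows these products rounded:

      import math

      # One leaf of the explain tree for doc 973 (term "3a", freq=2.0).
      doc_freq, max_docs = 24, 44218
      freq, field_norm, query_norm = 2.0, 0.09375, 0.03002521

      idf = 1 + math.log(max_docs / (doc_freq + 1))   # 8.478011
      tf = math.sqrt(freq)                            # 1.4142135
      query_weight = idf * query_norm                 # 0.25455406
      field_weight = tf * idf * field_norm            # 1.1240361
      leaf = query_weight * field_weight              # 0.28612795

      # Top level: the five clause scores summed, then scaled by coord(5/14).
      clauses = [0.095375985, 0.28612795, 0.14306398, 0.28612795, 0.28612795]
      total = sum(clauses) * (5 / 14)                 # 0.3917228

      print(f"{leaf:.8f} {total:.7f}")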
  2. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.25
    0.24532785 = product of:
      0.4906557 = sum of:
        0.039739996 = product of:
          0.11921998 = sum of:
            0.11921998 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.11921998 = score(doc=4997,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
        0.11921998 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.11921998 = score(doc=4997,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.016822865 = weight(_text_:classification in 4997) [ClassicSimilarity], result of:
          0.016822865 = score(doc=4997,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.17593184 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.05960999 = product of:
          0.11921998 = sum of:
            0.11921998 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.11921998 = score(doc=4997,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.5 = coord(1/2)
        0.11921998 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.11921998 = score(doc=4997,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.016822865 = weight(_text_:classification in 4997) [ClassicSimilarity], result of:
          0.016822865 = score(doc=4997,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.17593184 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.11921998 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.11921998 = score(doc=4997,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
      0.5 = coord(7/14)
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - the ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web that enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy in which the concept of a child node is more specific than the concept of its parent node. Formal lightweight ontologies can be generated from informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the limited disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitations of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, the faceted lightweight ontology (FLO). An FLO is a lightweight ontology in which the terms present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of a group of concepts that helps to differentiate one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
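    As a toy illustration of the backbone-taxonomy property described above - each child concept more specific than its parent - a minimal sketch; the node labels are illustrative assumptions, not data from the dissertation:

      # Backbone taxonomy as a child -> parent map (labels invented).
      parent = {
          "computer science journals": "journals",
          "journals": "publications",
      }

      def more_specific(child, ancestor):
          """True if `ancestor` lies on the path from `child` to the root."""
          while child in parent:
              child = parent[child]
              if child == ancestor:
                  return True
          return False

      print(more_specific("computer science journals", "publications"))  # True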
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.22
    0.21602866 = product of:
      0.5040669 = sum of:
        0.031791996 = product of:
          0.095375985 = sum of:
            0.095375985 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.095375985 = score(doc=5820,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.13488202 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.13488202 = score(doc=5820,freq=4.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.047687992 = product of:
          0.095375985 = sum of:
            0.095375985 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.095375985 = score(doc=5820,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.5 = coord(1/2)
        0.13488202 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.13488202 = score(doc=5820,freq=4.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.13488202 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.13488202 = score(doc=5820,freq=4.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.019940836 = product of:
          0.039881673 = sum of:
            0.039881673 = weight(_text_:texts in 5820) [ClassicSimilarity], result of:
              0.039881673 = score(doc=5820,freq=2.0), product of:
                0.16460659 = queryWeight, product of:
                  5.4822793 = idf(docFreq=499, maxDocs=44218)
                  0.03002521 = queryNorm
                0.2422848 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4822793 = idf(docFreq=499, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.5 = coord(1/2)
      0.42857143 = coord(6/14)
    
    Abstract
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
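    A minimal sketch of the kind of hybrid ranking the proposal describes, linearly interpolating a word-based and an entity-based relevance score; the interpolation weight and the scores are illustrative assumptions, not the thesis's actual model:

      def hybrid_score(word_score, entity_score, lam=0.6):
          """Interpolate word-based and entity-based relevance scores."""
          return lam * word_score + (1 - lam) * entity_score

      # Rank two hypothetical documents by the combined score.
      docs = {"d1": (0.8, 0.2), "d2": (0.5, 0.9)}
      ranked = sorted(docs, key=lambda d: -hybrid_score(*docs[d]))
      print(ranked)  # ['d2', 'd1']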
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  4. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.21
    0.2064732 = product of:
      0.4817708 = sum of:
        0.14306398 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.14306398 = score(doc=563,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.02018744 = weight(_text_:classification in 563) [ClassicSimilarity], result of:
          0.02018744 = score(doc=563,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.14306398 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.14306398 = score(doc=563,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.02018744 = weight(_text_:classification in 563) [ClassicSimilarity], result of:
          0.02018744 = score(doc=563,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.14306398 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.14306398 = score(doc=563,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.0122040035 = product of:
          0.024408007 = sum of:
            0.024408007 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.024408007 = score(doc=563,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.42857143 = coord(6/14)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
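    The thesis's three new association measures are not reproduced here; purely as a stand-in, a minimal sketch of one classic association measure (pointwise mutual information) over adjacent word pairs shows the general shape of such extraction:

      import math
      from collections import Counter

      def pmi_bigrams(tokens, min_count=2):
          """Score adjacent word pairs by pointwise mutual information."""
          unigrams = Counter(tokens)
          bigrams = Counter(zip(tokens, tokens[1:]))
          n = len(tokens)
          scores = {}
          for (w1, w2), c in bigrams.items():
              if c >= min_count:
                  p_xy = c / (n - 1)
                  p_x, p_y = unigrams[w1] / n, unigrams[w2] / n
                  scores[(w1, w2)] = math.log(p_xy / (p_x * p_y))
          return sorted(scores.items(), key=lambda kv: -kv[1])

      tokens = "multi word term extraction finds multi word terms in text".split()
      print(pmi_bigrams(tokens))  # [(('multi', 'word'), ...)]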
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  5. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.16
    0.16321784 = product of:
      0.45700994 = sum of:
        0.039739996 = product of:
          0.11921998 = sum of:
            0.11921998 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.11921998 = score(doc=4388,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.11921998 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.11921998 = score(doc=4388,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.05960999 = product of:
          0.11921998 = sum of:
            0.11921998 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.11921998 = score(doc=4388,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.5 = coord(1/2)
        0.11921998 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.11921998 = score(doc=4388,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.11921998 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.11921998 = score(doc=4388,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.35714287 = coord(5/14)
    
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  6. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.16
    0.16321784 = product of:
      0.45700994 = sum of:
        0.039739996 = product of:
          0.11921998 = sum of:
            0.11921998 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.11921998 = score(doc=855,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.33333334 = coord(1/3)
        0.11921998 = weight(_text_:2f in 855) [ClassicSimilarity], result of:
          0.11921998 = score(doc=855,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
        0.05960999 = product of:
          0.11921998 = sum of:
            0.11921998 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.11921998 = score(doc=855,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.5 = coord(1/2)
        0.11921998 = weight(_text_:2f in 855) [ClassicSimilarity], result of:
          0.11921998 = score(doc=855,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
        0.11921998 = weight(_text_:2f in 855) [ClassicSimilarity], result of:
          0.11921998 = score(doc=855,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
      0.35714287 = coord(5/14)
    
    Content
    Cf. also: New automatic interpreter for complex UDC numbers. At: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf
  7. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.16
    0.16321784 = product of:
      0.45700994 = sum of:
        0.039739996 = product of:
          0.11921998 = sum of:
            0.11921998 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.11921998 = score(doc=1000,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.11921998 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.11921998 = score(doc=1000,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.05960999 = product of:
          0.11921998 = sum of:
            0.11921998 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.11921998 = score(doc=1000,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.5 = coord(1/2)
        0.11921998 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.11921998 = score(doc=1000,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.11921998 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.11921998 = score(doc=1000,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.35714287 = coord(5/14)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  8. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.13
    0.13057427 = product of:
      0.36560795 = sum of:
        0.031791996 = product of:
          0.095375985 = sum of:
            0.095375985 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.095375985 = score(doc=701,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.095375985 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.095375985 = score(doc=701,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.047687992 = product of:
          0.095375985 = sum of:
            0.095375985 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.095375985 = score(doc=701,freq=2.0), product of:
                0.25455406 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03002521 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.5 = coord(1/2)
        0.095375985 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.095375985 = score(doc=701,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.095375985 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.095375985 = score(doc=701,freq=2.0), product of:
            0.25455406 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03002521 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.35714287 = coord(5/14)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  9. Köbler, J.; Niederklapfer, T.: Kreuzkonkordanzen zwischen RVK-BK-MSC-PACS der Fachbereiche Mathematik und Physik (2010) 0.03
    0.030741926 = product of:
      0.10759674 = sum of:
        0.02546139 = weight(_text_:subject in 4408) [ClassicSimilarity], result of:
          0.02546139 = score(doc=4408,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.23709705 = fieldWeight in 4408, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=4408)
        0.03496567 = weight(_text_:classification in 4408) [ClassicSimilarity], result of:
          0.03496567 = score(doc=4408,freq=6.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.3656675 = fieldWeight in 4408, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=4408)
        0.03496567 = weight(_text_:classification in 4408) [ClassicSimilarity], result of:
          0.03496567 = score(doc=4408,freq=6.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.3656675 = fieldWeight in 4408, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=4408)
        0.0122040035 = product of:
          0.024408007 = sum of:
            0.024408007 = weight(_text_:22 in 4408) [ClassicSimilarity], result of:
              0.024408007 = score(doc=4408,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.23214069 = fieldWeight in 4408, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4408)
          0.5 = coord(1/2)
      0.2857143 = coord(4/14)
    
    Abstract
    Our project aims to create a cross-concordance between the universal classifications "Regensburger Verbundklassifikation (RVK)" and "Basisklassifikation (BK)" and the subject classifications "Mathematics Subject Classification (MSC2010)" and "Physics and Astronomy Classification Scheme (PACS2010)" in the fields of mathematics and physics. Conclusion: "The classificatory agreement between the Regensburger Verbundklassifikation and the Physics and Astronomy Classification Scheme was quite good in some subject areas (e.g. nuclear physics), but other areas (e.g. polymer physics, mineralogy) showed very little agreement. In total we were able to create 890 simple links; multiple links were not counted, for technical reasons. The project as a whole was very extensive and could therefore not be treated exhaustively within the twenty project days. Further development nevertheless appears worthwhile, particularly with regard to collective access via a web form and to automatic classification."
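    A minimal sketch of how such a cross-concordance can be represented and queried once built; the notations below are hypothetical examples, not entries from the project's actual mapping:

      # RVK notation -> equivalents in the other schemes (None = no match found).
      # All notations here are invented placeholders.
      crosswalk = {
          "SK 130": {"BK": "31.10", "MSC": "03-XX", "PACS": None},
          "UO 4000": {"BK": "33.40", "MSC": None, "PACS": "21.60.-n"},
      }

      def equivalents(rvk_notation):
          """Return the BK/MSC/PACS classes mapped to an RVK class, if any."""
          return crosswalk.get(rvk_notation, {})

      print(equivalents("SK 130"))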
    Pages
    22 p.
  10. Slavic-Overfield, A.: Classification management and use in a networked environment : the case of the Universal Decimal Classification (2005) 0.02
    0.01956973 = product of:
      0.09132541 = sum of:
        0.035607297 = weight(_text_:classification in 2191) [ClassicSimilarity], result of:
          0.035607297 = score(doc=2191,freq=14.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.37237754 = fieldWeight in 2191, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03125 = fieldNorm(doc=2191)
        0.020110816 = weight(_text_:bibliographic in 2191) [ClassicSimilarity], result of:
          0.020110816 = score(doc=2191,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.17204987 = fieldWeight in 2191, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=2191)
        0.035607297 = weight(_text_:classification in 2191) [ClassicSimilarity], result of:
          0.035607297 = score(doc=2191,freq=14.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.37237754 = fieldWeight in 2191, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03125 = fieldNorm(doc=2191)
      0.21428572 = coord(3/14)
    
    Abstract
    In the Internet information space, advanced information retrieval (IR) methods and automatic text processing are used in conjunction with traditional knowledge organization systems (KOS). New information technology provides a platform for better KOS publishing, exploitation and sharing, both for human and machine use. Networked KOS services are now being planned and developed as powerful tools for resource discovery. They will enable automatic contextualisation, interpretation and query matching to different indexing languages. The Semantic Web promises to be an environment in which the quality of semantic relationships in bibliographic classification systems can be fully exploited. Their use in the networked environment is, however, limited by the fact that they are not prepared or made available for advanced machine processing. The UDC was chosen for this research because of its widespread use and its long-term presence in online information retrieval systems. It was also the first system to be used for the automatic classification of Internet resources, and the first to be made available as a classification tool on the Web. The objective of this research is to establish the advantages of using UDC for information retrieval in a networked environment, to highlight the problems of automation and classification exchange, and to offer possible solutions. The first research question is: is there enough evidence of the use of classification on the Internet to justify further development with this particular environment in mind? The second: what are the automation requirements for the full exploitation of UDC and its exchange? The third: which areas are in need of improvement, and what specific recommendations can be made for implementing the UDC in a networked environment? A summary of changes required in the management and development of the UDC to facilitate its full adaptation for future use is drawn from this analysis.
  11. Adams, B.: Charles Ammi Cutters 'Expansive Classification' : eine kritische Darstellung (1965) 0.02
    0.019032901 = product of:
      0.1332303 = sum of:
        0.06661515 = weight(_text_:classification in 4943) [ClassicSimilarity], result of:
          0.06661515 = score(doc=4943,freq=4.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.69665456 = fieldWeight in 4943, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.109375 = fieldNorm(doc=4943)
        0.06661515 = weight(_text_:classification in 4943) [ClassicSimilarity], result of:
          0.06661515 = score(doc=4943,freq=4.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.69665456 = fieldWeight in 4943, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.109375 = fieldNorm(doc=4943)
      0.14285715 = coord(2/14)
    
    Object
    Cutter expansive classification
  12. Wille, J.: Automatisches Klassifizieren bibliographischer Beschreibungsdaten : Vorgehensweise und Ergebnisse (2006) 0.02
    0.017635275 = product of:
      0.08229795 = sum of:
        0.023552012 = weight(_text_:classification in 6090) [ClassicSimilarity], result of:
          0.023552012 = score(doc=6090,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.24630459 = fieldWeight in 6090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6090)
        0.035193928 = weight(_text_:bibliographic in 6090) [ClassicSimilarity], result of:
          0.035193928 = score(doc=6090,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.30108726 = fieldWeight in 6090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6090)
        0.023552012 = weight(_text_:classification in 6090) [ClassicSimilarity], result of:
          0.023552012 = score(doc=6090,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.24630459 = fieldWeight in 6090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6090)
      0.21428572 = coord(3/14)
    
    Abstract
    This thesis deals with the practical aspects of the automatic classification of bibliographic reference data. It focuses on the concrete procedure, based on the open-source program COBRA ("Classification Of Bibliographic Records, Automatic") developed specifically for this purpose. The framework conditions and parameters for deployment in a library environment are clarified. Finally, classification results are evaluated using social-science data from the SOLIS database as an example.
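    COBRA's actual method is not reproduced here; purely as an illustration of the task, a minimal keyword-based sketch that assigns classes to a bibliographic title (the class labels and keyword lists are invented):

      # Invented class labels and keyword lists for illustration only.
      rules = {
          "Sociology": {"soziologie", "gesellschaft", "sozial"},
          "Library science": {"bibliothek", "klassifikation", "katalog"},
      }

      def classify(title):
          """Assign every class whose keyword list overlaps the title words."""
          words = set(title.lower().split())
          return [label for label, keywords in rules.items() if words & keywords]

      print(classify("Soziologie der Bibliothek"))  # both classes match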
  13. Engbarth, M.: ¬Die Library of Congress Classification : Geschichte, Struktur, Verbreitung und Auswirkungen auf deutsche Bibliotheksklassifikationen (1980) 0.02
    0.015380906 = product of:
      0.107666336 = sum of:
        0.053833168 = weight(_text_:classification in 6784) [ClassicSimilarity], result of:
          0.053833168 = score(doc=6784,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.5629819 = fieldWeight in 6784, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.125 = fieldNorm(doc=6784)
        0.053833168 = weight(_text_:classification in 6784) [ClassicSimilarity], result of:
          0.053833168 = score(doc=6784,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.5629819 = fieldWeight in 6784, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.125 = fieldNorm(doc=6784)
      0.14285715 = coord(2/14)
    
  14. Urban, A.: ¬Die Dewey Decimal Classification als Normklassifikation : Untersuchungen zur Entwicklung und Verbreitung der DDC unter besonderer Berücksichtigung der zentralen Sacherschließung (1977) 0.02
    0.015380906 = product of:
      0.107666336 = sum of:
        0.053833168 = weight(_text_:classification in 6824) [ClassicSimilarity], result of:
          0.053833168 = score(doc=6824,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.5629819 = fieldWeight in 6824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.125 = fieldNorm(doc=6824)
        0.053833168 = weight(_text_:classification in 6824) [ClassicSimilarity], result of:
          0.053833168 = score(doc=6824,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.5629819 = fieldWeight in 6824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.125 = fieldNorm(doc=6824)
      0.14285715 = coord(2/14)
    
  15. Thielemann, A.: Sacherschließung für die Kunstgeschichte : Möglichkeiten und Grenzen von DDC 700: The Arts (2007) 0.02
    0.015022537 = product of:
      0.07010517 = sum of:
        0.026916584 = weight(_text_:classification in 1409) [ClassicSimilarity], result of:
          0.026916584 = score(doc=1409,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.28149095 = fieldWeight in 1409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0625 = fieldNorm(doc=1409)
        0.026916584 = weight(_text_:classification in 1409) [ClassicSimilarity], result of:
          0.026916584 = score(doc=1409,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.28149095 = fieldWeight in 1409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0625 = fieldNorm(doc=1409)
        0.016272005 = product of:
          0.03254401 = sum of:
            0.03254401 = weight(_text_:22 in 1409) [ClassicSimilarity], result of:
              0.03254401 = score(doc=1409,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.30952093 = fieldWeight in 1409, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1409)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Abstract
    Following the publication of a German translation of the Dewey Decimal Classification 22 in October 2005 and its use for subject indexing in the Deutsche Nationalbibliographie since January 2006, German art-history research libraries face the question of a possible adoption of the DDC and of its general suitability for the subject indexing of art-historical publications. This question is discussed against the background of the existing library structures for art history, and with a view to the field's particular subject matter, research methodology and publishing traditions.
  16. Engbarth, M.: ¬Die Library of Congress Classification : Geschichte, Struktur, Verbreitung und Auswirkungen auf deutsche Bibliotheksklassifikationen (1980) 0.01
    0.013458293 = product of:
      0.09420805 = sum of:
        0.047104023 = weight(_text_:classification in 4954) [ClassicSimilarity], result of:
          0.047104023 = score(doc=4954,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.49260917 = fieldWeight in 4954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.109375 = fieldNorm(doc=4954)
        0.047104023 = weight(_text_:classification in 4954) [ClassicSimilarity], result of:
          0.047104023 = score(doc=4954,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.49260917 = fieldWeight in 4954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.109375 = fieldNorm(doc=4954)
      0.14285715 = coord(2/14)
    
  17. Haßelmeier, B.: ¬Die Dewey Decimal Classification : Eine Einführung im Zusammenhang mit dem Projekt "DDC Deutsch" (2004) 0.01
    0.013458293 = product of:
      0.09420805 = sum of:
        0.047104023 = weight(_text_:classification in 2880) [ClassicSimilarity], result of:
          0.047104023 = score(doc=2880,freq=8.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.49260917 = fieldWeight in 2880, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2880)
        0.047104023 = weight(_text_:classification in 2880) [ClassicSimilarity], result of:
          0.047104023 = score(doc=2880,freq=8.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.49260917 = fieldWeight in 2880, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2880)
      0.14285715 = coord(2/14)
    
    Abstract
    In connection with the translation project "Dewey Decimal Classification Deutsch", this thesis provides an introduction to the origins, application, structure and publication forms of the American original. First, some basic concepts of general classification theory are explained. Next, the function and significance of classifications for libraries and documentation are outlined. The third chapter then discusses the Dewey Decimal Classification in detail, before the final chapter describes the goals of the "DDC Deutsch" project and the difficulties and challenges involved in the translation. The thesis closes with an outlook on the opportunities that a translation of the Dewey Decimal Classification opens up for libraries and documentation in Germany.
  18. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten (2001) 0.01
    0.010761541 = product of:
      0.07533079 = sum of:
        0.05092278 = weight(_text_:subject in 4865) [ClassicSimilarity], result of:
          0.05092278 = score(doc=4865,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.4741941 = fieldWeight in 4865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.09375 = fieldNorm(doc=4865)
        0.024408007 = product of:
          0.048816014 = sum of:
            0.048816014 = weight(_text_:22 in 4865) [ClassicSimilarity], result of:
              0.048816014 = score(doc=4865,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.46428138 = fieldWeight in 4865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4865)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Date
    22. 6.2002 19:41:59
  19. Francu, V.: Multilingual access to information using an intermediate language (2003) 0.01
    0.010173514 = product of:
      0.071214594 = sum of:
        0.035607297 = weight(_text_:classification in 1742) [ClassicSimilarity], result of:
          0.035607297 = score(doc=1742,freq=14.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.37237754 = fieldWeight in 1742, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03125 = fieldNorm(doc=1742)
        0.035607297 = weight(_text_:classification in 1742) [ClassicSimilarity], result of:
          0.035607297 = score(doc=1742,freq=14.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.37237754 = fieldWeight in 1742, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03125 = fieldNorm(doc=1742)
      0.14285715 = coord(2/14)
    
    Abstract
    While theoretically so widely available, information can be kept from more general use by linguistic barriers. The linguistic aspects of indexing languages, and particularly the chances of enhancing access to information by means of multilingual access facilities, form the substance of this thesis. The main problem of this research is thus to demonstrate that information retrieval can be improved by searching with multilingual thesaurus terms based on an intermediate or switching language. Universal classification systems in general can play the role of switching languages, for reasons dealt with in the forthcoming pages. The Universal Decimal Classification (UDC) in particular is the classification system used here as an example of a switching language. The question may arise: why a universal classification system and not another thesaurus? Because the UDC, like most classification systems, uses symbols. It is therefore language-independent, and the problems of compatibility between such a thesaurus and other thesauri in different languages are avoided. Another question may still arise: why not, then, assign running numbers to the descriptors in a thesaurus and make a switching language out of the resulting enumerative system? Because of some other characteristics of the UDC: hierarchical structure and terminological richness, consistency and control. One big problem to answer is: can a thesaurus be built on a classification system in any and all its parts, and to what extent can this question be given an affirmative answer? This depends much on the attributes of the universal classification system that can be favourably used for this purpose. Examples of different situations will be given and discussed, beginning with those classes of UDC which are best fitted for building a thesaurus structure out of them (classes which are both hierarchical and faceted)...
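    A minimal sketch of the switching-language idea: descriptors in several languages resolve to the same language-independent UDC notation, so a query in one language can reach documents indexed in another. The UDC numbers are real main classes, but the term lists are illustrative assumptions:

      # UDC notation -> descriptors per language (term lists invented).
      udc_index = {
          "53": {"en": "physics", "fr": "physique", "ro": "fizica"},
          "511": {"en": "number theory", "fr": "theorie des nombres", "ro": "teoria numerelor"},
      }

      def to_notation(term):
          """Map a descriptor in any indexed language to its UDC notation."""
          return [udc for udc, labels in udc_index.items() if term in labels.values()]

      # A French query reaches documents indexed with English descriptors,
      # because both resolve to the same language-independent notation.
      print(to_notation("physique"))  # ['53']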
  20. Tzitzikas, Y.: Collaborative ontology-based information indexing and retrieval (2002) 0.01
    0.009839063 = product of:
      0.045915626 = sum of:
        0.013458292 = weight(_text_:classification in 2281) [ClassicSimilarity], result of:
          0.013458292 = score(doc=2281,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.14074548 = fieldWeight in 2281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03125 = fieldNorm(doc=2281)
        0.01899904 = product of:
          0.03799808 = sum of:
            0.03799808 = weight(_text_:schemes in 2281) [ClassicSimilarity], result of:
              0.03799808 = score(doc=2281,freq=2.0), product of:
                0.16067243 = queryWeight, product of:
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03002521 = queryNorm
                0.2364941 = fieldWeight in 2281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3512506 = idf(docFreq=569, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2281)
          0.5 = coord(1/2)
        0.013458292 = weight(_text_:classification in 2281) [ClassicSimilarity], result of:
          0.013458292 = score(doc=2281,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.14074548 = fieldWeight in 2281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03125 = fieldNorm(doc=2281)
      0.21428572 = coord(3/14)
    
    Abstract
    An information system like the Web is a continuously evolving system consisting of multiple heterogeneous information sources, covering a wide domain of discourse, and a huge number of users (human or software) with diverse characteristics and needs, who produce and consume information. The challenge nowadays is to build a scalable information infrastructure enabling the effective, accurate, content-based retrieval of information, in a way that adapts to the characteristics and interests of the users. The aim of this work is to propose formally sound methods for building such an information network based on ontologies which are widely used and are easy to grasp by ordinary Web users. The main results of this work are: - A novel scheme for indexing and retrieving objects according to multiple aspects or facets. The proposed scheme is a faceted scheme enriched with a method for specifying the combinations of terms that are valid. We give a model-theoretic interpretation to this model and we provide mechanisms for inferring the valid combinations of terms. This inference service can be exploited for preventing errors during the indexing process, which is very important especially in the case where the indexing is done collaboratively by many users, and for deriving "complete" navigation trees suitable for browsing through the Web. The proposed scheme has several advantages over the hierarchical classification schemes currently employed by Web catalogs, namely, conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (the update operations can be formulated more easily and be performed more efficiently). - A flexible and efficient model for building mediators over ontology-based information sources. The proposed mediators support several modes of query translation and evaluation which can accommodate various application needs and levels of answer quality. The proposed model can be used for providing users with customized views of Web catalogs. It can also complement the techniques for building mediators over relational sources so as to support approximate translation of partially ordered domain values.
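    A minimal sketch of the validity-checking idea for faceted indexing described above, assuming invented facet names and one hand-listed invalid combination; the thesis instead provides mechanisms for inferring the valid combinations:

      # Invented facets and an invalid combination for illustration only.
      facets = {
          "Location": {"Crete", "Athens"},
          "Activity": {"diving", "museum"},
      }
      invalid = {("Athens", "diving")}

      def valid_combination(location, activity):
          """Reject combinations outside the facets or explicitly ruled invalid."""
          if location not in facets["Location"] or activity not in facets["Activity"]:
              return False
          return (location, activity) not in invalid

      print(valid_combination("Crete", "diving"))   # True
      print(valid_combination("Athens", "diving"))  # False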

Languages

  • d 46
  • e 21
  • f 1
  • hu 1