Search (86 results, page 1 of 5)

  • Filter: type_ss:"x"
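
The relevance figure shown after each hit comes from Lucene's ClassicSimilarity, the classic TF-IDF ranking model (named explicitly in the index's explain output). As a sketch reconstructed from the explain values retained for hit 1 below, the score of document d for query q is

\[ \operatorname{score}(q,d) = \operatorname{coord}(q,d) \cdot \sum_{t \in q} \sqrt{\operatorname{freq}(t,d)} \cdot \operatorname{idf}(t)^{2} \cdot \operatorname{queryNorm}(q) \cdot \operatorname{fieldNorm}(t,d), \qquad \operatorname{idf}(t) = 1 + \ln\frac{\operatorname{maxDocs}}{\operatorname{docFreq}(t)+1}. \]

For hit 1, for example, each matching _text_:2f clause contributes sqrt(2) · 8.478011² · 0.038207654 · 0.09375 ≈ 0.3641, and coord(3/6) halves the sum because only three of six query clauses match, giving the displayed 0.42.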
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.42
    0.42478722 = coord(3/6) × [0.121367775 (_text_:3a, inner coord 1/3) + 0.36410332 (_text_:2f) + 0.36410332 (_text_:2f)]; each raw term score 0.36410332 = queryWeight (idf 8.478011 × queryNorm 0.038207654 = 0.32392493) × fieldWeight (tf 1.4142135 × idf 8.478011 × fieldNorm 0.09375 = 1.1240361) [ClassicSimilarity, doc 973]
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  2. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.29
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
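    Note
    Huo combines word-association ("glue") measures with the LocalMaxs selection rule. The Python sketch below illustrates that rule only, using the classic SCP glue rather than the three measures proposed in the thesis; all names, defaults, and thresholds are invented for illustration.

    from collections import Counter

    def ngrams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def scp_glue(counts, total, gram):
        # Symmetric Conditional Probability: p(gram)^2 over the average
        # product of the probabilities of all binary splits of the gram.
        p = counts[gram] / total
        n = len(gram)
        avg = sum((counts[gram[:i]] / total) * (counts[gram[i:]] / total)
                  for i in range(1, n)) / (n - 1)
        return p * p / avg if avg else 0.0

    def local_maxs_terms(tokens, max_n=4, min_freq=2):
        counts = Counter()
        for n in range(1, max_n + 2):          # (max_n+1)-grams act as neighbours
            counts.update(ngrams(tokens, n))
        total = len(tokens)
        glue = {g: scp_glue(counts, total, g)
                for g, c in counts.items() if len(g) >= 2 and c >= min_freq}
        terms = []
        for g, score in glue.items():
            if len(g) > max_n:
                continue
            supers = [s for s in glue
                      if len(s) == len(g) + 1 and (s[:-1] == g or s[1:] == g)]
            subs = [g[:-1], g[1:]] if len(g) > 2 else []
            # LocalMaxs: keep g if its glue is a local maximum relative to
            # its immediate sub- and super-grams.
            if all(score > glue[s] for s in supers) and \
               all(score >= glue.get(s, 0.0) for s in subs):
                terms.append((" ".join(g), score))
        return sorted(terms, key=lambda t: -t[1])

    # e.g. local_maxs_terms("the new york times says new york is big".split())
    # -> [('new york', 1.0)]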
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.29
    
    Abstract
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations together with their uncertainties considered. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
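    Note
    The "hybrid ranking system that combines word-based and entity-based representations" can be pictured, in its simplest late-fusion form, as a weighted sum of two relevance signals. The sketch below is a generic illustration with invented names and weights, not Xiong's model; a real system would plug in BM25 and a learned entity ranker where plain set overlap is used here.

    from dataclasses import dataclass

    @dataclass
    class Doc:
        words: set      # bag of words (a set, for brevity)
        entities: set   # entity ids linked to a knowledge base

    def overlap(a: set, b: set) -> float:
        # Fraction of query items the document covers.
        return len(a & b) / len(b) if b else 0.0

    def hybrid_score(doc: Doc, q_words: set, q_entities: set, lam: float = 0.6) -> float:
        # Late fusion: lam trades the word-based signal off against the
        # entity-based one.
        return lam * overlap(doc.words, q_words) + (1 - lam) * overlap(doc.entities, q_entities)

    doc = Doc(words={"knowledge", "based", "retrieval"}, entities={"E:text_representation"})
    print(hybrid_score(doc, {"knowledge", "retrieval"}, {"E:text_representation"}))  # 1.0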
  4. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.27
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the low disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, faceted lightweight ontology (FLO). FLO is a lightweight ontology in which terms, present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of the groups of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  5. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.18
    
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  6. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.18
    
    Content
    See also: New automatic interpreter for complex UDC numbers, at: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf
  7. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.18
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  8. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.14
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  9. Krömmelbein, U.: Schlagwort-Syntax : linguistische und fachwissenschaftliche Gesichtspunkte. Eine vergleichende Untersuchung der Regeln für die Schlagwortvergabe der Deutschen Bibliothek, RSWK, Voll-PRECIS und Kurz-PRECIS (1983) 0.04
    
    Date
    6. 1.1999 9:29:10
    Footnote
    Examensarbeit (degree thesis, Höherer Dienst) at the FHBD in Köln. - Also published in: Bibliothek: Forschung und Praxis 8(1984), pp.159-203.
  10. Bickmann, H.-J.: Synonymie und Sprachverwendung : Verfahren zur Ermittlung von Synonymenklassen als kontextbeschränkten Äquivalenzklassen (1978) 0.03
    
    Content
    Contains on pp.7-8 a summary of the foundations of falsification, with reference to Popper's Logik der Forschung.
    Date
    6. 6.2020 13:29:02
  11. Ribbert, U.: Terminologiekontrolle in der Schlagwortnormdatei (1989) 0.02
    
    Footnote
    Hausarbeit (term paper). - Summary published in: Bibliothek: Forschung und Praxis 16(1992), pp.9-25.
  12. Gröschel, P.: Prometheus: das verteilte digitale Bildarchiv für Forschung und Lehre : Die Zusammenführung von Ressourcen aus heterogenen Informationssystemen (2004) 0.02
    
  13. Stanz, G.: Medienarchive: Analyse einer unterschätzten Ressource : Archivierung, Dokumentation, und Informationsvermittlung in Medien bei besonderer Berücksichtigung von Pressearchiven (1994) 0.01
    
    Date
    22. 2.1997 19:50:29
  14. Walther, R.: Möglichkeiten und Grenzen automatischer Klassifikationen von Web-Dokumenten (2001) 0.01
    
    Abstract
    Automatic classification of web and other text documents makes it possible to provide organized access to internal and external information. Research on automatic classification has intensified in recent years, resulting in a range of methods that are used in practice today, individually or in combination. This licentiate thesis examines several methods for automatic classification in more detail, alongside general principles, and discusses their possibilities and limits. It also presents the results of a survey of vendors of software solutions for the automatic classification of text documents. The findings serve myax internet AG as a basis for developing its own classification product.
  15. Hilgers, C.; Maddi, Y.: Fachlexika als Online-Informationsspeicher : Konzeption und Erstellung einer Online-Datenbank zur Bewertung aktueller Trends aus Sicht der Lexikographie und der Usabilityforschung (2004) 0.01
    
    Abstract
    The goal is to evaluate mostly monolingual (and a few multilingual) online specialist dictionaries against established usability and accessibility criteria as well as lexicographic ones. After covering the theoretical foundations of usability research, the accessibility guidelines for websites, and lexicography, evaluation criteria (heuristics), procedures for heuristic evaluation (no user tests), and the presentation of results are developed. Freely accessible, free-of-charge specialist dictionaries and glossaries on the internet are tested, and the results are presented in a purpose-built PHP/MySQL database, with recommendations for improving the design of these online reference works.
  16. Kara, S.: An ontology-based retrieval system using semantic indexing (2012) 0.01
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely, usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art technologies in the Semantic Web and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
  17. Pfeffer, M.: Automatische Vergabe von RVK-Notationen anhand von bibliografischen Daten mittels fallbasiertem Schließen (2007) 0.01
    
    Abstract
    Classification of bibliographic records is indispensable for systematic access to a library's holdings and for their shelf arrangement. Until now this task has been carried out manually by subject experts, whether individually according to a locally developed scheme or cooperatively according to a shared one. This thesis presents a method for automating the classification process, using case-based reasoning, a technique developed in artificial-intelligence research. For every work for which bibliographic data is available, the method delivers one or more candidate classifications. Experiments compare the results of the automatic classification with those produced by subject experts; they demonstrate the high quality of the automatic classification and show that the method can significantly relieve subject experts in their classification work. Even the nearly complete reclassification of a library catalogue is possible, with certain limitations.
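    Note
    Case-based reasoning, as used here, can be sketched as nearest-neighbour retrieval over already classified records whose RVK notations are then reused. Field names, the Jaccard similarity, and k below are illustrative stand-ins, not Pfeffer's actual procedure.

    from collections import Counter

    def features(record):
        # Crude feature set: title words plus subject keywords.
        return set(record["title"].lower().split()) | set(record.get("keywords", []))

    def suggest_rvk(new_record, classified, k=5):
        feats = features(new_record)
        def jaccard(r):
            f = features(r)
            union = feats | f
            return len(feats & f) / len(union) if union else 0.0
        # Retrieve the k most similar classified records and let their
        # RVK notations vote by frequency.
        neighbours = sorted(classified, key=jaccard, reverse=True)[:k]
        votes = Counter(n for r in neighbours for n in r["rvk"])
        return [notation for notation, _ in votes.most_common()]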
  18. Nimz, B.: ¬Die Erschließung im Archiv- und Bibliothekswesen unter besonderer Berücksichtigung elektronischer Informationsträger : ein Vergleich im Interesse der Professionalisierung und Harmonisierung (2001) 0.01
    
    Abstract
    This work serves the professionalization and harmonization of descriptive methodology in archives and libraries. Description is the core of archival and library work and the basis for use by the interested public: figuratively speaking, what is sown here is harvested in use, and the more conscientious the sowing, the richer the harvest. The field of documentation is included where it appears necessary for examining the integrative aspects of the information sciences; the main focus, however, is on archival science and on the relations between archival and library science. The work primarily treats national structures of archives and libraries as well as selected projects and trends. An exhaustive survey of all training programmes in the information sector is deliberately omitted, since the aim is solely to examine integrative concepts in training. The goal of the publication is to contribute to both basic and applied research on harmonization and professionalization in description within the information sciences; it can serve as a basis for interdisciplinary professional exchange, which further work must follow. It attempts to accumulate and comment on knowledge from archives and libraries. Completeness is not claimed; exemplarity is valued instead, especially since rapid technological change inevitably renders technical details about electronic information carriers obsolete quickly. What endures, however, are the theoretical and abstract considerations as well as the statements on the addition, integration, and separation of the information sciences. In the chapter "Die Informationsgesellschaft", the work primarily examines the effects of the information society on archives and libraries, starting from a clarification of the terms "Information" and "Informationsgesellschaft" and from information policy in the EU and in the Federal Republic of Germany.
    Footnote
    Review in: Bibliothek: Forschung und Praxis 28(2004) no.1, pp.132-135 (H. Flachmann).
  19. Eckert, K.: Thesaurus analysis and visualization in semantic search applications (2007) 0.01
    
    Abstract
    The use of thesaurus-based indexing is a common approach for increasing the performance of information retrieval. In this thesis, we examine the suitability of a thesaurus for a given set of information and evaluate improvements of existing thesauri to get better search results. In this area, we focus on two aspects: 1. We demonstrate an analysis of the indexing results achieved by an automatic document indexer and the involved thesaurus. 2. We propose a method for thesaurus evaluation which is based on a combination of statistical measures and appropriate visualization techniques that support the detection of potential problems in a thesaurus. In this chapter, we give an overview of the context of our work. Next, we briefly outline the basics of thesaurus-based information retrieval and describe the Collexis Engine that was used for our experiments. In Chapter 3, we describe two experiments in automatically indexing documents in the areas of medicine and economics with corresponding thesauri and compare the results to available manual annotations. Chapter 4 describes methods for assessing thesauri and visualizing the result in terms of a treemap. We depict examples of interesting observations supported by the method and show that we actually find critical problems. We conclude with a discussion of open questions and future research in Chapter 5.
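    Note
    One simple statistical measure such an analysis can start from is concept-usage frequency across the automatically indexed collection. The sketch below (invented field names, arbitrary threshold) flags never-used and unusually frequent concepts as candidates for closer inspection; Eckert's method combines several such measures with treemap visualization.

    from collections import Counter

    def usage_report(thesaurus_concepts, indexed_docs, overuse_factor=10):
        # How often did the automatic indexer assign each concept?
        usage = Counter(c for d in indexed_docs for c in d["concepts"])
        unused = [c for c in thesaurus_concepts if usage[c] == 0]
        avg = sum(usage.values()) / len(usage) if usage else 0.0
        overused = [c for c, n in usage.items() if avg and n > overuse_factor * avg]
        return usage, unused, overused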
  20. Li, Z.: A domain specific search engine with explicit document relations (2013) 0.01
    
    Abstract
    The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web, aiming at converting the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using a standardized ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. At Ericsson, similar problems occur. There are massive numbers of documents being created with well-defined structures. Though these documents contain domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents little of it to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information and annotate these data to documents with formal markup languages. We propose this project to develop a domain-specific search engine for processing different documents and building explicit relations for them. This research project consists of three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations for documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.

Languages

  • d 68
  • e 15
  • f 1
  • hu 1

Types