Search (129 results, page 1 of 7)

  • Filter: type_ss:"x"
  1. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.09
    Abstract
     While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure, the ontology, which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy in which the concept of a child node is more specific than the concept of its parent node. Formal lightweight ontologies can be generated from informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the limited disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitations of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies, namely the faceted lightweight ontology (FLO). A FLO is a lightweight ontology in which the terms present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of a group of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
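     As a rough sketch of the structure this abstract describes, a backbone taxonomy whose node labels must be grounded in facet-organized background knowledge, consider the toy model below. All names and facet data are illustrative assumptions, not material from the dissertation:
```python
# Minimal sketch of a faceted lightweight ontology (FLO): a taxonomy in which
# every term of a node label resolves to a concept in facet-organized
# background knowledge. All names and data here are illustrative only.

# Background knowledge: concepts grouped into facets (distinctive properties).
BACKGROUND_KNOWLEDGE = {
    "discipline": {"medicine", "biology"},
    "body-part":  {"heart", "lung"},
}

def concept_facet(term):
    """Return the facet a term belongs to, or None if it is not in the BK."""
    for facet, concepts in BACKGROUND_KNOWLEDGE.items():
        if term in concepts:
            return facet
    return None

class Node:
    """A taxonomy node; in a LO each child's concept is more specific than its parent's."""
    def __init__(self, label, parent=None):
        self.label, self.parent, self.children = label, parent, []
        if parent:
            parent.children.append(self)

    def is_grounded(self):
        """A node is grounded iff every term of its label is covered by some facet."""
        return all(concept_facet(t) for t in self.label.lower().split())

root = Node("medicine")
child = Node("heart", parent=root)
print(child.is_grounded())  # True: 'heart' is found in the 'body-part' facet
```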
    Content
     PhD dissertation at the International Doctorate School in Information and Communication Technology. See: https://core.ac.uk/download/pdf/150083013.pdf.
    Imprint
     Trento : University / Department of Information Engineering and Computer Science
  2. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.09
    Content
     See: http://creativechoice.org/doc/HansJonas.pdf.
  3. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.09
    Content
     Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
    Imprint
     Wien : Universität / Library and Information Studies
  4. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.07
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
     This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
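     As a hedged illustration of the bag-of-entities idea from the abstract, documents represented by their entity annotations and ranked in the entity space, here is a small sketch. The entity IDs and the simple frequency-overlap scoring are assumptions for illustration, not the thesis's actual latent space model:
```python
from collections import Counter

# Hedged sketch of bag-of-entities ranking: documents and the query are
# represented by entity annotations, and ranking happens in the entity space.
# Entity IDs and the plain frequency-overlap score are illustrative only.

docs = {
    "d1": ["Q5", "Q42", "Q11424"],   # entity annotations of document d1
    "d2": ["Q42", "Q937"],
}

def score(query_entities, doc_entities):
    """Overlap between query entities and the document's bag of entities."""
    bag = Counter(doc_entities)
    return sum(bag[e] for e in query_entities)

query = ["Q42", "Q937"]
ranked = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)
print(ranked)  # ['d2', 'd1'] -- d2 matches both query entities
```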
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. See: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Imprint
    Pittsburgh, PA : Carnegie Mellon University, School of Computer Science, Language Technologies Institute
  5. Engbarth, M.: Die Library of Congress Classification : Geschichte, Struktur, Verbreitung und Auswirkungen auf deutsche Bibliotheksklassifikationen (1980) 0.06
  6. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.06
    Abstract
     Converting UDC numbers manually to a complex format such as the one mentioned above is an unrealistic expectation; supporting the building of these representations, as far as possible automatically, is a well-founded requirement. An additional advantage of this approach is that existing records could also be processed and converted. In my dissertation I would like to prove that it is possible to design and implement an algorithm that can convert pre-coordinated UDC numbers into the introduced format by identifying all their elements and revealing their whole syntactic structure. I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support building, retrieving, and analyzing UDC numbers without compromises, taking into account the whole syntactic richness of the scheme and storing the UDC numbers in a way that preserves the meaning of pre-coordination. The research also included the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without additional effort, and even retrospectively on existing collections.
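     As a minimal sketch of what identifying the elements of a pre-coordinated UDC number involves, the following splits a classmark into a main number and two kinds of auxiliaries. It is an assumption-laden toy covering only a tiny fragment of the syntax the dissertation's parser handles:
```python
import re

# Hedged sketch of splitting a pre-coordinated UDC number into its elements.
# Only two common auxiliary types are handled here (place "(...)" and a
# hyphenated special auxiliary); the real parser covers the full syntax.

UDC_PART = re.compile(r"""
    (?P<place>\([^)]*\))   |   # common auxiliary of place, e.g. (73)
    (?P<form>-[\d.]+)      |   # hyphenated special auxiliary, e.g. -31
    (?P<main>[\d.]+)           # main class number, e.g. 821.111
""", re.VERBOSE)

def parse_udc(classmark):
    """Return (kind, text) pairs for each recognized element, in order."""
    return [(m.lastgroup, m.group()) for m in UDC_PART.finditer(classmark)]

print(parse_udc("821.111-31(73)"))
# [('main', '821.111'), ('form', '-31'), ('place', '(73)')]
```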
    Content
     See also: New automatic interpreter for complex UDC numbers. At: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf
  7. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.06
    Abstract
     In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
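     A hedged sketch of the extraction idea described above: one association measure (Dice, as a stand-in; the thesis proposes its own three measures) combined with a much simplified LocalMaxs-style local-maximum test over bigrams and trigrams:
```python
from collections import Counter
from itertools import islice

# Hedged sketch: Dice "glue" for bigrams plus a simplified LocalMaxs-style
# filter that keeps a bigram only if no trigram containing it has higher glue.
# The real model combines several association measures; this is illustrative.

def ngrams(tokens, n):
    return list(zip(*(islice(tokens, i, None) for i in range(n))))

def extract_terms(tokens):
    uni, bi, tri = Counter(tokens), Counter(ngrams(tokens, 2)), Counter(ngrams(tokens, 3))

    def dice(pair):
        w1, w2 = pair
        return 2 * bi[pair] / (uni[w1] + uni[w2])

    terms = []
    for pair in bi:
        supers = [t for t in tri if pair in (t[:2], t[1:])]
        glue_supers = [2 * tri[t] / (bi[t[:2]] + bi[t[1:]]) for t in supers]
        if all(dice(pair) >= g for g in glue_supers):   # local-maximum test
            terms.append(pair)
    return sorted(terms, key=dice, reverse=True)        # highest glue first

tokens = "information retrieval systems use information retrieval models".split()
print(extract_terms(tokens)[:3])  # e.g. [('information', 'retrieval'), ...]
```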
    Content
     A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. See: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
    Imprint
    Guelph, Ontario : University of Guelph
  8. Engbarth, M.: Die Library of Congress Classification : Geschichte, Struktur, Verbreitung und Auswirkungen auf deutsche Bibliotheksklassifikationen (1980) 0.05
  9. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.05
    Abstract
     With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, rather than its representation). This leads to very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, due to unfamiliarity with the underlying repository and/or query syntax, merely approximates his information need in a query, implies a necessity to involve the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue in realizing much more meaningful information retrieval systems.
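     As a toy illustration of one refinement step in such an interactive process, the sketch below maps an ambiguous query term to concepts of a small domain ontology and proposes subconcepts as candidate refinements. The ontology and the refinement policy are assumptions for the sketch, not the thesis's Librarian Agent:
```python
# Hedged sketch of one ontology-based query refinement step: a query term is
# mapped to concepts of a domain ontology, and the concept neighbourhood is
# offered as candidate refinements. Ontology and policy are illustrative.

SUBCONCEPTS = {
    "jaguar": ["jaguar (animal)", "jaguar (car)"],          # sense choices
    "jaguar (animal)": ["black jaguar"],
    "jaguar (car)": ["jaguar e-type", "jaguar xj"],
}

def refine(query_term):
    """Return candidate refinements: the term's direct subconcepts, if any."""
    return SUBCONCEPTS.get(query_term, [])

# Interactive loop: the user picks a refinement, narrowing the query stepwise.
term = "jaguar"
while refine(term):
    options = refine(term)
    print(f"'{term}' can be refined to: {options}")
    term = options[0]            # auto-pick here; a real agent asks the user
print("final query concept:", term)
```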
    Content
     See: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  10. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.04
    Footnote
     See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  11. Ziemba, L.: Information retrieval with concept discovery in digital collections for agriculture and natural resources (2011) 0.01
    Abstract
     The amount and complexity of information available in digital form is already huge, and new information is being produced every day. Retrieving information relevant to a particular need becomes a significant issue. This work utilizes knowledge organization systems (KOS), such as thesauri and ontologies, and applies information extraction (IE) and computational linguistics (CL) techniques to organize, manage and retrieve information stored in digital collections in the agricultural domain. Two real-world applications of the approach have been developed and are available and actively used by the public. An ontology is used to manage the Water Conservation Digital Library, holding a dynamic collection of various types of digital resources in the domain of urban water conservation in Florida, USA. The ontology-based back-end powers a fully operational web interface, available at http://library.conservefloridawater.org. The system has demonstrated numerous benefits of the ontology application, including accurate retrieval of resources, information sharing and reuse, and has proved to effectively facilitate information management. The major difficulty encountered with the approach is that the large and dynamic number of concepts makes it difficult to keep the ontology consistent and to accurately catalog resources manually. To address these issues, a combination of IE and CL techniques, such as the Vector Space Model and probabilistic parsing, together with the use of an agricultural thesaurus, was adapted to automatically extract the concepts important for each of the texts in the Best Management Practices (BMP) Publication Library, a collection of documents in the domain of agricultural BMPs in Florida available at http://lyra.ifas.ufl.edu/LIB. A new approach to domain-specific concept discovery using an Internet search engine was developed. Initial evaluation of the results indicates a significant improvement in the precision of information extraction. The approach presented in this work focuses on problems unique to the agriculture and natural resources domain, such as domain-specific concepts and vocabularies, but should be applicable to any collection of texts in digital format. It may be of potential interest to anyone who needs to effectively manage a collection of digital resources.
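     A minimal sketch of the Vector Space Model matching step mentioned in the abstract, under the assumption that thesaurus concepts are represented by short term lists and matched to a text by cosine similarity; the thesaurus entries below are invented for illustration:
```python
import math
from collections import Counter

# Hedged sketch of VSM concept matching: thesaurus concepts (label plus
# synonyms) are compared to a text by cosine similarity of term vectors.
# The thesaurus entries and example text are illustrative assumptions.

THESAURUS = {
    "irrigation": "irrigation watering sprinkler drip",
    "fertilizer": "fertilizer nutrient nitrogen phosphorus",
}

def cosine(a, b):
    """Cosine similarity between the term-frequency vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

text = "Drip irrigation reduces watering costs on Florida farms"
scores = {c: cosine(text, desc) for c, desc in THESAURUS.items()}
print(max(scores, key=scores.get))  # 'irrigation' -- best-matching concept
```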
    Imprint
    Ann Arbor : ProQuest / University of Florida
  12. Reinke, U.: Der Austausch terminologischer Daten (1993) 0.01
    Abstract
     Diploma thesis at the University of Saarbrücken covering the following topics: data exchange formats; terminology management systems; terminological databases; the terminological record; data elements; data categories; data fields, etc.; hardware- and software-related difficulties with the structure of records; a description of approaches for the development of an exchange format for terminological data (MATER, MicroMATER, NTRF, SGML); considerations concerning an SGML-like exchange format; perspectives.
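     As a hedged sketch of what an SGML/XML-like terminological exchange record might look like, the following builds one entry with Python's standard library. The element names are illustrative assumptions and do not reproduce MATER, MicroMATER, NTRF or any format examined in the thesis:
```python
import xml.etree.ElementTree as ET

# Hedged sketch of an SGML/XML-like terminological record for data exchange.
# Element names are illustrative; they reproduce none of the compared formats.

record = ET.Element("termEntry", id="TE-0001")
for lang, term, definition in [
    ("de", "Datenaustausch", "Uebertragung von Daten zwischen Systemen"),
    ("en", "data exchange", "transfer of data between systems"),
]:
    lang_set = ET.SubElement(record, "langSet", lang=lang)
    ET.SubElement(lang_set, "term").text = term
    ET.SubElement(lang_set, "definition").text = definition

# Serialize the record for exchange between terminology management systems.
print(ET.tostring(record, encoding="unicode"))
```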
  13. Grünberg, H.: Die Sacherschließung auf der Grundlage der Regensburger Aufstellungssystematiken in einer wissenschaftlichen Spezialbibliothek : dargestellt am Beispiel der Fachbibliothek Geodäsie / Kartographie / Geographie an der Technischen Universität Dresden (1993) 0.01
    Abstract
    The thesis showed how the classification system of the Regensburg library could be applied in the special library for geodesy, cartography, and geography of the Technical University of Dresden
  14. John, M.: Die Sacherschließung auf der Grundlage der Regensburger Aufstellungssystematiken und der RSWK in einer wissenschaftlichen Spezialbibliothek (1993) 0.01
    Abstract
    The thesis showed how the classification system of the Regensburg library could be applied in the special library for chemistry of the Technical University of Dresden
  15. Küchler, J.: Die Sacherschließung auf der Grundlage der Regensburger Aufstellungssystematiken in einer wissenschaftlichen Spezialbibliothek : dargestellt am Beispiel der Fachbibliothek Informatik der UB Dresden (1993) 0.01
    Abstract
    The thesis showed how the classification system of the Regensburg library could be applied in the special library for computer science of the Technical University of Dresden
  16. Müller, T.: Wissensrepräsentation mit semantischen Netzen im Bereich Luftfahrt (2006) 0.01
    Abstract
     A semantic network for the domain of aviation has been modelled, containing company information, organizations, airlines, airports, etc. These have been assigned to 10 main categories, which are subdivided by facets. The concepts of the domain have been linked by 23 different relations (e.g. 'has location in', 'offers', 'is home base of', etc.). The focus of the study is the difference between the three classic standard relations and the additionally introduced relations, with regard to their usefulness for efficient retrieval. The categories and relations created are suitable for both cognitive and machine processing.
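     A minimal sketch of such a semantic network as a set of typed-relation triples, with a query filtered by relation type; the entities and relation names below are illustrative stand-ins for the aviation network described above:
```python
# Hedged sketch of a semantic network with typed relations, like the aviation
# network described above. Triples and relation names are illustrative only.

TRIPLES = [
    ("Lufthansa", "has home base",   "Frankfurt Airport"),
    ("Lufthansa", "is member of",    "Star Alliance"),
    ("Frankfurt Airport", "has location in", "Frankfurt"),
]

def related(entity, relation=None):
    """All (relation, object) pairs for an entity, optionally filtered by type."""
    return [(r, o) for s, r, o in TRIPLES
            if s == entity and (relation is None or r == relation)]

print(related("Lufthansa"))                             # all outgoing edges
print(related("Lufthansa", relation="has home base"))   # typed-relation query
```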
    Date
    26. 9.2006 21:00:22
  17. Müller, G.: Die Sacherschließung auf der Grundlage der Regensburger Aufstellungssystematiken : dargestellt am Beispiel der Zweigbibliothek der Philosophie, Ästhetik und Kulturwissenschaft der Universitätsbibliothek der Humboldt Universität zu Berlin (1993) 0.01
    Abstract
     The thesis showed how the classification system of the Regensburg library could be applied in the university library of the Humboldt University (for philosophy, aesthetics and cultural studies).
  18. Styltsvig, H.B.: Ontology-based information retrieval (2006) 0.01
    Abstract
     In this thesis, we present methods for introducing ontologies into information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to solving major problems currently found in information retrieval. This utilization of ontologies poses a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as how to fuse the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing, revealing knowledge down to the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, and discuss which kinds of problems are caused by an increase in compoundness with respect to the structure of concepts in the evaluation of queries. Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared nodes measure appears to be superior, though more computationally complex. Some major problems of shared nodes are discussed, which relate to the way relations differ in the degree to which they bring the concepts they connect closer together. A generalized measure called weighted shared nodes is introduced to deal with these problems. Finally, the utilization of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced, and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, the fuzzy set model appears to provide the flexibility needed when generalizing to an ontology-based retrieval model, and, with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
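     As a rough sketch of the shared nodes idea, the following scores two concepts by the overlap of their ancestor sets in a toy is-a hierarchy. The Jaccard-style ratio is an assumption for illustration; the thesis's shared nodes and weighted shared nodes measures are defined differently:
```python
# Hedged sketch of a shared-nodes style similarity: two concepts are the more
# similar, the larger the overlap of their ancestor sets in the ontology.
# The hierarchy and the Jaccard-style ratio are illustrative assumptions.

PARENT = {          # child -> parent edges of a small is-a hierarchy
    "dog": "mammal", "cat": "mammal", "mammal": "animal",
    "sparrow": "bird", "bird": "animal",
}

def ancestors(c):
    """The concept itself plus all its ancestors up to the root."""
    out = {c}
    while c in PARENT:
        c = PARENT[c]
        out.add(c)
    return out

def shared_nodes_sim(a, b):
    A, B = ancestors(a), ancestors(b)
    return len(A & B) / len(A | B)

print(shared_nodes_sim("dog", "cat"))      # 0.5: share 'mammal' and 'animal'
print(shared_nodes_sim("dog", "sparrow"))  # 0.2: share only 'animal'
```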
    Content
     A dissertation presented to the Faculties of Roskilde University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. See: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.117.987 or http://coitweb.uncc.edu/~ras/RS/Onto-Retrieval.pdf.
  19. Geisriegler, E.: Enriching electronic texts with semantic metadata : a use case for the historical Newspaper Collection ANNO (Austrian Newspapers Online) of the Austrian National Library (2012) 0.01
    Date
    3. 2.2013 18:00:22
    Footnote
     Wien, Univ., Library and Information Studies programme, Master's thesis, 2012.
  20. Schwarz, K.: Domain model enhanced search : a comparison of taxonomy, thesaurus and ontology (2005) 0.01
    Abstract
     The results of this thesis are intended to support the information architect in designing a solution for improved search in a corporate environment. Specifically, we have examined the type of search problems that require a domain model to enhance the search process. There are several approaches to modeling a domain. We have considered three different types of domain modeling schemes: taxonomy, thesaurus and ontology. The intention is to support the information architect in making an informed choice between one or more of these schemes. In our opinion the main criteria for this choice are the modeling characteristics of a scheme and its suitability for application in the search process. The second chapter discusses the modeling characteristics of each scheme, followed by a comparison between them. This should give an information architect an idea of which aspects of a domain can be modeled with each scheme. What is missing here is an indication of the effort required to model a domain with each scheme. Too many factors influence the amount of required effort, ranging from measurable factors like domain size and resource characteristics to cultural matters such as the willingness to share knowledge and the existence of a project champion in the team to keep the project running. The third chapter shows what role domain models can play in each part of the search process. This gives an idea of the problems that domain models can solve. We have split the search process into individual parts to show that domain models can be applied very differently across the process. The fourth chapter makes recommendations about the suitability of each individual domain modeling scheme for improving search. Each scheme has particular characteristics that make it especially suitable for certain domains or search problems. In the appendix each case study is described in detail. These descriptions are intended to serve as a benchmark: the current problem of the enterprise can be compared to those described to see which case study is most similar, which solution was chosen, which problems arose and how they were dealt with. An important issue that we have not touched upon in this thesis is maintenance. The real problems of a domain model are revealed when it is applied in a search system and its deficits and wrong assumptions become clear. Adaptation and maintenance are always required. Unfortunately we have not been able to glean sufficient information about maintenance issues from our case studies to draw any meaningful conclusions.
    Content
    Master of Content and Knowledge Engineering

Languages

  • d 80
  • e 45
  • f 1
  • hu 1
  • pt 1