Search (250 results, page 1 of 13)

  • Filter: type_ss:"x"
  1. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.08
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - the ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web that enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy, in which the concept of a child node is more specific than the concept of its parent node. Formal lightweight ontologies can be generated from informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the limited disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from informal ones; the lack of the background knowledge needed for the formal lightweight ontologies; and the limitations of ontology reuse. In this dissertation, we propose a novel solution to these problems: the faceted lightweight ontology (FLO). A FLO is a lightweight ontology in which the terms present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of a group of concepts that helps to differentiate one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
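    As a rough illustration only (invented names, not code from the dissertation), the two core ingredients - a backbone taxonomy whose child concepts are more specific than their parents, and a faceted background knowledge that must cover the label terms - can be sketched as:

      # Minimal sketch: lightweight ontology + faceted background knowledge.
      class Node:
          def __init__(self, label, parent=None):
              self.label = label              # natural-language node label
              self.parent = parent            # parent = more general concept
              self.children = []
              if parent:
                  parent.children.append(self)

          def is_more_specific_than(self, other):
              """True if `other` is an ancestor (a more general concept)."""
              node = self.parent
              while node:
                  if node is other:
                      return True
                  node = node.parent
              return False

      # Background knowledge organized as facets: each facet groups the
      # concepts of one distinctive property.
      background_knowledge = {
          "discipline": {"science", "computer science"},
          "geography": {"europe", "italy"},
      }

      def covered_by_bk(label_terms):
          """Every term of a node label must appear in some facet."""
          known = set().union(*background_knowledge.values())
          return all(term in known for term in label_terms)

      root = Node("science")
      child = Node("computer science", parent=root)
      assert child.is_more_specific_than(root)
      print(covered_by_bk(["computer science", "italy"]))  # True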
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
    Imprint
    Trento : University / Department of Information Engineering and Computer Science
  2. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.07
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating the content itself (i.e. its meaning, not its representation). This leads to retrieval results of very low usefulness for the user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, being unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query, implies the need to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly, and to interpret the retrieval results accordingly, are the key issues for realizing much more meaningful information retrieval systems.
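    As a purely illustrative sketch of the refinement idea (the toy ontology and function below are invented, not the Librarian Agent system itself): a refinement step can propose narrower ontology concepts so that the user can disambiguate an under-specified query:

      # Toy domain ontology: concept -> narrower concepts.
      ontology = {
          "jaguar": ["jaguar (animal)", "jaguar (car)"],
          "jaguar (animal)": ["panthera onca"],
          "jaguar (car)": ["jaguar xj", "jaguar e-type"],
      }

      def refine(query_term, max_suggestions=5):
          """Propose narrower concepts the user can pick to sharpen the query."""
          return ontology.get(query_term, [])[:max_suggestions]

      print(refine("jaguar"))  # ['jaguar (animal)', 'jaguar (car)']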
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  3. Styltsvig, H.B.: Ontology-based information retrieval (2006) 0.06
    Abstract
    In this thesis, we present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. Utilizing ontologies in this way poses a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as on how to fuse the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To recognize semantic knowledge in a text, shallow natural language processing is used during indexing, revealing knowledge down to the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, and discuss the kinds of problems caused by increasing compoundness in the structure of concepts during query evaluation. Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared nodes measure appears to be superior, though more computationally complex. We discuss some major problems of shared nodes, which relate to the fact that relations differ in the degree to which they bring the concepts they connect closer together. A generalized measure called weighted shared nodes is introduced to deal with these problems. Finally, the use of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced, and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, the fuzzy set model appears to provide the flexibility needed when generalizing to an ontology-based retrieval model, and, with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
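    The thesis defines the shared nodes measure precisely; as a hedged approximation of the general idea only, similarity can be scored by how many upward-reachable nodes two concepts share (a Dice-style variant, chosen here purely for illustration):

      # Toy ontology: child -> list of parents.
      parents = {
          "poodle": ["dog"], "beagle": ["dog"],
          "dog": ["mammal"], "cat": ["mammal"], "mammal": ["animal"],
      }

      def ancestors(c):
          """All nodes reachable upwards from c, including c itself."""
          seen, stack = set(), [c]
          while stack:
              node = stack.pop()
              for p in parents.get(node, []):
                  if p not in seen:
                      seen.add(p)
                      stack.append(p)
          seen.add(c)
          return seen

      def shared_nodes_sim(a, b):
          sa, sb = ancestors(a), ancestors(b)
          return 2 * len(sa & sb) / (len(sa) + len(sb))

      print(shared_nodes_sim("poodle", "beagle"))  # 0.75
      print(shared_nodes_sim("poodle", "cat"))     # ~0.57, less similar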
    Imprint
    Roskilde : Roskilde University, Computer Science Section
  4. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.06
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the alignment process from a training set and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
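    As a hedged sketch of the LocalMaxs idea (formulations vary in the literature, and glue() below stands in for any association measure): an n-gram is kept as a multi-word term when its "glue" is a local maximum relative to the (n-1)-grams it contains and the (n+1)-grams containing it:

      def subgrams(ng):
          """The two (n-1)-grams inside an n-gram (none for bigrams)."""
          return [ng[:-1], ng[1:]] if len(ng) > 2 else []

      def local_maxs(ngrams, glue, supergrams):
          """Keep n-grams whose glue is a local maximum.
          glue: tuple -> float; supergrams: tuple -> list of (n+1)-grams."""
          terms = []
          for ng in ngrams:
              if all(glue(ng) >= glue(s) for s in subgrams(ng)) and \
                 all(glue(ng) > glue(s) for s in supergrams(ng)):
                  terms.append(ng)
          return terms

      # Toy corpus statistics (invented numbers, any association measure works).
      glue_table = {
          ("information", "retrieval"): 0.9,
          ("information", "retrieval", "system"): 0.4,
      }
      glue = lambda ng: glue_table.get(ng, 0.0)
      sups = lambda ng: [k for k in glue_table
                         if len(k) == len(ng) + 1 and (k[:-1] == ng or k[1:] == ng)]
      print(local_maxs([("information", "retrieval")], glue, sups))
      # [('information', 'retrieval')]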
    Date
    10. 1.2013 19:22:47
  5. Kirk, J.: Theorising information use : managers and their work (2002) 0.05
    Abstract
    The focus of this thesis is information use. Although a key concept in information behaviour, information use has received little attention from information science researchers. Studies of other key concepts such as information need and information seeking are dominant in information behaviour research. Information use is an area of interest to information professionals who rely on research outcomes to shape their practice. There are few empirical studies of how people actually use information that might guide and refine the development of information systems, products and services.
    Content
    A thesis submitted to the University of Technology, Sydney in fulfilment of the requirements for the degree of Doctor of Philosophy. - Cf.: http://epress.lib.uts.edu.au/dspace/bitstream/2100/309/2/02whole.pdf.
    Imprint
    Sydney : University of Technology / Faculty of Humanities and Social Sciences
    Theme
    Information
  6. Kiren, T.: A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.05
    Abstract
    Indexing plays a vital role in information retrieval. With the availability of huge volumes of information, it has become necessary to index information in a way that makes it easier for end users to find what they want efficiently and accurately. Keyword-based indexing uses words as indexing terms. It is not capable of capturing the implicit relations among terms or the semantics of the words in the document. To overcome this limitation, ontology-based indexing came into existence, which allows semantics-based indexing to answer complex and indirect user queries. Ontologies are used for document indexing, which enables semantics-based information retrieval. At present, either existing ontologies or ontologies constructed from scratch are used for indexing. Constructing ontologies from scratch is a labour-intensive task and requires extensive domain knowledge, whereas the use of an existing ontology may leave some important concepts in documents unannotated. Using multiple ontologies can largely overcome the problem of missed concepts, but it is difficult to manage multiple ontologies (which change over time as their developers revise them), and ontology heterogeneity arises because the ontologies are constructed by different developers. One possible solution to managing multiple ontologies, and to building from scratch, is to use modular ontologies for indexing.
    Modular ontologies are built by combining modules from multiple relevant ontologies. Ontology heterogeneity also arises during modular ontology construction, because multiple ontologies are being dealt with during this process. Ontologies therefore need to be aligned before being used for modular ontology construction. The existing approaches to ontology alignment compare all the concepts of each ontology to be aligned, and are hence not optimized in terms of time and search-space utilization. A new indexing technique based on modular ontologies is proposed, together with an efficient ontology alignment technique that solves the heterogeneity problem during the construction of the modular ontology. Results are satisfactory: precision and recall improve by 8% and 10%, respectively. The values of Pearson's correlation coefficient for degree of similarity, time, search-space requirement, precision, and recall are close to 1, which shows that the results are significant. Further research could apply the modular-ontology-based indexing technique to multimedia information retrieval and biomedical information retrieval.
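    The precision and recall figures quoted above are the standard set-based measures; as a purely illustrative aside (not the thesis code), they can be computed as:

      def precision_recall(retrieved, relevant):
          """Set-based precision and recall for one retrieval run."""
          retrieved, relevant = set(retrieved), set(relevant)
          hits = len(retrieved & relevant)
          precision = hits / len(retrieved) if retrieved else 0.0
          recall = hits / len(relevant) if relevant else 0.0
          return precision, recall

      p, r = precision_recall(["d1", "d2", "d3"], ["d1", "d3", "d4", "d5"])
      print(p, r)  # 0.666..., 0.5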
    Content
    Submitted to the Faculty of the Computer Science and Engineering Department of the University of Engineering and Technology Lahore in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science (2009 - 009-PhD-CS-04). Cf.: http://prr.hec.gov.pk/jspui/bitstream/123456789/8375/1/Taybah_Kiren_Computer_Science_HSR_2017_UET_Lahore_14.12.2017.pdf.
    Date
    20. 1.2015 18:30:22
    Imprint
    Lahore : University of Engineering and Technology / Department of Computer Science and Engineering
  7. Witschel, H.F.: Global and local resources for peer-to-peer text retrieval (2008) 0.05
    Abstract
    This thesis is organised as follows: Chapter 2 gives a general introduction to the field of information retrieval, covering its most important aspects. Further, the tasks of distributed and peer-to-peer information retrieval (P2PIR) are introduced, motivating their application and characterising the special challenges that they involve, including a review of existing architectures and search protocols in P2PIR. Finally, chapter 2 presents approaches to evaluating the effectiveness of both traditional and peer-to-peer IR systems. Chapter 3 contains a detailed account of state-of-the-art information retrieval models and algorithms. This encompasses models for matching queries against document representations, term weighting algorithms, approaches to feedback and associative retrieval, as well as distributed retrieval. It thus defines important terminology for the following chapters. The notion of "multi-level association graphs" (MLAGs) is introduced in chapter 4. An MLAG is a simple, graph-based framework that allows modelling most of the theoretical and practical approaches to IR presented in chapter 3. Moreover, it provides an easy-to-grasp way of defining new entities, such as paragraphs or peers, and including them in IR modelling, dividing them conceptually while at the same time connecting them to each other in a meaningful way. This allows for a unified view of many IR tasks, including that of distributed and peer-to-peer search. Starting from related work and a formal definition of the framework, the possibilities of modelling that it provides are discussed in detail, followed by an experimental section that shows how new insights gained from modelling inside the framework can lead to novel combinations of principles and eventually to improved retrieval effectiveness.
    Chapter 5 empirically tackles the first of the two research questions formulated above, namely the question of global collection statistics. More precisely, it studies possibilities of radically simplified results merging. The simplification comes from the attempt - without having knowledge of the complete collection - to equip all peers with the same global statistics, making document scores comparable across peers. What is examined is how such global statistics can be obtained and to what extent their use leads to a drop in retrieval effectiveness. In chapter 6, the second research question is tackled, namely that of making forwarding decisions for queries based on profiles of other peers. After a review of related work in that area, the chapter first defines the approaches that will be compared against each other. Then, a novel evaluation framework is introduced, including a new measure for comparing results of a distributed search engine against those of a centralised one. Finally, the actual evaluation is performed using the new framework.
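    As a hedged sketch of the shared-global-statistics idea described above (all values, names and the scoring formula below are invented stand-ins, not Witschel's implementation): if every peer scores with one agreed document-frequency table, locally computed scores become comparable and merging reduces to a plain sort:

      import math

      GLOBAL_DF = {"ontology": 24, "information": 20772}  # shared by all peers
      GLOBAL_N = 44218                                    # shared corpus size

      def idf(term):
          return math.log(GLOBAL_N / (1 + GLOBAL_DF.get(term, 0)))

      def score(doc_tf, query):
          """tf-idf style score of one peer-local document (toy formula)."""
          return sum(math.sqrt(doc_tf.get(t, 0)) * idf(t) for t in query)

      # Two peers score their own documents; results merge by sorting.
      peer_a = [("docA1", {"ontology": 2})]
      peer_b = [("docB1", {"information": 4})]
      query = ["ontology", "information"]
      merged = sorted(((d, score(tf, query)) for d, tf in peer_a + peer_b),
                      key=lambda x: x[1], reverse=True)
      print(merged)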
  8. Makewita, S.M.: Investigating the generic information-seeking function of organisational decision-makers : perspectives on improving organisational information systems (2002) 0.04
    Abstract
    The past decade has seen the emergence of a new paradigm in the corporate world, in which organisations emphasise connectivity as a means of exposing decision-makers to wider resources of information within and outside the organisation. Many organisations followed initiatives of enhancing infrastructures, managing cultural shifts and emphasising managerial commitment in order to create pools and networks of knowledge. However, connectivity is not merely about presenting people with data but, more importantly, about creating environments where people can seek information efficiently. This paradigm has therefore caused a shift in the function of information systems in organisations. They now have to be assessed in relation to how they underpin people's information-seeking activities within the context of their organisational environment. This research project used interpretative research methods to investigate the nature of people's information-seeking activities at two culturally contrasting organisations. The outcomes provide insights into phenomena associated with people's information-seeking function, and show how they depend on the organisational context that is defined partly by information systems. The project suggests that information-seeking is not just searching for data. The inefficiencies inherent in both people and their environments can bring opaqueness into people's data, which they need to avoid or eliminate as part of seeking information. This seems to have made information-seeking a two-tier process, consisting of a primary process of searching and interpreting data and an auxiliary process of avoiding and eliminating opaqueness in data. Based on this view, this research suggests that organisational information systems operate naturally as implicit dual mechanisms to underpin the above two-tier process, and that improvements to information systems should concern maintaining the balance in these dual mechanisms.
    Date
    22. 7.2022 12:16:58
  9. Karlova-Bourbonus, N.: Automatic detection of contradictions in texts (2018) 0.04
    Abstract
    Natural language contradictions are of a complex nature. As will be shown in Chapter 5, the realization of contradictions is not limited to examples such as Socrates is a man and Socrates is not a man (under the condition that Socrates refers to the same object in the real world), which is discussed by Aristotle (Section 3.1.1). Empirical evidence (see Chapter 5 for more details) shows that only a few contradictions occurring in real life are of that explicit (prototypical) kind. Rather, contradictions make use of a variety of natural language devices such as, e.g., paraphrasing, synonyms and antonyms, passive and active voice, diverse expressions of negation, and figurative linguistic means such as idioms, irony, and metaphors. Additionally, the most sophisticated kind of contradictions, the so-called implicit contradictions, can be found only by applying world knowledge and conducting a sequence of logical operations, as in: (1.1) The first prize was given to the experienced grandmaster L. Stein who, in total, collected ten points (7 wins and 3 draws). Those familiar with chess rules know that a chess player gets one point for winning and zero points for losing a game. In case of a draw, each player gets half a point. Building on this idea and conducting some simple mathematical operations, we can infer that in the case of 7 wins and 3 draws (the second part of the sentence), a player can only collect 8.5 points, not 10 points. Hence, we observe that there is a contradiction between the first and the second parts of the sentence.
    Implicit contradictions will only partially be the subject of the present study, which aims primarily at identifying the realization mechanisms and cues (Chapter 5) as well as finding the parts of contradictions by applying state-of-the-art algorithms for natural language processing, without conducting deep meaning processing. Further in focus are the explicit and implicit contradictions that can be detected by means of explicit linguistic, structural and lexical cues, and by conducting some additional processing operations (e.g., computing a sum in order to detect contradictions arising from numerical divergences). One should note that additional complexity in finding contradictions can arise when parts of the contradictions occur on different levels of realization. Thus, a contradiction can be observed on the word and phrase level, such as in a married bachelor (for variations of contradictions on the lexical level, see Ganeev 2004), on the sentence level - between parts of a sentence or between two or more sentences - or on the text level - between portions of a text or between whole texts, such as a contradiction between the Bible and the Quran, for example. Only contradictions arising at the level of single sentences occurring in one or more texts, as well as parts of a sentence, will be considered for the purpose of this study. Though the focus of interest will be on single sentences, the study will make use of text particularities such as coreference resolution, without establishing the referents in the real world. Finally, another aspect to be considered is that the parts of a contradiction do not necessarily appear at the same time. They can be separated by years or centuries, with or without a time expression, making their recognition by humans and their detection by machines challenging. According to Aristotle's ontological version of the LNC (Section 3.1.1), however, the same time reference is required in order for two statements to be judged as a contradiction. Taking this into account, we set the borders for the study by limiting the analyzed textual data thematically (only nine world events) and temporally (three days after the reported event happened) (Section 5.1). No sophisticated time processing will thus be conducted.
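    The worked chess example above amounts to a trivial numeric consistency check; purely for illustration (the thesis detects such cases from text, not with hand-coded rules):

      def chess_points(wins, draws, losses=0):
          """One point per win, half a point per draw, none per loss."""
          return wins * 1.0 + draws * 0.5 + losses * 0.0

      claimed = 10.0
      derived = chess_points(wins=7, draws=3)   # 8.5
      print("contradiction" if derived != claimed else "consistent")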
  10. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései [Questions of the automatic interpretation and analysis of UDC numbers] (2018) 0.03
    Abstract
    Converting UDC numbers manually into a complex format such as the one mentioned above is an unrealistic expectation; supporting the building of these representations, as automatically as possible, is a well-founded requirement. An additional advantage of this approach is that existing records can also be processed and converted. In my dissertation I also aim to prove that it is possible to design and implement an algorithm that can convert pre-coordinated UDC numbers into the introduced format by identifying all their elements and revealing their whole syntactic structure. I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support building, retrieving, and analyzing UDC numbers without compromise, capturing the whole syntactic richness of the scheme and storing UDC numbers in a way that preserves the meaning of pre-coordination. The research also includes the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without additional effort, and even retrospectively on existing collections.
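    As a hedged illustration of the parsing task only (this is not Piros's interpreter, and the connector set shown is heavily simplified): a pre-coordinated classmark can be split on the common UDC connector signs while keeping the connectors for later interpretation:

      import re

      # Common UDC connectors: + (coordination), / (range), : (relation),
      # :: (order-fixing). Real classmarks have many more devices.
      CONNECTORS = re.compile(r"(::|[+/:])")

      def split_udc(classmark):
          """Split a classmark into components and connectors."""
          return [p for p in CONNECTORS.split(classmark) if p]

      print(split_udc("004.8:025.4+6"))
      # ['004.8', ':', '025.4', '+', '6']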
    Content
    Cf. also: New automatic interpreter for complex UDC numbers. At: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf
  11. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas [Freedom and responsibility in Hans Jonas] (2011) 0.03
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  12. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus [Assigning DDC subject groups by means of a subject-heading thesaurus] (2021) 0.03
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf
    Imprint
    Wien : Universität / Library and Information Studies
  13. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.03
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words provides only a shallow understanding of text; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge-based text representations, which embed external, carefully curated information from knowledge bases and provide richer, structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitations of word-based representations with external, carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
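    As a rough illustration of the bag-of-entities idea (hand-made annotations and invented names; a real system would rely on an entity linker and learned weights): documents are represented by entity annotations instead of words, and query entities are matched in that space:

      from collections import Counter

      def bag_of_entities(annotations):
          """Represent a document by its entity annotation frequencies."""
          return Counter(annotations)

      docs = {
          "d1": bag_of_entities(["Barack_Obama", "White_House", "Barack_Obama"]),
          "d2": bag_of_entities(["Chicago", "White_House"]),
      }

      def entity_score(query_entities, doc_bag):
          return sum(doc_bag[e] for e in query_entities)

      q = ["Barack_Obama", "White_House"]
      ranking = sorted(docs, key=lambda d: entity_score(q, docs[d]), reverse=True)
      print(ranking)  # ['d1', 'd2']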
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf
  14. Sperling, R.: Anlage von Literaturreferenzen für Onlineressourcen auf einer virtuellen Lernplattform [Creating literature references for online resources on a virtual learning platform] (2004) 0.03
    Date
    26.11.2005 18:39:22
    Imprint
    Potsdam : Fachhochschule, Institut für Information und Dokumentation
  15. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten [Possibilities for cooperation in the context of subject gateways : the Euler project compared with other European projects] (2001) 0.02
    Date
    22. 6.2002 19:41:59
    Theme
    Information Gateway
  16. Slavic-Overfield, A.: Classification management and use in a networked environment : the case of the Universal Decimal Classification (2005) 0.02
    Abstract
    In the Internet information space, advanced information retrieval (IR) methods and automatic text processing are used in conjunction with traditional knowledge organization systems (KOS). New information technology provides a platform for better KOS publishing, exploitation and sharing, both for human and machine use. Networked KOS services are now being planned and developed as powerful tools for resource discovery. They will enable automatic contextualisation, interpretation and query matching to different indexing languages. The Semantic Web promises to be an environment in which the quality of semantic relationships in bibliographic classification systems can be fully exploited. Their use in the networked environment is, however, limited by the fact that they are not prepared or made available for advanced machine processing. The UDC was chosen for this research because of its widespread use and its long-term presence in online information retrieval systems. It was also the first system to be used for the automatic classification of Internet resources, and the first to be made available as a classification tool on the Web. The objective of this research is to establish the advantages of using UDC for information retrieval in a networked environment, to highlight the problems of automation and classification exchange, and to offer possible solutions. The first research question was: is there enough evidence of the use of classification on the Internet to justify further development with this particular environment in mind? The second: what are the automation requirements for the full exploitation of UDC and its exchange? The third: which areas are in need of improvement, and what specific recommendations can be made for implementing the UDC in a networked environment? A summary of the changes required in the management and development of the UDC to facilitate its full adaptation for future use is drawn from this analysis.
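    As a hedged aside on what "prepared for advanced machine processing" can look like (the namespace URI and labels below are invented placeholders, and SKOS is only one possible target format, not necessarily the thesis's recommendation): a classification class can be published as machine-readable data, e.g. with rdflib:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      UDC = Namespace("http://example.org/udc/")   # placeholder namespace
      g = Graph()

      c = UDC["025.4"]
      g.add((c, RDF.type, SKOS.Concept))
      g.add((c, SKOS.notation, Literal("025.4")))
      g.add((c, SKOS.prefLabel, Literal("Classification systems", lang="en")))
      g.add((c, SKOS.broader, UDC["025"]))         # hierarchy as skos:broader

      print(g.serialize(format="turtle"))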
  17. Gordon, T.J.; Helmer-Hirschberg, O.: Report on a long-range forecasting study (1964) 0.02
    score 0.0238 = coord(2/5) × (weight "on" 0.0213 + weight "22" 0.0382)
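    The figures in this breakdown follow Lucene's ClassicSimilarity. A minimal sketch, assuming the statistics the engine's explain output reported for the term "22" in this record (freq 4 in the date field, docFreq 3622, maxDocs 44218, fieldNorm 0.0625, queryNorm 0.049850095), reproduces the weight shown above:

      import math

      # Statistics from the explain output for term "22" in this record.
      freq, doc_freq, max_docs = 4.0, 3622, 44218
      query_norm, field_norm = 0.049850095, 0.0625

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.5018296
      tf = math.sqrt(freq)                             # 2.0
      query_weight = idf * query_norm                  # 0.17456654
      field_weight = tf * idf * field_norm             # 0.4377287

      weight = query_weight * field_weight             # 0.07641278
      print(round(weight * 0.5, 8))                    # coord(1/2) -> 0.03820639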
    
    Date
    22. 6.2018 13:24:08
    22. 6.2018 13:54:52
  18. Sebastian, Y.: Literature-based discovery by learning heterogeneous bibliographic information networks (2017) 0.02
    score 0.0214 = coord(3/5) × (weight "on" 0.0107 + weight "information" 0.0152 + weight "technology" 0.0098)
    
    Abstract
    Literature-based discovery (LBD) research aims to find effective computational methods for predicting previously unknown connections between clusters of research papers from disparate research areas. Existing methods encompass two general approaches. The first searches for these unknown connections by examining the textual contents of research papers. The second incorporates, in addition to textual features, structural features of the scientific literature, such as citation structures. These approaches, however, have not considered research papers' latent bibliographic metadata structures as important features for predicting previously unknown relationships between them. This thesis investigates a new graph-based LBD method that exploits the latent bibliographic metadata connections between pairs of research papers. The heterogeneous bibliographic information network is proposed as an efficient graph-based data structure for modeling the complex relationships among these metadata. In contrast to previous approaches, this method seamlessly combines textual and citation information in the form of path-based metadata features for predicting future co-citation links between research papers from disparate research fields. The results reported in this thesis provide evidence that the method is effective for reconstructing historical literature-based discovery hypotheses. This thesis also investigates the effects of semantic modeling and topic modeling on the performance of the proposed method. For semantic modeling, a general-purpose word sense disambiguation technique is proposed to reduce the lexical ambiguity in the titles and abstracts of research papers. The experimental results suggest that the reduced lexical ambiguity did not necessarily lead to better performance of the method; the thesis discusses some of the possible contributing factors. Finally, topic modeling is used for learning the latent topical relations between research papers. The learned topic model is incorporated into the heterogeneous bibliographic information network graph and allows new predictive features to be learned. The results in this thesis suggest that topic modeling improves the performance of the proposed method by increasing the overall accuracy of predicting future co-citation links between disparate research papers.
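    The role of the path-based metadata features can be seen in a small sketch. Assuming Python with networkx and a two-paper toy record set (illustrative stand-ins, not the thesis implementation), each candidate pair of papers receives one count per meta-path type (paper-author-paper, paper-venue-paper, paper-term-paper); a classifier trained on such vectors would predict future co-citation links:

      # Minimal sketch of meta-path counts in a heterogeneous
      # bibliographic network; the records are invented for illustration.
      import networkx as nx

      G = nx.Graph()
      records = {
          "p1": {"authors": ["smith"], "venue": "JASIST", "terms": ["ontology"]},
          "p2": {"authors": ["smith", "lee"], "venue": "JASIST", "terms": ["lbd"]},
      }
      for pid, meta in records.items():
          G.add_node(pid, kind="paper")
          for a in meta["authors"]:
              G.add_node(a, kind="author")
              G.add_edge(pid, a)
          G.add_node(meta["venue"], kind="venue")
          G.add_edge(pid, meta["venue"])
          for t in meta["terms"]:
              G.add_node(t, kind="term")
              G.add_edge(pid, t)

      def metapath_count(graph, src, dst, kind):
          # length-2 paths src - x - dst whose middle node has the given kind
          return sum(1 for x in set(graph[src]) & set(graph[dst])
                     if graph.nodes[x]["kind"] == kind)

      features = [metapath_count(G, "p1", "p2", k)
                  for k in ("author", "venue", "term")]
      print(features)  # [1, 1, 0]: shared author and venue, no shared term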
    Footnote
    A thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy, Monash University, Faculty of Information Technology.
  19. Dinse, S.: Die sachliche Suche im OPAC der Bibliothek des HWWA-Instituts für Wirtschaftsforschung Hamburg : eine kritische Bestandsaufnahme (1994) 0.02
    score 0.0209 = coord(2/5) × (weight "on" 0.0320 + weight "information" 0.0204)
    
    Abstract
    This is a thesis with special emphasis on the thesaurus for economics used at the institute.
    Imprint
    Hamburg : Fachhochschule, Fb Bibliothek und Information
  20. Noy, N.F.: Knowledge representation for intelligent information retrieval in experimental sciences (1997) 0.02
    score 0.0185 = coord(2/5) × (weight "on" 0.0238 + weight "information" 0.0225)
    
    Abstract
    More and more information is available on-line every day. The greater the amount of on-line information, the greater the demand for tools that process and disseminate it. Processing electronic information in the form of text and answering users' queries about that information intelligently is one of the great challenges in natural language processing and information retrieval. The research presented in this talk is centered on the latter of these two tasks: intelligent information retrieval. In order for information to be retrieved, it first needs to be formalized in a database or knowledge base. The ontology for this formalization, and the assumptions it is based on, are crucial to successful intelligent information retrieval. We have concentrated our effort on developing an ontology for representing knowledge in the domains of experimental sciences, molecular biology in particular. We show that existing ontological models cannot be readily applied to represent this domain adequately. For example, the fundamental notion of ontology design that every "real" object is defined as an instance of a category seems incompatible with a universe where objects can change their category as a result of experimental procedures. Another important problem is representing complex structures such as DNA, mixtures, populations of molecules, etc., that are very common in molecular biology. We present the extensions that need to be made to an ontology to cover these issues: the representation of transformations that change the structure and/or category of their participants, and the component relations and spatial structures of complex objects. We demonstrate examples of how the proposed representations can be used to improve the quality and completeness of answers to user queries, discuss techniques for evaluating ontologies, and show a prototype of an information retrieval system that we developed.
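    The tension between fixed instance-of categories and experimental transformations is easy to make concrete. A minimal sketch in Python (the class names and the digestion example are invented for illustration; the ontology's actual vocabulary is not given in the abstract) carries the category as mutable state and lets a transformation yield re-categorised products with component relations:

      # Minimal sketch: categories as state, transformations that
      # re-categorise their participants, component (part-of) relations.
      from dataclasses import dataclass, field

      @dataclass
      class Entity:
          name: str
          category: str                                # state, not a fixed class
          parts: list = field(default_factory=list)    # component relation

      @dataclass
      class Transformation:
          name: str
          result_category: str

          def apply(self, entity: Entity) -> Entity:
              """Consume the input and return a mixture of re-categorised products."""
              fragments = [Entity(f"{entity.name} fragment {i}", self.result_category)
                           for i in (1, 2)]
              return Entity(f"{self.name} of {entity.name}", "mixture", parts=fragments)

      plasmid = Entity("pBR322", "DNA")
      digest = Transformation("restriction digest", "DNA fragment")
      result = digest.apply(plasmid)
      print(result.category, [(p.name, p.category) for p in result.parts])
      # mixture [('pBR322 fragment 1', 'DNA fragment'), ('pBR322 fragment 2', 'DNA fragment')]

    A conventional instance-of hierarchy would have to treat the fragments as unrelated new individuals; making the category mutable is one way to realise the transformation representation the abstract argues for.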

Languages

  • d 204
  • e 40
  • a 1
  • f 1
  • hu 1
  • pt 1

Types