Search (158 results, page 1 of 8)

  • Filter: type_ss:"x"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.34
    0.34273896 = sum of:
      0.049011134 = product of:
        0.1470334 = sum of:
          0.1470334 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
            0.1470334 = score(doc=701,freq=2.0), product of:
              0.39242527 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.04628742 = queryNorm
              0.3746787 = fieldWeight in 701, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=701)
        0.33333334 = coord(1/3)
      0.070035525 = weight(_text_:retrieval in 701) [ClassicSimilarity], result of:
        0.070035525 = score(doc=701,freq=28.0), product of:
          0.14001551 = queryWeight, product of:
            3.024915 = idf(docFreq=5836, maxDocs=44218)
            0.04628742 = queryNorm
          0.5001983 = fieldWeight in 701, product of:
            5.2915025 = tf(freq=28.0), with freq of:
              28.0 = termFreq=28.0
            3.024915 = idf(docFreq=5836, maxDocs=44218)
            0.03125 = fieldNorm(doc=701)
      0.1470334 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
        0.1470334 = score(doc=701,freq=2.0), product of:
          0.39242527 = queryWeight, product of:
            8.478011 = idf(docFreq=24, maxDocs=44218)
            0.04628742 = queryNorm
          0.3746787 = fieldWeight in 701, product of:
            1.4142135 = tf(freq=2.0), with freq of:
              2.0 = termFreq=2.0
            8.478011 = idf(docFreq=24, maxDocs=44218)
            0.03125 = fieldNorm(doc=701)
      0.06125315 = weight(_text_:semantic in 701) [ClassicSimilarity], result of:
        0.06125315 = score(doc=701,freq=6.0), product of:
          0.19245663 = queryWeight, product of:
            4.1578603 = idf(docFreq=1879, maxDocs=44218)
            0.04628742 = queryNorm
          0.31826988 = fieldWeight in 701, product of:
            2.4494898 = tf(freq=6.0), with freq of:
              6.0 = termFreq=6.0
            4.1578603 = idf(docFreq=1879, maxDocs=44218)
            0.03125 = fieldNorm(doc=701)
      0.0154057555 = product of:
        0.030811511 = sum of:
          0.030811511 = weight(_text_:web in 701) [ClassicSimilarity], result of:
            0.030811511 = score(doc=701,freq=4.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.2039694 = fieldWeight in 701, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=701)
        0.5 = coord(1/2)
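    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each term clause contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm (queryNorm being a query-wide constant), fieldWeight = tf × idf × fieldNorm, tf = √termFreq, and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal Python sketch that reproduces the "retrieval" clause of this result, with all constants taken from the tree itself:

    ```python
    import math

    # Lucene ClassicSimilarity, as displayed in the explain tree:
    #   clause score = queryWeight * fieldWeight
    #   queryWeight  = idf * queryNorm      (queryNorm: query-wide constant)
    #   fieldWeight  = tf * idf * fieldNorm
    def idf(doc_freq: int, max_docs: int) -> float:
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def clause_score(term_freq: float, doc_freq: int, max_docs: int,
                     query_norm: float, field_norm: float) -> float:
        tf = math.sqrt(term_freq)           # tf(freq) = sqrt(termFreq)
        query_weight = idf(doc_freq, max_docs) * query_norm
        field_weight = tf * idf(doc_freq, max_docs) * field_norm
        return query_weight * field_weight

    print(idf(5836, 44218))                                      # ~3.024915
    print(clause_score(28.0, 5836, 44218, 0.04628742, 0.03125))  # ~0.0700355
    ```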
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating the content itself (i.e. its meaning rather than its representation), which makes the results of a retrieval process of very little use for the user's task at hand. Over the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query, makes it necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure that is strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly is a key issue in realizing much more meaningful information retrieval systems.
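    The query-refinement loop described above can be pictured with a small, hypothetical sketch (the toy ontology and its narrower-concept relation are illustrative assumptions, not the thesis's actual data structures): the system proposes more specific concepts from a domain ontology as refinement candidates for an underspecified query term.

    ```python
    # Hypothetical sketch of ontology-driven query refinement: given a broad
    # query term, propose narrower ontology concepts as refinement candidates.
    ONTOLOGY = {  # concept -> narrower (more specific) concepts; toy data
        "retrieval": ["image retrieval", "text retrieval"],
        "text retrieval": ["boolean retrieval", "ranked retrieval"],
    }

    def refinement_candidates(term: str, depth: int = 2) -> list[str]:
        """Collect concepts up to `depth` steps below `term` in the taxonomy."""
        if depth == 0:
            return []
        out = []
        for narrower in ONTOLOGY.get(term, []):
            out.append(narrower)
            out.extend(refinement_candidates(narrower, depth - 1))
        return out

    print(refinement_candidates("retrieval"))
    # ['image retrieval', 'text retrieval', 'boolean retrieval', 'ranked retrieval']
    ```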
    Content
    See: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
    Theme
    Semantic Web
  2. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.26
    0.26492563 = product of:
      0.33115703 = sum of:
        0.061263915 = product of:
          0.18379174 = sum of:
            0.18379174 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.18379174 = score(doc=4997,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
        0.18379174 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.18379174 = score(doc=4997,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.062516235 = weight(_text_:semantic in 4997) [ClassicSimilarity], result of:
          0.062516235 = score(doc=4997,freq=4.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.32483283 = fieldWeight in 4997, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.02358515 = product of:
          0.0471703 = sum of:
            0.0471703 = weight(_text_:web in 4997) [ClassicSimilarity], result of:
              0.0471703 = score(doc=4997,freq=6.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.3122631 = fieldWeight in 4997, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
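    The outer product with coord(4/5) is Lucene's coordination factor: the sum of the matching clause scores is scaled by coord(overlap, maxOverlap) = overlap / maxOverlap, rewarding documents that match more of the query's clauses. Continuing the sketch above:

    ```python
    def coord(overlap: int, max_overlap: int) -> float:
        # Fraction of the query's clauses that matched this document.
        return overlap / max_overlap

    # Result 2 (doc 4997): four of five clauses matched.
    print(0.33115703 * coord(4, 5))  # ~0.26492563, the reported total score
    ```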
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - the ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web that enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy in which the concept of a child node is more specific than the concept of its parent node. Formal lightweight ontologies can be generated from informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitations of ontology reuse. In this dissertation, we propose a novel solution to these problems: the faceted lightweight ontology (FLO). An FLO is a lightweight ontology in which the terms present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of a group of concepts that can help to differentiate one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. See: https://core.ac.uk/download/pdf/150083013.pdf.
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.26
    0.26379827 = product of:
      0.32974783 = sum of:
        0.049011134 = product of:
          0.1470334 = sum of:
            0.1470334 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.1470334 = score(doc=5820,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.03743556 = weight(_text_:retrieval in 5820) [ClassicSimilarity], result of:
          0.03743556 = score(doc=5820,freq=8.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.26736724 = fieldWeight in 5820, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.20793661 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.20793661 = score(doc=5820,freq=4.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.03536452 = weight(_text_:semantic in 5820) [ClassicSimilarity], result of:
          0.03536452 = score(doc=5820,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.18375319 = fieldWeight in 5820, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.8 = coord(4/5)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as they are, bag-of-words representations offer only a shallow understanding of text; the word space carries a limited amount of information for document ranking. This dissertation goes beyond words and builds knowledge-based text representations, which embed external, carefully curated information from knowledge bases and provide richer, structured evidence for more advanced information retrieval systems. This thesis research first builds query representations from the entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanatory terms. We then present a general framework that represents a query by the entities that appear in it, are retrieved by it, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modelling the external evidence from knowledge bases and the internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. On the document representation side, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations and ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities: we propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations with external, carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
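    The bag-of-entities idea in the last two paragraphs can be sketched in a few lines: documents and queries are represented by their entity annotations rather than words, and ranking happens in entity space. The entity linker below is stubbed out with a toy dictionary, an illustrative assumption; the thesis uses automatic entity annotators over a knowledge base.

    ```python
    from collections import Counter

    # Toy entity linker: surface form -> knowledge-base entity id (stub).
    ENTITY_DICT = {"barack obama": "Q76", "white house": "Q35525", "president": "Q11696"}

    def bag_of_entities(text: str) -> Counter:
        """Represent a text by the frequencies of the entities linked in it."""
        text = text.lower()
        return Counter(eid for surface, eid in ENTITY_DICT.items() if surface in text)

    def entity_score(query: str, doc: str) -> int:
        """Rank documents by the overlap of query and document entity bags."""
        q, d = bag_of_entities(query), bag_of_entities(doc)
        return sum(q[e] * d[e] for e in q)

    print(entity_score("president Barack Obama", "Barack Obama visited the White House"))
    ```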
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. See: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  4. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.24
    0.23525344 = product of:
      0.5881336 = sum of:
        0.1470334 = product of:
          0.44110015 = sum of:
            0.44110015 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.44110015 = score(doc=973,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
        0.44110015 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.44110015 = score(doc=973,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
      0.4 = coord(2/5)
    
    Content
    See: http://creativechoice.org/doc/HansJonas.pdf.
  5. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.21
    0.2109694 = product of:
      0.35161567 = sum of:
        0.028076671 = weight(_text_:retrieval in 563) [ClassicSimilarity], result of:
          0.028076671 = score(doc=563,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.20052543 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.22055008 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.22055008 = score(doc=563,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.10298892 = sum of:
          0.06536108 = weight(_text_:web in 563) [ClassicSimilarity], result of:
            0.06536108 = score(doc=563,freq=8.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.43268442 = fieldWeight in 563, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
          0.03762784 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
            0.03762784 = score(doc=563,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.23214069 = fieldWeight in 563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
      0.6 = coord(3/5)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web pages and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the alignment process from a training set and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
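    The extraction model can be illustrated with a simplified sketch: an association ("glue") score is computed for every candidate n-gram, and a candidate is kept when its glue is a local maximum relative to the candidates it contains and those that contain it. The Dice-style glue and the toy counts below are illustrative assumptions, not the thesis's proposed measures.

    ```python
    COUNTS = {  # toy corpus frequencies for unigrams, bigrams, trigrams
        ("information",): 50, ("retrieval",): 40, ("system",): 60,
        ("information", "retrieval"): 30, ("retrieval", "system"): 12,
        ("information", "retrieval", "system"): 10,
    }

    def glue(ngram: tuple) -> float:
        """Dice-style glue: how strongly the n-gram sticks together."""
        if len(ngram) < 2:
            return 0.0
        left, right = COUNTS.get(ngram[:-1], 0), COUNTS.get(ngram[1:], 0)
        return 2 * COUNTS.get(ngram, 0) / (left + right) if left + right else 0.0

    def is_term(ngram: tuple) -> bool:
        """LocalMaxs-style test: glue must beat the n-gram's parts and extensions."""
        parts = [ngram[:-1], ngram[1:]]
        extensions = [g for g in COUNTS if len(g) == len(ngram) + 1
                      and (g[:-1] == ngram or g[1:] == ngram)]
        return all(glue(ngram) > glue(p) for p in parts) and \
               all(glue(ngram) >= glue(e) for e in extensions)

    print(is_term(("information", "retrieval")))  # True with the toy counts
    ```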
    Content
    A thesis presented to The University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. See: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  6. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.16
    0.15520355 = product of:
      0.25867257 = sum of:
        0.061263915 = product of:
          0.18379174 = sum of:
            0.18379174 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.18379174 = score(doc=4388,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.18379174 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.18379174 = score(doc=4388,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.013616893 = product of:
          0.027233787 = sum of:
            0.027233787 = weight(_text_:web in 4388) [ClassicSimilarity], result of:
              0.027233787 = score(doc=4388,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.18028519 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  7. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.16
    0.15520355 = product of:
      0.25867257 = sum of:
        0.061263915 = product of:
          0.18379174 = sum of:
            0.18379174 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.18379174 = score(doc=1000,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.18379174 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.18379174 = score(doc=1000,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.013616893 = product of:
          0.027233787 = sum of:
            0.027233787 = weight(_text_:web in 1000) [ClassicSimilarity], result of:
              0.027233787 = score(doc=1000,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.18028519 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  8. Kara, S.: An ontology-based retrieval system using semantic indexing (2012) 0.12
    0.1214587 = product of:
      0.20243116 = sum of:
        0.056153342 = weight(_text_:retrieval in 3829) [ClassicSimilarity], result of:
          0.056153342 = score(doc=3829,freq=8.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40105087 = fieldWeight in 3829, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
        0.12993754 = weight(_text_:semantic in 3829) [ClassicSimilarity], result of:
          0.12993754 = score(doc=3829,freq=12.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.67515236 = fieldWeight in 3829, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
        0.01634027 = product of:
          0.03268054 = sum of:
            0.03268054 = weight(_text_:web in 3829) [ClassicSimilarity], result of:
              0.03268054 = score(doc=3829,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.21634221 = fieldWeight in 3829, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably by using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies, and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
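    As a rough illustration of keyword-based semantic retrieval over a semantic index, here is a minimal sketch under toy assumptions (the concept labels and the concept-to-document index are invented for the example): documents are indexed by ontology concepts, and a keyword query is first mapped to concepts via their labels.

    ```python
    # Toy semantic index: ontology concept -> documents annotated with it.
    CONCEPT_INDEX = {"Goal": {"doc1", "doc3"}, "Player": {"doc2", "doc3"}}
    # Concept labels used to map plain keywords to concepts (illustrative).
    LABELS = {"Goal": {"goal", "score"}, "Player": {"player", "footballer"}}

    def semantic_search(keywords: set[str]) -> set[str]:
        """Map keywords to ontology concepts, then union the indexed documents."""
        concepts = {c for c, labels in LABELS.items() if keywords & labels}
        docs = set()
        for c in concepts:
            docs |= CONCEPT_INDEX[c]
        return docs

    print(semantic_search({"footballer"}))  # {'doc2', 'doc3'} via the Player concept
    ```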
    Theme
    Semantic Web
  9. Hüsken, P.: Information Retrieval im Semantic Web (2006) 0.11
    0.10940276 = product of:
      0.18233792 = sum of:
        0.03970641 = weight(_text_:retrieval in 4333) [ClassicSimilarity], result of:
          0.03970641 = score(doc=4333,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.2835858 = fieldWeight in 4333, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4333)
        0.10609356 = weight(_text_:semantic in 4333) [ClassicSimilarity], result of:
          0.10609356 = score(doc=4333,freq=8.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.5512596 = fieldWeight in 4333, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=4333)
        0.03653796 = product of:
          0.07307592 = sum of:
            0.07307592 = weight(_text_:web in 4333) [ClassicSimilarity], result of:
              0.07307592 = score(doc=4333,freq=10.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.48375595 = fieldWeight in 4333, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4333)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The Semantic Web denotes an extended World Wide Web (WWW) that models the meaning of presented content in new standardized languages such as RDF Schema and OWL. This thesis addresses the information retrieval aspect, i.e. it examines to what extent information-seeking methods can be transferred to modelled knowledge. The defining characteristics of IR systems, such as vague queries and support for uncertain knowledge, are treated in the context of the Semantic Web. The focus is on searching for facts within a knowledge domain that are either modelled explicitly or can be derived implicitly by applying inference. Building on the retrieval engine PIRE developed at the Universität Duisburg-Essen, the application of uncertain inference with probabilistic predicate logic (pDatalog) is implemented.
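    Probabilistic Datalog attaches probabilities to facts and rules; under an independence assumption, a derived fact receives the product of the probabilities along its derivation. A toy Python sketch of this idea (the facts, the single rule, and the independence assumption are illustrative, not PIRE's implementation):

    ```python
    # Toy probabilistic-Datalog-style inference: a derived fact gets the product
    # of the probabilities of the facts it is derived from (independence assumed).
    P_ABOUT = {("d1", "semantic_web"): 0.8}      # 0.8 about(d1, semantic_web)
    P_SUBTOPIC = {("semantic_web", "web"): 0.9}  # 0.9 subtopic(semantic_web, web)

    def p_about(doc: str, topic: str) -> float:
        """Rule: about(D, T) :- about(D, S) & subtopic(S, T)."""
        direct = P_ABOUT.get((doc, topic), 0.0)
        derived = max((p1 * p2
                       for (d, s), p1 in P_ABOUT.items() if d == doc
                       for (s2, t), p2 in P_SUBTOPIC.items() if s2 == s and t == topic),
                      default=0.0)
        return max(direct, derived)

    print(p_about("d1", "web"))  # 0.72 = 0.8 * 0.9
    ```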
    Theme
    Semantic Web
  10. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.10
    0.09802227 = product of:
      0.24505566 = sum of:
        0.061263915 = product of:
          0.18379174 = sum of:
            0.18379174 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.18379174 = score(doc=855,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.33333334 = coord(1/3)
        0.18379174 = weight(_text_:2f in 855) [ClassicSimilarity], result of:
          0.18379174 = score(doc=855,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.46834838 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
      0.4 = coord(2/5)
    
    Content
    See also: New automatic interpreter for complex UDC numbers. At: <https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf>
  11. Aufreiter, M.: Informationsvisualisierung und Navigation im Semantic Web (2008) 0.07
    0.067985155 = product of:
      0.16996288 = sum of:
        0.12993754 = weight(_text_:semantic in 4711) [ClassicSimilarity], result of:
          0.12993754 = score(doc=4711,freq=12.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.67515236 = fieldWeight in 4711, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=4711)
        0.04002533 = product of:
          0.08005066 = sum of:
            0.08005066 = weight(_text_:web in 4711) [ClassicSimilarity], result of:
              0.08005066 = score(doc=4711,freq=12.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.5299281 = fieldWeight in 4711, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4711)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The appeal and potential of information visualization are widely recognized, and the desire to apply it keeps growing. In knowledge management in particular, this field plays an increasingly important role. This thesis deals with information visualization in the Semantic Web and gives an overview of current developments in knowledge visualization. First, fundamental concepts of information visualization are introduced and their significance for knowledge management is explained. From the requirements that the Semantic Web places on information visualizations, criteria can be derived for assessing visualization techniques. The selected criteria are compiled into a criteria catalogue in this thesis. Finally, selected tools that are already successfully applied in knowledge management are described. After a detailed description, each object of study is analysed and evaluated against the selected criteria, with particular attention to its application in the context of the Semantic Web.
    Source
    Eine Analyse bestehender Visualisierungstechniken im Hinblick auf Eignung für das Semantic Web
    Theme
    Semantic Web
  12. Nagelschmidt, M.: Integration und Anwendung von "Semantic Web"-Technologien im betrieblichen Wissensmanagement (2012) 0.07
    0.06569917 = product of:
      0.10949861 = sum of:
        0.023397226 = weight(_text_:retrieval in 11) [ClassicSimilarity], result of:
          0.023397226 = score(doc=11,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.16710453 = fieldWeight in 11, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=11)
        0.062516235 = weight(_text_:semantic in 11) [ClassicSimilarity], result of:
          0.062516235 = score(doc=11,freq=4.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.32483283 = fieldWeight in 11, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=11)
        0.02358515 = product of:
          0.0471703 = sum of:
            0.0471703 = weight(_text_:web in 11) [ClassicSimilarity], result of:
              0.0471703 = score(doc=11,freq=6.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.3122631 = fieldWeight in 11, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=11)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Knowledge management is a subject area with numerous disciplinary ties, in particular to business informatics and to management, human resources and organization studies as subfields of business administration. In a broader understanding there are also connections to organizational psychology, computer science and information science. Developments in these reference disciplines can therefore also provide impulses for the concepts, methodologies and technologies of knowledge management. The idea, originating in computer science, of extending the World Wide Web (WWW) into a semantic network can be seen as one such impulse-giving development. Over the past decade this idea has reached a sufficient level of maturity that a potential relevance for knowledge management may be assumed. This thesis demonstrates, by means of a concrete conceptual approach, how this technological impulse can be channelled profitably for knowledge management. Such a research interest first requires working out an operational understanding of knowledge management on which the further considerations can build. The architecture and functioning of a Semantic Web, as well as XML and the ontology languages RDF/RDFS and OWL as the principal tools for ontology-based knowledge representation, are also introduced. Subsequently, an approach for integrating and applying these semantic technologies in knowledge management is presented, which describes a largely automated knowledge modelling and a subsequent semantic indexing of the company's data assets. For illustration, a fictitious example world from the manufacturing industry is used. Finally, the benefit of this procedure is illustrated by application scenarios of information retrieval (IR) in the context of business processes.
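    The RDF/RDFS machinery mentioned above can be illustrated in a few lines of Python with rdflib; the namespace and triples are toy assumptions in the spirit of the thesis's fictitious manufacturing example.

    ```python
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/plant#")  # illustrative namespace
    g = Graph()
    g.bind("ex", EX)

    # A tiny ontology fragment and one instance.
    g.add((EX.Machine, RDF.type, RDFS.Class))
    g.add((EX.MillingMachine, RDFS.subClassOf, EX.Machine))
    g.add((EX.m42, RDF.type, EX.MillingMachine))
    g.add((EX.m42, RDFS.label, Literal("milling machine 42")))

    print(g.serialize(format="turtle"))
    ```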
  13. Mao, M.: Ontology mapping : towards semantic interoperability in distributed and heterogeneous environments (2008) 0.07
    0.06521326 = product of:
      0.10868877 = sum of:
        0.01871778 = weight(_text_:retrieval in 4659) [ClassicSimilarity], result of:
          0.01871778 = score(doc=4659,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.13368362 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.079077475 = weight(_text_:semantic in 4659) [ClassicSimilarity], result of:
          0.079077475 = score(doc=4659,freq=10.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.41088465 = fieldWeight in 4659, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 4659) [ClassicSimilarity], result of:
              0.021787029 = score(doc=4659,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 4659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4659)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    This dissertation studies ontology mapping: the problem of finding semantic correspondences between similar elements of different ontologies. In this dissertation, elements denote classes or properties of ontologies. The goal of this research is to use ontology mapping to make heterogeneous information more accessible. The World Wide Web (WWW) is now widely used as a universal medium for information exchange. Semantic interoperability among different information systems in the WWW is limited due to information heterogeneity and the non-semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and the ability to reason over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or using automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increase. Fully and semi-automated mapping approaches have been examined by several research studies. Previous approaches include analyzing linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods. In this dissertation, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is the non-instance-learning-based approach, which experimentally explores machine learning algorithms to solve the ontology mapping problem without requiring any instances. The results of PRIOR+ on different tests at the OAEI ontology matching campaign 2007 are encouraging. The non-instance-learning-based approach has shown potential for solving the ontology mapping problem on the OAEI benchmark tests.
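    In the spirit of the IR-based strand (PRIOR+ applies IR techniques to ontology elements), here is a minimal, hypothetical sketch that maps elements of two ontologies by cosine similarity of their tokenized labels; the actual approach combines several element profiles and adaptive weighting.

    ```python
    import math
    from collections import Counter

    def vec(label: str) -> Counter:
        """Tokenize an element label into a term-frequency vector."""
        return Counter(label.lower().replace("_", " ").split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def label_mappings(onto1: list[str], onto2: list[str], threshold: float = 0.5):
        """All element pairs whose label similarity clears the threshold (toy data)."""
        return [(e1, e2, round(cosine(vec(e1), vec(e2)), 2))
                for e1 in onto1 for e2 in onto2
                if cosine(vec(e1), vec(e2)) >= threshold]

    print(label_mappings(["Author", "Conference_Paper"], ["Paper", "author"]))
    # [('Author', 'author', 1.0), ('Conference_Paper', 'Paper', 0.71)]
    ```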
  14. Ehlen, D.: Semantic Wiki : Konzeption eines Semantic MediaWiki für das Reallexikon zur Deutschen Kunstgeschichte (2010) 0.06
    0.06297969 = product of:
      0.15744923 = sum of:
        0.13838558 = weight(_text_:semantic in 3689) [ClassicSimilarity], result of:
          0.13838558 = score(doc=3689,freq=10.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.71904814 = fieldWeight in 3689, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3689)
        0.019063652 = product of:
          0.038127303 = sum of:
            0.038127303 = weight(_text_:web in 3689) [ClassicSimilarity], result of:
              0.038127303 = score(doc=3689,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.25239927 = fieldWeight in 3689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3689)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Wikis are a suitable means of implementing comprehensive knowledge collections such as lexicons or encyclopedias; the best example is the globally successful free online encyclopedia Wikipedia. With conventional wiki environments, however, the potential of the stored texts cannot be fully exploited. Semantic wikis offer a new possibility: their contents are given semantic relations through machine-readable annotations. This bachelor's thesis takes up this idea and transfers parts of the "Reallexikon zur deutschen Kunstgeschichte" into a semantic wiki. On the basis of a Semantic MediaWiki installation, it examines to what extent the new technology can be used for indexing the lexicon. A sample wiki for the RdK is included on the accompanying CD.
    Object
    Semantic MediaWiki
    Theme
    Semantic Web
  15. Kiren, T.: A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.06
    0.0626459 = product of:
      0.10440983 = sum of:
        0.04185423 = weight(_text_:retrieval in 4399) [ClassicSimilarity], result of:
          0.04185423 = score(doc=4399,freq=10.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.29892567 = fieldWeight in 4399, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=4399)
        0.050012987 = weight(_text_:semantic in 4399) [ClassicSimilarity], result of:
          0.050012987 = score(doc=4399,freq=4.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.25986627 = fieldWeight in 4399, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=4399)
        0.012542613 = product of:
          0.025085226 = sum of:
            0.025085226 = weight(_text_:22 in 4399) [ClassicSimilarity], result of:
              0.025085226 = score(doc=4399,freq=2.0), product of:
                0.16209066 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04628742 = queryNorm
                0.15476047 = fieldWeight in 4399, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4399)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Indexing plays a vital role in information retrieval. With the availability of huge volumes of information, it has become necessary to index the information in such a way that end users can find the information they want efficiently and accurately. Keyword-based indexing uses words as indexing terms and cannot capture the implicit relations among terms or the semantics of the words in a document. To overcome this limitation, ontology-based indexing came into existence; it allows semantics-based indexing that can resolve complex and indirect user queries. Ontologies are used for document indexing, which allows semantics-based information retrieval. At present, either existing ontologies or ontologies constructed from scratch are used for indexing. Constructing ontologies from scratch is a labour-intensive task and requires extensive domain knowledge, whereas using an existing ontology may leave some important concepts in documents unannotated. Using multiple ontologies can overcome the problem of missing concepts to a great extent, but it is difficult to manage multiple ontologies (their developers change them over time), and ontology heterogeneity also arises because the ontologies are constructed by different developers. One possible solution to managing multiple ontologies, rather than building from scratch, is to use modular ontologies for indexing.
    Modular ontologies are built by combining modules from multiple relevant ontologies. Ontology heterogeneity also arises during modular ontology construction, because multiple ontologies are being dealt with in this process, so the ontologies need to be aligned before they are used to construct the modular ontology. Existing approaches to ontology alignment compare all the concepts of each ontology to be aligned and are therefore not optimized in terms of time and search-space utilization. A new indexing technique based on modular ontology is proposed, together with an efficient ontology alignment technique that solves the heterogeneity problem during the construction of the modular ontology. Results are satisfactory: precision and recall improve by 8% and 10%, respectively. The values of Pearson's correlation coefficient for degree of similarity, time, search-space requirement, precision and recall are close to 1, which shows that the results are significant. Further research can be carried out on using the modular-ontology-based indexing technique for multimedia information retrieval and biomedical information retrieval.
    Date
    20. 1.2015 18:30:22
  16. Slavic-Overfield, A.: Classification management and use in a networked environment : the case of the Universal Decimal Classification (2005) 0.06
    0.06171259 = product of:
      0.10285431 = sum of:
        0.03743556 = weight(_text_:retrieval in 2191) [ClassicSimilarity], result of:
          0.03743556 = score(doc=2191,freq=8.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.26736724 = fieldWeight in 2191, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2191)
        0.050012987 = weight(_text_:semantic in 2191) [ClassicSimilarity], result of:
          0.050012987 = score(doc=2191,freq=4.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.25986627 = fieldWeight in 2191, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=2191)
        0.0154057555 = product of:
          0.030811511 = sum of:
            0.030811511 = weight(_text_:web in 2191) [ClassicSimilarity], result of:
              0.030811511 = score(doc=2191,freq=4.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.2039694 = fieldWeight in 2191, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2191)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    In the Internet information space, advanced information retrieval (IR) methods and automatic text processing are used in conjunction with traditional knowledge organization systems (KOS). New information technology provides a platform for better KOS publishing, exploitation and sharing, both for human and machine use. Networked KOS services are now being planned and developed as powerful tools for resource discovery. They will enable automatic contextualisation, interpretation and query matching to different indexing languages. The Semantic Web promises to be an environment in which the quality of semantic relationships in bibliographic classification systems can be fully exploited. Their use in the networked environment is, however, limited by the fact that they are not prepared or made available for advanced machine processing. The UDC was chosen for this research because of its widespread use and its long-term presence in online information retrieval systems. It was also the first system to be used for the automatic classification of Internet resources, and the first to be made available as a classification tool on the Web. The objective of this research is to establish the advantages of using the UDC for information retrieval in a networked environment, to highlight the problems of automation and classification exchange, and to offer possible solutions. The first research question was: is there enough evidence of the use of classification on the Internet to justify further development with this particular environment in mind? The second question was: what are the automation requirements for the full exploitation of the UDC and its exchange? The third question was: which areas are in need of improvement, and what specific recommendations can be made for implementing the UDC in a networked environment? A summary of the changes required in the management and development of the UDC to facilitate its full adaptation for future use is drawn from this analysis.
    Theme
    Klassifikationssysteme im Online-Retrieval
  17. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia (2019) 0.06
    0.060204204 = product of:
      0.10034034 = sum of:
        0.01871778 = weight(_text_:retrieval in 117) [ClassicSimilarity], result of:
          0.01871778 = score(doc=117,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.13368362 = fieldWeight in 117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=117)
        0.07072904 = weight(_text_:semantic in 117) [ClassicSimilarity], result of:
          0.07072904 = score(doc=117,freq=8.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.36750638 = fieldWeight in 117, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=117)
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 117) [ClassicSimilarity], result of:
              0.021787029 = score(doc=117,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=117)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Information management in corporate environments is a growing problem as companies' information assets grow, along with the need to use them in day-to-day operations. Several management models have been put into practice on the most diverse fronts, practices that together constitute so-called Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, which are increasingly used in corporate environments, adding semantics and expanding their retrieval potential for the composition and reuse of dynamic documents on demand. The proposed model considers stable elements of the organizational environment, such as actors, processes, business metadata and information objects, as well as some basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations and integrating and relating objects to other objects, actors and business processes. The methodology took the state of the art in Information Organization, Representation and Retrieval, Organizational Content Management and Semantic Web technologies, as reported in the scientific literature, as the basis for establishing an integrative conceptual model; the research is therefore qualitative and exploratory. The steps foreseen by the model are: Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage. As a result, in theoretical terms the extended model makes it possible to process heterogeneous and unstructured data according to the established scope and through the processes listed above, allowing value creation in the composition of dynamic information objects, with semantic aggregations to their metadata.
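    A hedged illustration of how the listed steps might fit together as a pipeline. The class and function names below are hypothetical, invented for this sketch; the thesis specifies a conceptual model, not an implementation.

        from dataclasses import dataclass, field

        @dataclass
        class InformationObject:
            """An unstructured corporate object (e.g. a video) plus its metadata."""
            source: str
            media_type: str
            metadata: dict = field(default_factory=dict)

        def define_sources(environment: dict) -> list:
            """Steps 1-2: scan the environment, register data types and sources."""
            return [InformationObject(source=s, media_type=t)
                    for s, t in environment.items()]

        def distill(obj: InformationObject) -> InformationObject:
            """Step 3: extract raw features (speech-to-text, OCR, ...) - stubbed here."""
            obj.metadata["distilled"] = True
            return obj

        def enrich(obj: InformationObject, business: dict) -> InformationObject:
            """Step 4: relate the object to actors and processes via business metadata."""
            obj.metadata.update(business)
            return obj

        def store(objects: list) -> dict:
            """Step 5: persist, keyed by source, for later reuse in dynamic documents."""
            return {o.source: o for o in objects}

        # Hypothetical run over two corporate media objects:
        env = {"intranet://training.mp4": "video", "dms://contract-042.pdf": "pdf"}
        repository = store([enrich(distill(o), {"process": "onboarding"})
                            for o in define_sources(env)])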
  18. Stollberg, M.: Ontologiebasierte Wissensmodellierung : Verwendung als semantischer Grundbaustein des Semantic Web (2002) 0.06
    0.057643842 = product of:
      0.1441096 = sum of:
        0.10719301 = weight(_text_:semantic in 4495) [ClassicSimilarity], result of:
          0.10719301 = score(doc=4495,freq=24.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.55697227 = fieldWeight in 4495, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4495)
        0.0369166 = product of:
          0.0738332 = sum of:
            0.0738332 = weight(_text_:web in 4495) [ClassicSimilarity], result of:
              0.0738332 = score(doc=4495,freq=30.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.48876905 = fieldWeight in 4495, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4495)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The focus of Chapter B is ontology development. After the fundamental characteristics of ontology-based knowledge modelling have been covered, the requirements for building an ontology take centre stage. To this end, the main relevant achievements of so-called Ontology Engineering are discussed. First, methodological approaches to the ontology development process are presented, along with techniques and procedures developed for the individual task areas. Following that, design criteria and an approach to meta-modelling are discussed, both of which are intended to support the quality assurance of an ontology. These considerations are meant to give an overview of the state of knowledge in Ontology Engineering, thereby covering an essential aspect of the use of ontology-based knowledge modelling methods in the Semantic Web. As the last aspect of capturing the characteristics of ontology-based knowledge modelling, Chapter C addresses the question of how ontologies can be used in information systems. First, the possible uses of ontologies are identified. Then, application areas of ontologies are presented, which on the one hand exemplify the identified possibilities of use and on the other hand discuss fundamental aspects of the Semantic Web with a view to examining the use of ontologies in it. Subsequently, the main software-engineering challenges arising from the use of ontologies in information systems are discussed. This concludes the first part of the work, the elaboration of the essential characteristics of ontology-based knowledge modelling methods.
    Building on these discussions, Chapter D deals with the use of ontologies in the Semantic Web. Here the Semantic Web is not to be understood as a computer-based solution for one specific application domain but, like existing Web technologies, as an information-technology infrastructure for providing and linking applications across various application domains. The technological solutions for realizing the Semantic Web are still in the development phase. Therefore, the basic ideas behind the vision of the Semantic Web are first explained in more detail and the anticipated architecture model for its realization is presented, with particular attention to the role envisaged for ontologies within it. Subsequently, the formal representation of ontologies in Web-compatible languages is discussed, which is what is meant to make the use of ontologies in the Semantic Web possible. In this context, the motives for using ontologies as meaning-defining constructs in the Semantic Web are clarified, and the challenges arising in the handling of ontologies are pointed out. As the third aspect of the chapter, corresponding approaches from ontology management are discussed. Finally, the implications for concrete applications of Semantic Web technologies that result from the use of ontologies in the Semantic Web are addressed. To conclude, the results of the investigation are summarized, including a critical assessment of the necessity of semantic Web technologies and of the feasibility of the vision of the Semantic Web.
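    As a concrete illustration of "formal representation of ontologies in Web-compatible languages": the minimal sketch below declares two classes and a property using RDFS/OWL terms via Python's rdflib library. The namespace and the vocabulary terms are hypothetical examples chosen for this sketch, not taken from the thesis.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import OWL, RDF, RDFS

        EX = Namespace("http://example.org/onto#")   # hypothetical namespace
        g = Graph()
        g.bind("ex", EX)

        # Two classes and a subclass relation
        g.add((EX.Publication, RDF.type, OWL.Class))
        g.add((EX.Thesis, RDF.type, OWL.Class))
        g.add((EX.Thesis, RDFS.subClassOf, EX.Publication))

        # A property with domain, range and a human-readable label
        g.add((EX.Person, RDF.type, OWL.Class))
        g.add((EX.hasAuthor, RDF.type, OWL.ObjectProperty))
        g.add((EX.hasAuthor, RDFS.domain, EX.Publication))
        g.add((EX.hasAuthor, RDFS.range, EX.Person))
        g.add((EX.hasAuthor, RDFS.label, Literal("has author", lang="en")))

        print(g.serialize(format="turtle"))

    The point of such a representation is that the meaning-defining constructs (classes, hierarchy, property constraints) are themselves published as Web-addressable data, which is the precondition for the ontology management challenges the chapter goes on to discuss.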
  19. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.05
    0.053013496 = product of:
      0.088355824 = sum of:
        0.011698613 = weight(_text_:retrieval in 4232) [ClassicSimilarity], result of:
          0.011698613 = score(doc=4232,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.08355226 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.049423426 = weight(_text_:semantic in 4232) [ClassicSimilarity], result of:
          0.049423426 = score(doc=4232,freq=10.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.25680292 = fieldWeight in 4232, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.027233787 = product of:
          0.054467574 = sum of:
            0.054467574 = weight(_text_:web in 4232) [ClassicSimilarity], result of:
              0.054467574 = score(doc=4232,freq=32.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.36057037 = fieldWeight in 4232, product of:
                  5.656854 = tf(freq=32.0), with freq of:
                    32.0 = termFreq=32.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4232)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known engines for searching the web, like Google, focus on keyword search in web documents. The documents are structured and indexed to ensure keywords match documents as accurately as possible. However, searching by keywords does not always suffice. It is often the case that users do not know exactly how to formulate the search query or which keywords guarantee retrieving the most relevant documents. Besides that, users sometimes rather want to browse information than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web because the short keyword-based queries they pose during search do not suffice for all cases. On top of that, the Web is changing structurally. The Web comprises, apart from a collection of documents, more and more linked data: pieces of information structured so they can be processed by machines. The consequently applied semantics allow users to indicate their search intentions to machines exactly. This is made possible by describing data following controlled vocabularies - concept lists composed by experts and published with unique identifiers on the Web. Even so, it is still not trivial to explore data on the Web. There is a large variety of vocabularies, and different data sources use different terms to identify the same concepts.
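    A small sketch of the difference this makes in practice: with linked data, a query names concepts by their unique identifiers instead of by keywords. The example below uses Python's rdflib with a few DBpedia-style triples inlined so it is self-contained; in practice such data would be fetched from a public SPARQL endpoint.

        from rdflib import Graph

        g = Graph()
        g.parse(data="""
        @prefix dbo: <http://dbpedia.org/ontology/> .
        @prefix dbr: <http://dbpedia.org/resource/> .
        dbr:Tim_Berners-Lee dbo:knownFor dbr:World_Wide_Web .
        """, format="turtle")

        # The URI, not a keyword, pins down exactly which entity is meant.
        q = """
        PREFIX dbo: <http://dbpedia.org/ontology/>
        PREFIX dbr: <http://dbpedia.org/resource/>
        SELECT ?thing WHERE { dbr:Tim_Berners-Lee dbo:knownFor ?thing . }
        """
        for row in g.query(q):
            print(row.thing)

    The remaining difficulty the paragraph names is visible even here: another data source might use a different vocabulary for the same "known for" relation, so exploring across sources still requires aligning terms.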
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than finding out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval and is associated with exploratory search. Exploratory search goes beyond 'looking up something' when users are seeking more detailed understanding, further investigation or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned so that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought into relation with each other at will. This leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow that starts from a broad overview of the data and allows narrowing down to the desired level of detail and then broadening out again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how well the visualizations fit in the workflow, and to what degree their features seemed useful for the exploration of linked data.
    When we speak about finding relationships between resources, it is necessary to dive deeper into the structure. The graph structure of linked data, where the semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are base components of such algorithms and ultimately define which resources are included in a path, and in which order. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Some optimizations guard the consistency of candidate paths, maximizing the coherence of consecutive connections to avoid trivial and overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference when it comes to heuristically optimized minimal-cost paths. The effectiveness of paths was measured with common automatic metrics and with surveys in which users could indicate their preference among paths generated in different ways. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. The application to this use case is a practical example because the different aspects of exploratory search come together; in fact, the techniques also evolved from the experience of implementing the use case. Practical details about the semantic model are explained and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype of a tool to explore scientific publications, researchers and conferences, against some important alternatives.
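    A minimal sketch of such heuristically optimized pathfinding, assuming a toy graph, hand-picked edge weights, and a trivially admissible heuristic. The thesis derives its weights and serendipity heuristics from the data itself; this sketch does not reproduce those, it only shows the A* skeleton they plug into.

        import heapq

        # Toy linked-data graph: weighted (predicate, target) edges per resource.
        # Lower weight = stronger, more specific connection (an assumption).
        EDGES = {
            "PaperA": [("cites", "PaperB", 1.0), ("hasAuthor", "Alice", 2.0)],
            "PaperB": [("hasVenue", "ConfX", 1.0)],
            "Alice":  [("attended", "ConfX", 3.0)],
            "ConfX":  [],
        }

        def heuristic(node: str, goal: str) -> float:
            """Admissible placeholder: 0 everywhere (degrades A* to Dijkstra)."""
            return 0.0

        def astar(start: str, goal: str):
            frontier = [(heuristic(start, goal), 0.0, start, [start])]
            seen = set()
            while frontier:
                _, cost, node, path = heapq.heappop(frontier)
                if node == goal:
                    return cost, path          # cheapest explained connection
                if node in seen:
                    continue
                seen.add(node)
                for pred, nxt, w in EDGES.get(node, []):
                    if nxt not in seen:
                        heapq.heappush(frontier,
                                       (cost + w + heuristic(nxt, goal),
                                        cost + w, nxt,
                                        path + [f"--{pred}-->", nxt]))
            return None

        print(astar("PaperA", "ConfX"))
        # (2.0, ['PaperA', '--cites-->', 'PaperB', '--hasVenue-->', 'ConfX'])

    The returned path keeps its predicates, which is what makes it an explanation of the indirect connection rather than just a distance.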
    Theme
    Semantic Web
  20. Woitas, K.: Bibliografische Daten, Normdaten und Metadaten im Semantic Web : Konzepte der bibliografischen Kontrolle im Wandel (2010) 0.05
    0.051718064 = product of:
      0.12929516 = sum of:
        0.09884685 = weight(_text_:semantic in 115) [ClassicSimilarity], result of:
          0.09884685 = score(doc=115,freq=10.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.51360583 = fieldWeight in 115, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=115)
        0.030448299 = product of:
          0.060896598 = sum of:
            0.060896598 = weight(_text_:web in 115) [ClassicSimilarity], result of:
              0.060896598 = score(doc=115,freq=10.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.40312994 = fieldWeight in 115, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=115)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Bibliographic data, authority data and metadata in the Semantic Web - concepts of bibliographic control in transition. The title of this thesis points to an essential field of library and information science: bibliographic control. The second central concept is the Semantic Web, a notion of great significance for the further development of the World Wide Web (WWW). At first glance this looks like an unequal contest. On one side stands bibliographic control, which comprises the methods and means for the description of library objects and traditionally takes the form of formal and subject surrogates in catalogues. On the other side stands the buzzword Semantic Web, with its lofty connotations of a web that, through self-referentiality, 'carries meaning' or is even 'intelligent'. How, then, did a research librarian and a member of the World Wide Web Consortium come to publish a joint paper in 2007 claiming that the semantic web would be a 'more library-like' web? To approach this question, the historical development of the two information spheres, library and WWW, is first briefly considered together. For as often - and entirely rightly - as the informational revolution brought about by the Internet is invoked, the analogy of a worldwide virtual library keeps reappearing as well. More precisely, the theoretical reflections that would later lead to the development of the Internet took their starting point (alongside cybernetics and emerging computer technology) in the concept of the library as an information store.
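    A small sketch of what a 'more library-like' web can look like in data terms: a catalogue surrogate re-expressed as RDF statements, where the author is an identified, linkable resource rather than a text string. The identifiers below are hypothetical; real data would point to GND or VIAF authority URIs. Python with rdflib:

        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DC

        g = Graph()
        book = URIRef("http://example.org/bib/item42")          # hypothetical record URI
        author = URIRef("http://example.org/authority/p001")    # hypothetical authority URI

        g.add((book, DC.title, Literal("Konzepte der bibliografischen Kontrolle im Wandel")))
        g.add((book, DC.creator, author))   # a link to an authority resource, not a string
        g.add((book, DC.date, Literal("2010")))

        print(g.serialize(format="turtle"))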
    Theme
    Semantic Web

Languages

  • d 119
  • e 34
  • a 1
  • f 1
  • hu 1
  • pt 1

Types