Search (2378 results, page 1 of 119)

  • Filter: year_i:[2010 TO 2020}
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.31
    0.3104071 = product of:
      0.51734513 = sum of:
        0.12252783 = product of:
          0.36758348 = sum of:
            0.36758348 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.36758348 = score(doc=1826,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.36758348 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.36758348 = score(doc=1826,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.027233787 = product of:
          0.054467574 = sum of:
            0.054467574 = weight(_text_:web in 1826) [ClassicSimilarity], result of:
              0.054467574 = score(doc=1826,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.36057037 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
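    The explain trees repeated throughout this page all follow Lucene's ClassicSimilarity (TF-IDF) formula, and the numbers can be reproduced by hand. Below is a minimal sketch of the arithmetic for a single term, using the constants from the "3a" clause of doc 1826 above; it illustrates the formula only and is not Lucene code.

      import math

      # Minimal sketch of a ClassicSimilarity term weight: score = queryWeight * fieldWeight,
      # with tf = sqrt(freq). Constants are copied from the explain output above.
      def term_weight(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm       # 8.478011 * 0.04628742 ~= 0.39242527
          tf = math.sqrt(freq)                  # 1.4142135 for freq = 2.0
          field_weight = tf * idf * field_norm  # ~= 0.93669677
          return query_weight * field_weight    # ~= 0.36758348

      print(term_weight(freq=2.0, idf=8.478011, query_norm=0.04628742, field_norm=0.078125))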
  2. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.26
    0.26492563 = product of:
      0.33115703 = sum of:
        0.061263915 = product of:
          0.18379174 = sum of:
            0.18379174 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.18379174 = score(doc=4997,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
        0.18379174 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.18379174 = score(doc=4997,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.062516235 = weight(_text_:semantic in 4997) [ClassicSimilarity], result of:
          0.062516235 = score(doc=4997,freq=4.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.32483283 = fieldWeight in 4997, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.02358515 = product of:
          0.0471703 = sum of:
            0.0471703 = weight(_text_:web in 4997) [ClassicSimilarity], result of:
              0.0471703 = score(doc=4997,freq=6.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.3122631 = fieldWeight in 4997, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state of the art NLP tools used in generating formal lightweight ontologies from their informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, faceted lightweight ontology (FLO). FLO is a lightweight ontology in which terms, present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of the groups of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
    PhD Dissertation at International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
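    As a rough illustration of the structure described in the abstract above - a backbone taxonomy whose node labels carry terms that must resolve to concepts in the background knowledge - a faceted lightweight ontology node might be modelled as below. This is a hedged sketch; the class and field names are assumptions for illustration, not the dissertation's formalization.

      from dataclasses import dataclass, field

      @dataclass
      class Node:
          label: str                                     # natural-language node label
          concepts: set = field(default_factory=set)     # concept IDs resolved in the background knowledge
          children: list = field(default_factory=list)   # more specific child nodes

      root = Node("science", {"bk:science"})
      root.children.append(Node("computer science", {"bk:computer-science"}))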
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.26
    0.26379827 = product of:
      0.32974783 = sum of:
        0.049011134 = product of:
          0.1470334 = sum of:
            0.1470334 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.1470334 = score(doc=5820,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.03743556 = weight(_text_:retrieval in 5820) [ClassicSimilarity], result of:
          0.03743556 = score(doc=5820,freq=8.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.26736724 = fieldWeight in 5820, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.20793661 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.20793661 = score(doc=5820,freq=4.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.03536452 = weight(_text_:semantic in 5820) [ClassicSimilarity], result of:
          0.03536452 = score(doc=5820,freq=2.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.18375319 = fieldWeight in 5820, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.8 = coord(4/5)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word based and entity based representations together with their uncertainties considered. At last, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
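    The bag-of-entities representation sketched in the abstract can be illustrated in a few lines: documents are represented by their entity annotations and scored by overlap with the query's entities. This is a hedged sketch; the entity IDs and the overlap score are illustrative assumptions, not the thesis's actual ranking model.

      from collections import Counter

      def bag_of_entities(annotations):
          """annotations: entity IDs produced by an entity linker for one document."""
          return Counter(annotations)

      def entity_overlap_score(query_entities, doc_entities):
          # sum of the document's annotation frequencies for the query entities
          return sum(doc_entities[e] for e in query_entities)

      doc = bag_of_entities(["Q180711", "Q93184", "Q180711"])   # hypothetical entity IDs
      print(entity_overlap_score({"Q180711"}, doc))             # -> 2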
  4. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.24
    0.23525344 = product of:
      0.5881336 = sum of:
        0.1470334 = product of:
          0.44110015 = sum of:
            0.44110015 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.44110015 = score(doc=973,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
        0.44110015 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.44110015 = score(doc=973,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
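    This record has one of the simplest explain trees, which makes the aggregation easy to follow: each "sum of" adds its children, and each coord(m/n) scales a sum by the fraction of query clauses that matched. A minimal sketch with the values above:

      # Reproducing the final score of result 4 from its clause weights.
      w_3a = 0.44110015                 # weight(_text_:3a in 973)
      w_2f = 0.44110015                 # weight(_text_:2f in 973)

      inner = w_3a * (1 / 3)            # 0.1470334  = product of: sum * coord(1/3)
      clause_sum = inner + w_2f         # 0.5881336  = sum of:
      score = clause_sum * (2 / 5)      # 0.23525344 = product of: sum * coord(2/5)
      print(score)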
  5. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.22
    0.22458757 = product of:
      0.3743126 = sum of:
        0.07428389 = weight(_text_:retrieval in 987) [ClassicSimilarity], result of:
          0.07428389 = score(doc=987,freq=14.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.5305404 = fieldWeight in 987, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=987)
        0.17593628 = weight(_text_:semantic in 987) [ClassicSimilarity], result of:
          0.17593628 = score(doc=987,freq=22.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.91416067 = fieldWeight in 987, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=987)
        0.12409244 = sum of:
          0.0864646 = weight(_text_:web in 987) [ClassicSimilarity], result of:
            0.0864646 = score(doc=987,freq=14.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.57238775 = fieldWeight in 987, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
          0.03762784 = weight(_text_:22 in 987) [ClassicSimilarity], result of:
            0.03762784 = score(doc=987,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.23214069 = fieldWeight in 987, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
      0.6 = coord(3/5)
    
    Abstract
    This book covers the basics of semantic web technologies and indexing languages, and describes their contribution to improve languages as a tool for subject queries and knowledge exploration. The book is relevant to information scientists, knowledge workers and indexers. It provides a suitable combination of theoretical foundations and practical applications.
    Content
    Introduction: envisioning semantic information spaces -- Indexing and knowledge organization -- Semantic technologies for knowledge representation -- Information retrieval and knowledge exploration -- Approaches to handle heterogeneity -- Problems with establishing semantic interoperability -- Formalization in indexing languages -- Typification of semantic relations -- Inferences in retrieval processes -- Semantic interoperability and inferences -- Remaining research questions.
    Date
    23. 7.2017 13:49:22
    LCSH
    Semantic Web
    Information retrieval
    World Wide Web / Subject access
    RSWK
    Semantic Web
    Information Retrieval
    Subject
    Semantic Web
    Information retrieval
    World Wide Web / Subject access
    Semantic Web
    Information Retrieval
  6. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.21
    0.2109694 = product of:
      0.35161567 = sum of:
        0.028076671 = weight(_text_:retrieval in 563) [ClassicSimilarity], result of:
          0.028076671 = score(doc=563,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.20052543 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.22055008 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.22055008 = score(doc=563,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.10298892 = sum of:
          0.06536108 = weight(_text_:web in 563) [ClassicSimilarity], result of:
            0.06536108 = score(doc=563,freq=8.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.43268442 = fieldWeight in 563, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
          0.03762784 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
            0.03762784 = score(doc=563,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.23214069 = fieldWeight in 563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
      0.6 = coord(3/5)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
    A Thesis presented to The University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
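    The extraction model described in the abstract combines word-association ("glue") measures with the LocalMaxs filter: an n-gram is kept only if its glue is not exceeded by the n-grams it contains or is contained in. The sketch below is a heavily simplified illustration of that idea; the PMI glue and the toy counts are assumptions, not the thesis's proposed measures.

      import math

      counts = {("information",): 50, ("retrieval",): 30,
                ("information", "retrieval"): 25}          # toy frequencies
      total = 1000                                          # toy corpus size

      def pmi(ngram):
          # pointwise mutual information as a stand-in "glue" measure
          p_joint = counts[ngram] / total
          p_indep = math.prod(counts[(w,)] / total for w in ngram)
          return math.log(p_joint / p_indep)

      def is_local_max(glue, neighbour_glues):
          # LocalMaxs-style test against the glue of sub- and super-n-grams
          return all(glue >= g for g in neighbour_glues)

      g = pmi(("information", "retrieval"))
      print(g, is_local_max(g, []))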
  7. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.16
    0.15520355 = product of:
      0.25867257 = sum of:
        0.061263915 = product of:
          0.18379174 = sum of:
            0.18379174 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.18379174 = score(doc=4388,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.18379174 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.18379174 = score(doc=4388,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.013616893 = product of:
          0.027233787 = sum of:
            0.027233787 = weight(_text_:web in 4388) [ClassicSimilarity], result of:
              0.027233787 = score(doc=4388,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.18028519 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  8. Suchenwirth, L.: Sacherschliessung in Zeiten von Corona : neue Herausforderungen und Chancen (2019) 0.15
    0.15416865 = product of:
      0.3854216 = sum of:
        0.0735167 = product of:
          0.22055008 = sum of:
            0.22055008 = weight(_text_:3a in 484) [ClassicSimilarity], result of:
              0.22055008 = score(doc=484,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                0.56201804 = fieldWeight in 484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=484)
          0.33333334 = coord(1/3)
        0.3119049 = weight(_text_:2f in 484) [ClassicSimilarity], result of:
          0.3119049 = score(doc=484,freq=4.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.7948135 = fieldWeight in 484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
      0.4 = coord(2/5)
    
    Footnote
    https://journals.univie.ac.at/index.php/voebm/article/download/5332/5271/
  9. Semantic applications (2018) 0.14
    0.14236757 = product of:
      0.23727927 = sum of:
        0.057311267 = weight(_text_:retrieval in 5204) [ClassicSimilarity], result of:
          0.057311267 = score(doc=5204,freq=12.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40932083 = fieldWeight in 5204, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
        0.14661357 = weight(_text_:semantic in 5204) [ClassicSimilarity], result of:
          0.14661357 = score(doc=5204,freq=22.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.7618005 = fieldWeight in 5204, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
        0.03335444 = product of:
          0.06670888 = sum of:
            0.06670888 = weight(_text_:web in 5204) [ClassicSimilarity], result of:
              0.06670888 = score(doc=5204,freq=12.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.4416067 = fieldWeight in 5204, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5204)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    This book describes proven methodologies for developing semantic applications: software applications which explicitly or implicitly use the semantics (i.e., the meaning) of a domain terminology in order to improve usability, correctness, and completeness. An example is semantic search, where synonyms and related terms are used for enriching the results of a simple text-based search. Ontologies, thesauri or controlled vocabularies are the centerpiece of semantic applications. The book includes technological and architectural best practices for corporate use.
    Content
    Introduction.- Ontology Development.- Compliance using Metadata.- Variety Management for Big Data.- Text Mining in Economics.- Generation of Natural Language Texts.- Sentiment Analysis.- Building Concise Text Corpora from Web Contents.- Ontology-Based Modelling of Web Content.- Personalized Clinical Decision Support for Cancer Care.- Applications of Temporal Conceptual Semantic Systems.- Context-Aware Documentation in the Smart Factory.- Knowledge-Based Production Planning for Industry 4.0.- Information Exchange in Jurisdiction.- Supporting Automated License Clearing.- Managing cultural assets: Implementing typical cultural heritage archive's usage scenarios via Semantic Web technologies.- Semantic Applications for Process Management.- Domain-Specific Semantic Search Applications.
    LCSH
    Information storage and retrieval
    Information Storage and Retrieval
    RSWK
    Information Retrieval
    Semantic Web
    Subject
    Information Retrieval
    Semantic Web
    Information storage and retrieval
    Information Storage and Retrieval
    Theme
    Semantic Web
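    The semantic-search example given in the abstract - enriching a plain keyword query with synonyms and related terms from a controlled vocabulary before retrieval - can be sketched in a few lines. The mini-thesaurus and the expansion policy below are illustrative assumptions, not taken from the book.

      thesaurus = {
          "ontology": {"controlled vocabulary", "knowledge organization system"},
          "search":   {"retrieval", "query answering"},
      }

      def expand_query(terms):
          # add every synonym/related term the vocabulary knows for the query terms
          expanded = set(terms)
          for t in terms:
              expanded |= thesaurus.get(t, set())
          return expanded

      print(expand_query(["ontology", "search"]))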
  10. Gödert, W.; Lepsky, K.: Informationelle Kompetenz : ein humanistischer Entwurf (2019) 0.14
    0.13723119 = product of:
      0.34307796 = sum of:
        0.08576949 = product of:
          0.25730845 = sum of:
            0.25730845 = weight(_text_:3a in 5955) [ClassicSimilarity], result of:
              0.25730845 = score(doc=5955,freq=2.0), product of:
                0.39242527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04628742 = queryNorm
                0.65568775 = fieldWeight in 5955, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5955)
          0.33333334 = coord(1/3)
        0.25730845 = weight(_text_:2f in 5955) [ClassicSimilarity], result of:
          0.25730845 = score(doc=5955,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.65568775 = fieldWeight in 5955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
      0.4 = coord(2/5)
    
    Footnote
    Reviews: Philosophisch-ethische Rezensionen, 09.11.2019 (Jürgen Czogalla), at: https://philosophisch-ethische-rezensionen.de/rezension/Goedert1.html. In: B.I.T. online 23(2020) no.3, pp.345-347 (W. Sühl-Strohmenger) [at: https://www.b-i-t-online.de/heft/2020-03-rezensionen.pdf]. In: Open Password no.805, 14.08.2020 (H.-C. Hobohm) [at: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzE0MywiOGI3NjZkZmNkZjQ1IiwwLDAsMTMxLDFd].
  11. Atanassova, I.; Bertin, M.: Semantic facets for scientific information retrieval (2014) 0.14
    0.13643564 = product of:
      0.22739272 = sum of:
        0.05673526 = weight(_text_:retrieval in 4471) [ClassicSimilarity], result of:
          0.05673526 = score(doc=4471,freq=6.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40520695 = fieldWeight in 4471, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4471)
        0.1515938 = weight(_text_:semantic in 4471) [ClassicSimilarity], result of:
          0.1515938 = score(doc=4471,freq=12.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.78767776 = fieldWeight in 4471, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4471)
        0.019063652 = product of:
          0.038127303 = sum of:
            0.038127303 = weight(_text_:web in 4471) [ClassicSimilarity], result of:
              0.038127303 = score(doc=4471,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.25239927 = fieldWeight in 4471, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4471)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    We present an Information Retrieval System for scientific publications that provides the possibility to filter results according to semantic facets. We use sentence-level semantic annotations that identify specific semantic relations in texts, such as methods, definitions, hypotheses, that correspond to common information needs related to scientific literature. The semantic annotations are obtained using a rule-based method that identifies linguistic clues organized into a linguistic ontology. The system is implemented using Solr Search Server and offers efficient search and navigation in scientific papers.
    Source
    Semantic Web Evaluation Challenge. SemWebEval 2014 at ESWC 2014, Anissaras, Crete, Greece, May 25-29, 2014, Revised Selected Papers. Eds.: V. Presutti et al
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
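    Since the system described above is implemented on Solr, filtering results by a semantic facet would plausibly look like the request below. This is a hedged sketch: the core name, field names and facet values are assumptions for illustration, not the authors' actual schema.

      import requests

      params = {
          "q": "text:ranking",
          "fq": "semantic_facet:hypothesis",   # restrict to sentences annotated as hypotheses
          "facet": "true",
          "facet.field": "semantic_facet",
          "wt": "json",
      }
      resp = requests.get("http://localhost:8983/solr/papers/select", params=params)
      print(resp.json()["response"]["numFound"])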
  12. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.13
    0.13363014 = product of:
      0.22271688 = sum of:
        0.01871778 = weight(_text_:retrieval in 1626) [ClassicSimilarity], result of:
          0.01871778 = score(doc=1626,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.13368362 = fieldWeight in 1626, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=1626)
        0.117290854 = weight(_text_:semantic in 1626) [ClassicSimilarity], result of:
          0.117290854 = score(doc=1626,freq=22.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.60944045 = fieldWeight in 1626, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=1626)
        0.08670825 = sum of:
          0.061623022 = weight(_text_:web in 1626) [ClassicSimilarity], result of:
            0.061623022 = score(doc=1626,freq=16.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.4079388 = fieldWeight in 1626, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
          0.025085226 = weight(_text_:22 in 1626) [ClassicSimilarity], result of:
            0.025085226 = score(doc=1626,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.15476047 = fieldWeight in 1626, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
      0.6 = coord(3/5)
    
    Abstract
    Purpose - The growing volumes of semantic data available in the web result in the need for handling the information overload phenomenon. The potential of this amount of data is enormous but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues.
    Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set. The objective is that the user is capable of getting an idea about the overall structure of the data set. Different information architecture (IA) components supporting the overview tasks have been developed, so they are automatically generated from semantic data, and evaluated with end-users.
    Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that they get easily used to them despite the fact that they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other as they focus on different information search needs.
    Originality/value - Obtaining semantic data sets overviews cannot be easily done with the current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which is typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support to obtain overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users and show that they can be generated automatically from the thesaurus and ontologies that structure semantic data while providing a comparable user experience to traditional web sites.
    Date
    20. 1.2015 18:30:22
    Series
    Special issue: Semantic search
    Theme
    Semantic Web
    Semantisches Umfeld in Indexierung u. Retrieval
  13. Corporate Semantic Web : wie semantische Anwendungen in Unternehmen Nutzen stiften (2015) 0.13
    0.1286838 = product of:
      0.214473 = sum of:
        0.026470939 = weight(_text_:retrieval in 2246) [ClassicSimilarity], result of:
          0.026470939 = score(doc=2246,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.18905719 = fieldWeight in 2246, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2246)
        0.14581165 = weight(_text_:semantic in 2246) [ClassicSimilarity], result of:
          0.14581165 = score(doc=2246,freq=34.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.7576338 = fieldWeight in 2246, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=2246)
        0.0421904 = product of:
          0.0843808 = sum of:
            0.0843808 = weight(_text_:web in 2246) [ClassicSimilarity], result of:
              0.0843808 = score(doc=2246,freq=30.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.5585932 = fieldWeight in 2246, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2246)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Corporate Semantic Web refers to Semantic Web applications that are deployed within a company or organization - commercially or non-commercially - by employees, customers or partners. The authors describe formative experiences from the development of Semantic Web applications and report on software architecture, methodology, technology selection, linked open data sets, licensing questions, etc. Applications from the banking, insurance, telecommunications, media, energy, mechanical engineering, logistics, tourism, toy, library and cultural sectors are presented. The reader thus gains a comprehensive overview of the areas in which the Semantic Web is used, together with concrete implementation guidance for their own projects.
    Content
    Kapitel 1; Corporate Semantic Web; 1.1 Das Semantic Web; 1.2 Semantische Anwendungen im Unternehmenseinsatz; 1.3 Bereitstellen von Linked Data reicht nicht; 1.4 Eine global vernetzte Wissensbasis -- Fiktion oder Realität?; 1.5 Semantik)=)RDF?; 1.6 Richtig vorgehen; 1.7 Modellieren ist einfach (?!); 1.8 Juristische Fragen; 1.9 Semantische Anwendungen stiften Nutzen in Unternehmen -- nachweislich!; 1.10 Fazit; Literatur; Kapitel 2; Einordnung und Abgrenzung des Corporate Semantic Webs; 2.1 Grundlegende Begriffe; 2.2 Corporate Semantic Web; 2.3 Public Semantic Web; 2.4 Social Semantic Web 3.0; 2.5 Pragmatic Web; 2.6 Zusammenfassung und Ausblick "Ubiquitous Pragmatic Web 4.0"; Literatur; Kapitel 3; Marktstudie: Welche Standards und Tools werden in Unternehmen eingesetzt?; 3.1 Einleitung; 3.2 Semantische Suche in Webarchiven (Quantinum AG); 3.2.1 Kundenanforderungen; 3.2.2 Technische Umsetzung; 3.2.3 Erfahrungswerte; 3.3 Semantische Analyse und Suche in Kundenspezifikationen (Ontos AG); 3.3.1 Kundenanforderungen; 3.3.2 Technische Umsetzung; 3.3.3 Erfahrungswerte; 3.4 Sicherheit für Banken im Risikomanagement (VICO Research & Consulting GmbH); 3.4.1 Kundenanforderungen; 3.4.2 Technische Umsetzung; 3.4.3 Erfahrungswerte; 3.5 Interaktive Fahrzeugdiagnose (semafora GmbH); 3.5.1 Kundenanforderungen; 3.5.2 Technische Umsetzung; 3.5.3 Erfahrungswerte; 3.6 Quo Vadis?; 3.7 Umfrage-Ergebnisse; 3.8 Semantic Web Standards & Tools; 3.9 Ausblick; Literatur; Kapitel 4; Modellierung des Sprachraums von Unternehmen; 4.1 Hintergrund; 4.2 Eine Frage der Bedeutung; 4.3 Bedeutung von Begriffen im Unternehmenskontext; 4.3.1 Website-Suche bei einem Industrieunternehmen; 4.3.2 Extranet-Suche bei einem Marktforschungsunternehmen; 4.3.3 Intranet-Suche bei einem Fernsehsender; 4.4 Variabilität unserer Sprache und unseres Sprachgebrauchs; 4.4.1 Konsequenzen des Sprachgebrauchs; 4.5 Terminologiemanagement und Unternehmensthesaurus; 4.5.1 Unternehmensthesaurus; 4.5.2 Mut zur Lücke: Arbeiten mit unvollständigen Terminologien; 4.6 Pragmatischer Aufbau von Unternehmensthesauri; 4.6.1 Begriffsanalyse des Anwendungsbereichs; 4.6.2 Informationsquellen; 4.6.3 Häufigkeitsverteilung; 4.6.4 Aufwand und Nutzen; Literatur; Kapitel 5; Schlendern durch digitale Museen und Bibliotheken; 5.1 Einleitung; 5.2 Anwendungsfall 1: Schlendern durch das Digitale Museum; 5.3 Anwendungsfall 2: Literatur in Bibliotheken finden; 5.4 Herausforderungen; 5.5 Die Anforderungen treiben die Architektur; 5.5.1 Semantic ETL; 5.5.2 Semantic Logic; 5.5.3 Client; 5.6 Diskussion; 5.7 Empfehlungen und Fazit; Literatur; Kapitel 6; Semantische Suche im Bereich der Energieforschungsförderung; 6.1 Das Projekt EnArgus®; 6.2 Die Fachontologie; 6.2.1 Semantische Suche; 6.2.2 Repräsentation der semantischen Relationen in der Fachontologie
    LCSH
    Information storage and retrieval system
    RSWK
    Unternehmen / Semantic Web / Aufsatzsammlung
    Subject
    Unternehmen / Semantic Web / Aufsatzsammlung
    Information storage and retrieval system
    Theme
    Semantic Web
  14. Herb, U.; Beucke, D.: Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.13
    0.12634152 = product of:
      0.3158538 = sum of:
        0.2940668 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.2940668 = score(doc=2188,freq=2.0), product of:
            0.39242527 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04628742 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.021787029 = product of:
          0.043574058 = sum of:
            0.043574058 = weight(_text_:web in 2188) [ClassicSimilarity], result of:
              0.043574058 = score(doc=2188,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.2884563 = fieldWeight in 2188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2188)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Cf.: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/.
  15. Marcondes, C.H.; Costa, L.C. da: A model to represent and process scientific knowledge in biomedical articles with semantic Web technologies (2016) 0.12
    0.12484091 = product of:
      0.20806818 = sum of:
        0.023397226 = weight(_text_:retrieval in 2829) [ClassicSimilarity], result of:
          0.023397226 = score(doc=2829,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.16710453 = fieldWeight in 2829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2829)
        0.09884685 = weight(_text_:semantic in 2829) [ClassicSimilarity], result of:
          0.09884685 = score(doc=2829,freq=10.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.51360583 = fieldWeight in 2829, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2829)
        0.0858241 = sum of:
          0.054467574 = weight(_text_:web in 2829) [ClassicSimilarity], result of:
            0.054467574 = score(doc=2829,freq=8.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.36057037 = fieldWeight in 2829, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2829)
          0.031356532 = weight(_text_:22 in 2829) [ClassicSimilarity], result of:
            0.031356532 = score(doc=2829,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.19345059 = fieldWeight in 2829, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2829)
      0.6 = coord(3/5)
    
    Abstract
    Knowledge organization faces the challenge of managing the amount of knowledge available on the Web. Published literature in biomedical sciences is a huge source of knowledge, which can only efficiently be managed through automatic methods. The conventional channel for reporting scientific results is Web electronic publishing. Despite its advances, scientific articles are still published in print formats such as portable document format (PDF). Semantic Web and Linked Data technologies provides new opportunities for communicating, sharing, and integrating scientific knowledge that can overcome the limitations of the current print format. Here is proposed a semantic model of scholarly electronic articles in biomedical sciences that can overcome the limitations of traditional flat records formats. Scientific knowledge consists of claims made throughout article texts, especially when semantic elements such as questions, hypotheses and conclusions are stated. These elements, although having different roles, express relationships between phenomena. Once such knowledge units are extracted and represented with technologies such as RDF (Resource Description Framework) and linked data, they may be integrated in reasoning chains. Thereby, the results of scientific research can be published and shared in structured formats, enabling crawling by software agents, semantic retrieval, knowledge reuse, validation of scientific results, and identification of traces of scientific discoveries.
    Date
    12. 3.2016 13:17:22
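    One of the "knowledge units" described in the abstract - say, a conclusion relating two phenomena - could be expressed in RDF roughly as below, using rdflib. The vocabulary URIs are placeholders invented for illustration, not the authors' published schema.

      from rdflib import Graph, Namespace, URIRef, Literal

      EX = Namespace("http://example.org/schema/")
      g = Graph()
      claim = URIRef("http://example.org/article/123#conclusion-1")

      g.add((claim, EX.rhetoricalRole, Literal("conclusion")))
      g.add((claim, EX.relates, URIRef("http://example.org/phenomenon/geneX")))
      g.add((claim, EX.relates, URIRef("http://example.org/phenomenon/diseaseY")))

      print(g.serialize(format="turtle"))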
  16. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.12
    0.12296955 = product of:
      0.20494924 = sum of:
        0.026470939 = weight(_text_:retrieval in 168) [ClassicSimilarity], result of:
          0.026470939 = score(doc=168,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.18905719 = fieldWeight in 168, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.100025974 = weight(_text_:semantic in 168) [ClassicSimilarity], result of:
          0.100025974 = score(doc=168,freq=16.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.51973253 = fieldWeight in 168, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.078452334 = sum of:
          0.053367104 = weight(_text_:web in 168) [ClassicSimilarity], result of:
            0.053367104 = score(doc=168,freq=12.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.35328537 = fieldWeight in 168, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
          0.025085226 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
            0.025085226 = score(doc=168,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.15476047 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
      0.6 = coord(3/5)
    
    Abstract
    Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20. 6.2012 19:08:22
    LCSH
    Ontologies (Information retrieval)
    Semantic integration (Computer systems)
    World wide web
    RSWK
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Subject
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Ontologies (Information retrieval)
    Semantic integration (Computer systems)
    World wide web
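    The simplest family of matching techniques surveyed in the book proposes correspondences by comparing entity labels with a string similarity; real matchers combine terminological, structural and semantic evidence. The label lists and the 0.7 threshold below are illustrative assumptions.

      from difflib import SequenceMatcher

      onto_a = ["Author", "Paper", "Organisation"]
      onto_b = ["Writer", "Article", "Organization"]

      def label_similarity(a, b):
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      # propose a correspondence when the best label match is similar enough
      for a in onto_a:
          best = max(onto_b, key=lambda b: label_similarity(a, b))
          if label_similarity(a, best) >= 0.7:
              print(f"{a} = {best}  ({label_similarity(a, best):.2f})")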
  17. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.12
    0.122109555 = product of:
      0.20351592 = sum of:
        0.03743556 = weight(_text_:retrieval in 3926) [ClassicSimilarity], result of:
          0.03743556 = score(doc=3926,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.26736724 = fieldWeight in 3926, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
        0.1225063 = weight(_text_:semantic in 3926) [ClassicSimilarity], result of:
          0.1225063 = score(doc=3926,freq=6.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.63653976 = fieldWeight in 3926, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
        0.043574058 = product of:
          0.087148115 = sum of:
            0.087148115 = weight(_text_:web in 3926) [ClassicSimilarity], result of:
              0.087148115 = score(doc=3926,freq=8.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.5769126 = fieldWeight in 3926, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3926)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Modern information retrieval systems advance user experience on the basis of concept-based rather than keyword-based query answering.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
  18. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.12
    0.12196996 = product of:
      0.20328327 = sum of:
        0.033088673 = weight(_text_:retrieval in 3934) [ClassicSimilarity], result of:
          0.033088673 = score(doc=3934,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.23632148 = fieldWeight in 3934, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3934)
        0.12503247 = weight(_text_:semantic in 3934) [ClassicSimilarity], result of:
          0.12503247 = score(doc=3934,freq=16.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.64966565 = fieldWeight in 3934, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3934)
        0.04516213 = product of:
          0.09032426 = sum of:
            0.09032426 = weight(_text_:web in 3934) [ClassicSimilarity], result of:
              0.09032426 = score(doc=3934,freq=22.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.59793836 = fieldWeight in 3934, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3934)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    This volume contains the lecture notes of the 13th Reasoning Web Summer School, RW 2017, held in London, UK, in July 2017. In 2017, the theme of the school was "Semantic Interoperability on the Web", which encompasses subjects such as data integration, open data management, reasoning over linked data, database to ontology mapping, query answering over ontologies, hybrid reasoning with rules and ontologies, and ontology-based dynamic systems. The papers of this volume focus on these topics and also address foundational reasoning techniques used in answer set programming and ontologies.
    Content
    Neumaier, Sebastian (et al.): Data Integration for Open Data on the Web - Stamou, Giorgos (et al.): Ontological Query Answering over Semantic Data - Calì, Andrea: Ontology Querying: Datalog Strikes Back - Sequeda, Juan F.: Integrating Relational Databases with the Semantic Web: A Reflection - Rousset, Marie-Christine (et al.): Datalog Revisited for Reasoning in Linked Data - Kaminski, Roland (et al.): A Tutorial on Hybrid Answer Set Solving with clingo - Eiter, Thomas (et al.): Answer Set Programming with External Source Access - Lukasiewicz, Thomas: Uncertainty Reasoning for the Semantic Web - Calvanese, Diego (et al.): OBDA for Log Extraction in Process Mining
    LCSH
    Information storage and retrieval
    RSWK
    Ontologie <Wissensverarbeitung> / Semantic Web
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Subject
    Ontologie <Wissensverarbeitung> / Semantic Web
    Information storage and retrieval
    Theme
    Semantic Web
  19. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.12
    0.1214587 = product of:
      0.20243116 = sum of:
        0.056153342 = weight(_text_:retrieval in 3829) [ClassicSimilarity], result of:
          0.056153342 = score(doc=3829,freq=8.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40105087 = fieldWeight in 3829, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
        0.12993754 = weight(_text_:semantic in 3829) [ClassicSimilarity], result of:
          0.12993754 = score(doc=3829,freq=12.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.67515236 = fieldWeight in 3829, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
        0.01634027 = product of:
          0.03268054 = sum of:
            0.03268054 = weight(_text_:web in 3829) [ClassicSimilarity], result of:
              0.03268054 = score(doc=3829,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.21634221 = fieldWeight in 3829, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies, and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
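    The following minimal sketch illustrates the general idea of keyword-based semantic retrieval over a semantic index: documents are indexed by the ontology concepts they mention, and query keywords are first mapped to concepts before lookup, so a query can match a document that uses a different surface form. The concept lexicon, concept names and documents are toy assumptions, not the thesis implementation.

    ```python
    # Hedged sketch of semantic indexing: map text to ontology concepts and
    # retrieve by shared concepts rather than shared keywords.

    concept_lexicon = {            # surface form -> ontology concept
        "goal": "soccer:Goal",
        "scored": "soccer:ScoringEvent",
        "keeper": "soccer:Goalkeeper",
        "goalkeeper": "soccer:Goalkeeper",
    }

    documents = {
        "d1": "the keeper saved the penalty",
        "d2": "a late goal was scored in extra time",
    }

    # Build the semantic index: concept -> set of documents mentioning it
    semantic_index = {}
    for doc_id, text in documents.items():
        for token in text.lower().split():
            concept = concept_lexicon.get(token)
            if concept:
                semantic_index.setdefault(concept, set()).add(doc_id)

    def semantic_search(query):
        """Map query keywords to concepts, then look them up in the index."""
        hits = set()
        for token in query.lower().split():
            concept = concept_lexicon.get(token)
            if concept:
                hits |= semantic_index.get(concept, set())
        return sorted(hits)

    print(semantic_search("goalkeeper"))   # ['d1']: matched via the shared concept
    ```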
    Theme
    Semantic Web
  20. Mahesh, K.: Highly expressive tagging for knowledge organization in the 21st century (2014) 0.12
    0.12046255 = product of:
      0.20077091 = sum of:
        0.023397226 = weight(_text_:retrieval in 1434) [ClassicSimilarity], result of:
          0.023397226 = score(doc=1434,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.16710453 = fieldWeight in 1434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1434)
        0.09884685 = weight(_text_:semantic in 1434) [ClassicSimilarity], result of:
          0.09884685 = score(doc=1434,freq=10.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.51360583 = fieldWeight in 1434, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1434)
        0.07852683 = sum of:
          0.0471703 = weight(_text_:web in 1434) [ClassicSimilarity], result of:
            0.0471703 = score(doc=1434,freq=6.0), product of:
              0.15105948 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.04628742 = queryNorm
              0.3122631 = fieldWeight in 1434, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1434)
          0.031356532 = weight(_text_:22 in 1434) [ClassicSimilarity], result of:
            0.031356532 = score(doc=1434,freq=2.0), product of:
              0.16209066 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04628742 = queryNorm
              0.19345059 = fieldWeight in 1434, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1434)
      0.6 = coord(3/5)
    
    Abstract
    Knowledge organization of large-scale content on the Web requires substantial amounts of semantic metadata that is expensive to generate manually. Recent developments in Web technologies have enabled any user to tag documents and other forms of content, thereby generating metadata that could help organize knowledge. However, merely adding one or more tags to a document is highly inadequate to capture the aboutness of the document and thereby to support powerful semantic functions such as automatic classification, question answering or true semantic search and retrieval. This is true even when the tags used are labels from a well-designed classification system such as a thesaurus or taxonomy. There is a strong need to develop a semantic tagging mechanism with sufficient expressive power to capture the aboutness of each part of a document, dataset or multimedia content in order to enable applications that can benefit from knowledge organization on the Web. This article proposes a highly expressive mechanism of using ontology snippets as semantic tags that map portions of a document, parts of a dataset or segments of multimedia content to concepts and relations in an ontology of the domain(s) of interest.
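    As a rough illustration of the difference between a flat tag and an ontology-snippet tag, the sketch below models a tag as a small set of concept-and-relation triples attached to a content segment. The class, namespace and relation names are illustrative assumptions and do not reproduce the author's notation.

    ```python
    # Hedged sketch of an "ontology snippet" used as a semantic tag: instead of
    # a flat label, the tag is a tiny graph describing what the segment is about.
    from dataclasses import dataclass, field

    @dataclass
    class OntologySnippetTag:
        target: str                                   # document or segment being tagged
        triples: list = field(default_factory=list)   # (subject, relation, object)

        def add(self, subject, relation, obj):
            self.triples.append((subject, relation, obj))

    # A flat tag would just say "soccer"; the snippet can say *what about* soccer.
    tag = OntologySnippetTag(target="news-article-42#paragraph-3")
    tag.add("ex:Final2014", "rdf:type", "soccer:Match")
    tag.add("ex:Goetze", "soccer:scoredIn", "ex:Final2014")
    tag.add("ex:Goetze", "rdf:type", "soccer:Player")

    for triple in tag.triples:
        print(triple)
    ```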
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik

Types

  • a 2033
  • el 235
  • m 194
  • s 70
  • x 49
  • r 19
  • b 5
  • n 2
  • i 1
  • p 1
  • z 1