Search (405 results, page 1 of 21)

  • theme_ss:"Wissensrepräsentation"
  1. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.06
    0.06409827 = product of:
      0.0961474 = sum of:
        0.014974909 = weight(_text_:information in 3387) [ClassicSimilarity], result of:
          0.014974909 = score(doc=3387,freq=4.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.16457605 = fieldWeight in 3387, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3387)
        0.081172496 = sum of:
          0.039036963 = weight(_text_:management in 3387) [ClassicSimilarity], result of:
            0.039036963 = score(doc=3387,freq=2.0), product of:
              0.17470726 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0518325 = queryNorm
              0.22344214 = fieldWeight in 3387, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.046875 = fieldNorm(doc=3387)
          0.04213553 = weight(_text_:22 in 3387) [ClassicSimilarity], result of:
            0.04213553 = score(doc=3387,freq=2.0), product of:
              0.18150859 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0518325 = queryNorm
              0.23214069 = fieldWeight in 3387, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3387)
      0.6666667 = coord(2/3)
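    Note on the score breakdowns: the indented tree above each score is Lucene "explain" output for the classic TF-IDF similarity. As a minimal, hedged sketch (assuming the usual ClassicSimilarity conventions: tf = sqrt(freq), fieldWeight = tf * idf * fieldNorm, per-term score = queryWeight * fieldWeight with queryWeight = idf * queryNorm, and a final coord(matched clauses / total clauses) factor), the following Python snippet reproduces the 0.06409827 shown for this hit:

      import math

      QUERY_NORM = 0.0518325   # queryNorm shared by all query terms above
      FIELD_NORM = 0.046875    # fieldNorm(doc=3387)

      def term_score(freq, idf):
          query_weight = idf * QUERY_NORM                     # queryWeight = idf * queryNorm
          field_weight = math.sqrt(freq) * idf * FIELD_NORM   # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight

      terms = {"information": (4.0, 1.7554779),
               "management":  (2.0, 3.3706124),
               "22":          (2.0, 3.5018296)}

      raw = sum(term_score(freq, idf) for freq, idf in terms.values())   # ~0.0961474
      print(raw * 2 / 3)   # coord(2/3) -> ~0.06409827

    The same arithmetic, with the freq, idf, fieldNorm and coord values shown in each tree, accounts for every score in this result list.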
    
    Abstract
    Libraries are the tools we use to learn and to answer our questions. The quality of our work depends, among other things, on the quality of the tools we use. Recent research in digital libraries focuses, on the one hand, on improving the infrastructure of digital library management systems (DLMS) and, on the other, on improving the metadata models used to annotate the collections of objects maintained by a DLMS. The latter includes, among others, semantic web and social networking technologies, which are now being introduced to the digital libraries domain. The expected outcome is that the overall quality of information discovery in digital libraries can be improved by employing social and semantic technologies. In this chapter we present the results of an evaluation of social and semantic end-user information discovery services for digital libraries.
    Date
    1. 8.2010 12:35:22
  2. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.06
    0.061941743 = product of:
      0.092912614 = sum of:
        0.08232375 = product of:
          0.24697125 = sum of:
            0.24697125 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.24697125 = score(doc=400,freq=2.0), product of:
                0.43943653 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0518325 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.01058886 = weight(_text_:information in 400) [ClassicSimilarity], result of:
          0.01058886 = score(doc=400,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.116372846 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.6666667 = coord(2/3)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; the faceted relations, however, are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
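    As a toy illustration of the conflict-resolution idea in this abstract (keeping the growing hierarchy acyclic), and not the authors' algorithm, the sketch below rejects any inferred parent-child link that would close a cycle; the candidate links are hypothetical:

      parents = {}   # child concept -> parent concept (a growing tree/forest)

      def has_ancestor(node, target):
          """True if `target` already appears among `node`'s ancestors."""
          while node in parents:
              node = parents[node]
              if node == target:
                  return True
          return False

      def add_link(parent, child):
          """Accept parent -> child unless it would create a cycle."""
          if parent == child or has_ancestor(parent, child):
              return False   # conflict: child is already an ancestor of parent
          parents[child] = parent
          return True

      for parent, child in [("classification", "svm"),
                            ("classification", "face recognition"),
                            ("svm", "classification")]:   # last link conflicts, is rejected
          print(parent, "->", child, add_link(parent, child))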
    Content
    Vgl.: https://aclanthology.org/D19-5317.pdf.
  3. Kiren, T.; Shoaib, M.: ¬A novel ontology matching approach using key concepts (2016) 0.05
    0.050978534 = product of:
      0.0764678 = sum of:
        0.0088240495 = weight(_text_:information in 2589) [ClassicSimilarity], result of:
          0.0088240495 = score(doc=2589,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.09697737 = fieldWeight in 2589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2589)
        0.06764375 = sum of:
          0.032530803 = weight(_text_:management in 2589) [ClassicSimilarity], result of:
            0.032530803 = score(doc=2589,freq=2.0), product of:
              0.17470726 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0518325 = queryNorm
              0.18620178 = fieldWeight in 2589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2589)
          0.035112944 = weight(_text_:22 in 2589) [ClassicSimilarity], result of:
            0.035112944 = score(doc=2589,freq=2.0), product of:
              0.18150859 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0518325 = queryNorm
              0.19345059 = fieldWeight in 2589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2589)
      0.6666667 = coord(2/3)
    
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 68(2016) no.1, S.99-111
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.05
    0.050706815 = product of:
      0.07606022 = sum of:
        0.054882504 = product of:
          0.1646475 = sum of:
            0.1646475 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.1646475 = score(doc=701,freq=2.0), product of:
                0.43943653 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0518325 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.021177718 = weight(_text_:information in 701) [ClassicSimilarity], result of:
          0.021177718 = score(doc=701,freq=18.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.23274568 = fieldWeight in 701, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.6666667 = coord(2/3)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, rather than its representation). This leads to a very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, due to unfamiliarity with the underlying repository and/or query syntax, only approximates his information need in a query, implies the necessity to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that conceptualizing a user's information need in the right manner and interpreting the retrieval results accordingly are key issues for realizing much more meaningful information retrieval systems.
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  5. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.05
    0.049899366 = product of:
      0.07484905 = sum of:
        0.054882504 = product of:
          0.1646475 = sum of:
            0.1646475 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.1646475 = score(doc=5820,freq=2.0), product of:
                0.43943653 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0518325 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.019966545 = weight(_text_:information in 5820) [ClassicSimilarity], result of:
          0.019966545 = score(doc=5820,freq=16.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.21943474 = fieldWeight in 5820, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.6666667 = coord(2/3)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts and to rank documents using their structure representations. This dissertation overcomes the limitation of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  6. Semantic applications (2018) 0.05
    0.047771662 = product of:
      0.07165749 = sum of:
        0.031815562 = weight(_text_:information in 5204) [ClassicSimilarity], result of:
          0.031815562 = score(doc=5204,freq=26.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.34965688 = fieldWeight in 5204, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
        0.039841935 = product of:
          0.07968387 = sum of:
            0.07968387 = weight(_text_:management in 5204) [ClassicSimilarity], result of:
              0.07968387 = score(doc=5204,freq=12.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.45609936 = fieldWeight in 5204, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5204)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Introduction.- Ontology Development.- Compliance using Metadata.- Variety Management for Big Data.- Text Mining in Economics.- Generation of Natural Language Texts.- Sentiment Analysis.- Building Concise Text Corpora from Web Contents.- Ontology-Based Modelling of Web Content.- Personalized Clinical Decision Support for Cancer Care.- Applications of Temporal Conceptual Semantic Systems.- Context-Aware Documentation in the Smart Factory.- Knowledge-Based Production Planning for Industry 4.0.- Information Exchange in Jurisdiction.- Supporting Automated License Clearing.- Managing cultural assets: Implementing typical cultural heritage archive's usage scenarios via Semantic Web technologies.- Semantic Applications for Process Management.- Domain-Specific Semantic Search Applications.
    LCSH
    Information storage and retrieval
    Management information systems
    Information Systems Applications (incl. Internet)
    Management of Computing and Information Systems
    Information Storage and Retrieval
    RSWK
    Information Retrieval
    Subject
    Information Retrieval
    Information storage and retrieval
    Management information systems
    Information Systems Applications (incl. Internet)
    Management of Computing and Information Systems
    Information Storage and Retrieval
  7. Information and communication technologies : international conference; proceedings / ICT 2010, Kochi, Kerala, India, September 7 - 9, 2010 (2010) 0.04
    0.043259062 = product of:
      0.06488859 = sum of:
        0.032684736 = weight(_text_:information in 4784) [ClassicSimilarity], result of:
          0.032684736 = score(doc=4784,freq=14.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.3592092 = fieldWeight in 4784, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4784)
        0.032203853 = product of:
          0.064407706 = sum of:
            0.064407706 = weight(_text_:management in 4784) [ClassicSimilarity], result of:
              0.064407706 = score(doc=4784,freq=4.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.36866072 = fieldWeight in 4784, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4784)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This book constitutes the proceedings of the International Conference on Information and Communication Technologies held in Kochi, Kerala, India in September 2010.
    LCSH
    Database management
    Information storage and retrieval systems
    Information systems
    Series
    Communications in computer and information science; vol.101
    Subject
    Database management
    Information storage and retrieval systems
    Information systems
  8. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.04
    0.04321423 = product of:
      0.06482135 = sum of:
        0.017470727 = weight(_text_:information in 1633) [ClassicSimilarity], result of:
          0.017470727 = score(doc=1633,freq=16.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.1920054 = fieldWeight in 1633, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1633)
        0.047350623 = sum of:
          0.022771563 = weight(_text_:management in 1633) [ClassicSimilarity], result of:
            0.022771563 = score(doc=1633,freq=2.0), product of:
              0.17470726 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0518325 = queryNorm
              0.13034125 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
          0.02457906 = weight(_text_:22 in 1633) [ClassicSimilarity], result of:
            0.02457906 = score(doc=1633,freq=2.0), product of:
              0.18150859 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0518325 = queryNorm
              0.1354154 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - The purpose of this paper is to improve concept-based search by incorporating structural ontological information such as concepts and relations. Generally, semantic information retrieval aims to identify relevant information based on the meanings of the query terms or on the context of the terms, and its performance is measured through the standard measures of precision and recall. Higher precision means more (meaningful) relevant documents are obtained, while lower recall means less coverage of the concepts. Design/methodology/approach - In this paper, the authors enhance the existing ontology-based indexing proposed by Kohler et al. by adding sibling information to the index. The index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, the approach focuses on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of ontology-based search. Both tasks make use of ontological concepts and the relations existing between those concepts so as to obtain semantically more relevant search results for a given query. Findings - The proposed ontology-based indexing technique is investigated by analysing the coverage of concepts populated in the index. A new measure, the index enhancement measure, is introduced to estimate the coverage of the ontological concepts being indexed. The ontology-based search is evaluated for the tourism domain with tourism documents and a tourism-specific ontology. Search results obtained with and without query expansion are compared to estimate the efficiency of the proposed query expansion task, and the ranking is compared with the ORank system to evaluate the performance of the ontology-based search. In these analyses, the ontology-based search shows better recall than the other concept-based search systems: its mean average precision is 0.79 and its recall 0.65, the ORank system has a mean average precision of 0.62 and a recall of 0.51, while the concept-based search has a mean average precision of 0.56 and a recall of 0.42. Practical implications - When a concept is not present in the domain-specific ontology, it cannot be indexed; when a given query term is not available in the ontology, term-based results are retrieved. Originality/value - In addition to super- and sub-concepts, the concepts on the same level (siblings) are incorporated into the ontological index. The structural information from the ontology is used for query expansion. The ranking of the documents depends on the type of the query (single-concept queries, multiple-concept queries and concept-with-relation queries) and on the ontological relations that exist in the query and the documents. With this ontological structural information, the search results show better coverage of concepts with respect to the query.
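    A minimal sketch of the sibling-aware query expansion described in this abstract (illustrative only; the tiny tourism ontology and helper names below are hypothetical, not taken from the paper):

      # Toy ontology: parent concept -> child concepts (hypothetical data).
      children = {"accommodation": ["hotel", "hostel", "guest house"],
                  "attraction": ["museum", "beach"]}
      parent_of = {c: p for p, cs in children.items() for c in cs}

      def expand(term):
          """Expand a query term with its super-concept, sub-concepts and siblings."""
          expanded = {term}
          if term in parent_of:
              p = parent_of[term]
              expanded.add(p)                      # super-concept
              expanded.update(children[p])         # siblings (same level)
          expanded.update(children.get(term, []))  # sub-concepts
          return sorted(expanded)

      print(expand("hotel"))   # ['accommodation', 'guest house', 'hostel', 'hotel']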
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 66(2014) no.6, S.678-696
  9. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.04
    0.04273218 = product of:
      0.06409827 = sum of:
        0.009983272 = weight(_text_:information in 1634) [ClassicSimilarity], result of:
          0.009983272 = score(doc=1634,freq=4.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.10971737 = fieldWeight in 1634, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1634)
        0.054114997 = sum of:
          0.026024643 = weight(_text_:management in 1634) [ClassicSimilarity], result of:
            0.026024643 = score(doc=1634,freq=2.0), product of:
              0.17470726 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0518325 = queryNorm
              0.14896142 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
          0.028090354 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
            0.028090354 = score(doc=1634,freq=2.0), product of:
              0.18150859 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0518325 = queryNorm
              0.15476047 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Evidently, matching individual entities cannot result in full integration of the ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules and on lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 66(2014) no.5, S.494-518
  10. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.04
    0.04273218 = product of:
      0.06409827 = sum of:
        0.009983272 = weight(_text_:information in 179) [ClassicSimilarity], result of:
          0.009983272 = score(doc=179,freq=4.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.10971737 = fieldWeight in 179, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=179)
        0.054114997 = sum of:
          0.026024643 = weight(_text_:management in 179) [ClassicSimilarity], result of:
            0.026024643 = score(doc=179,freq=2.0), product of:
              0.17470726 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0518325 = queryNorm
              0.14896142 = fieldWeight in 179, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
          0.028090354 = weight(_text_:22 in 179) [ClassicSimilarity], result of:
            0.028090354 = score(doc=179,freq=2.0), product of:
              0.18150859 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0518325 = queryNorm
              0.15476047 = fieldWeight in 179, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=179)
      0.6666667 = coord(2/3)
    
    Date
    20. 1.2015 18:30:22
    Footnote
    Beitrag in einem Special Issue: Showcasing Doctoral Research in Information Science.
    Source
    Aslib journal of information management. 72(2020) no.4, S.671-685
  11. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.04
    0.038499102 = product of:
      0.057748653 = sum of:
        0.03668089 = weight(_text_:information in 987) [ClassicSimilarity], result of:
          0.03668089 = score(doc=987,freq=24.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.40312737 = fieldWeight in 987, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=987)
        0.021067765 = product of:
          0.04213553 = sum of:
            0.04213553 = weight(_text_:22 in 987) [ClassicSimilarity], result of:
              0.04213553 = score(doc=987,freq=2.0), product of:
                0.18150859 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0518325 = queryNorm
                0.23214069 = fieldWeight in 987, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=987)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This book covers the basics of semantic web technologies and indexing languages, and describes their contribution to improving these languages as tools for subject queries and knowledge exploration. The book is relevant to information scientists, knowledge workers and indexers. It provides a suitable combination of theoretical foundations and practical applications.
    Content
    Introduction: envisioning semantic information spaces -- Indexing and knowledge organization -- Semantic technologies for knowledge representation -- Information retrieval and knowledge exploration -- Approaches to handle heterogeneity -- Problems with establishing semantic interoperability -- Formalization in indexing languages -- Typification of semantic relations -- Inferences in retrieval processes -- Semantic interoperability and inferences -- Remaining research questions.
    Date
    23. 7.2017 13:49:22
    LCSH
    Information retrieval
    Knowledge representation (Information theory)
    Information organization
    RSWK
    Information Retrieval
    Subject
    Information retrieval
    Knowledge representation (Information theory)
    Information organization
    Information Retrieval
  12. Semantic technologies in content management systems : trends, applications and evaluations (2012) 0.04
    0.03606396 = product of:
      0.05409594 = sum of:
        0.017291535 = weight(_text_:information in 4893) [ClassicSimilarity], result of:
          0.017291535 = score(doc=4893,freq=12.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.19003606 = fieldWeight in 4893, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4893)
        0.036804404 = product of:
          0.07360881 = sum of:
            0.07360881 = weight(_text_:management in 4893) [ClassicSimilarity], result of:
              0.07360881 = score(doc=4893,freq=16.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.42132655 = fieldWeight in 4893, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4893)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Content Management Systems (CMSs) are used in almost every industry by millions of end-user organizations. In contrast to the 1990s, they are no longer used as isolated applications in one organization but support critical core operations in business ecosystems. Content management today is more interactive and more integrative: interactive because end-users are increasingly content creators themselves, and integrative because content elements can be embedded into various other applications. The authors of this book investigate how Semantic Technologies can increase the interactivity and integration capabilities of CMSs and discuss their business value to millions of end-user organizations. The objective of this book is therefore to reflect on existing applications as well as to discuss and present new applications for CMSs that use Semantic Technologies. An evaluation of 27 CMSs concludes the book and provides a basis for IT executives who plan to adopt or replace a CMS in the near future.
    Content
    On the Changing Market for Content Management Systems: Status and Outlook - Wolfgang Maass -- Empowering the Distributed Editorial Workforce - Steve McNally -- The Rise of Semantic-aware Applications - Stéphane Croisier -- Simplified Semantic Enhancement of JCR-based Content Applications - Bertrand Delacretaz and Michael Marth -- Dynamic Semantic Publishing - Jem Rayfield -- Semantics in the Domain of eGovernment - Luis Alvarez Sabucedo and Luis Anido Rifón -- The Interactive Knowledge Stack (IKS): A Vision for the Future of CMS - Wernher Behrendt -- Essential Requirements for Semantic CMS - Valentina Presutti -- Evaluation of Content Management Systems - Tobias Kowatsch and Wolfgang Maass -- CMS with No Particular Industry Focus (various contributions)
    LCSH
    Information storage and retrieval systems
    Information Systems
    Management information systems
    Subject
    Information storage and retrieval systems
    Information Systems
    Management information systems
    Theme
    Content Management System
  13. Pepper, S.: ¬The TAO of topic maps : finding the way in the age of infoglut (2002) 0.04
    0.035734028 = product of:
      0.053601038 = sum of:
        0.021397185 = weight(_text_:information in 4724) [ClassicSimilarity], result of:
          0.021397185 = score(doc=4724,freq=6.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.23515764 = fieldWeight in 4724, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4724)
        0.032203853 = product of:
          0.064407706 = sum of:
            0.064407706 = weight(_text_:management in 4724) [ClassicSimilarity], result of:
              0.064407706 = score(doc=4724,freq=4.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.36866072 = fieldWeight in 4724, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4724)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Topic maps are a new ISO standard for describing knowledge structures and associating them with information resources. As such they constitute an enabling technology for knowledge management. Dubbed "the GPS of the information universe", topic maps are also destined to provide powerful new ways of navigating large and interconnected corpora. While it is possible to represent immensely complex structures using topic maps, the basic concepts of the model - Topics, Associations, and Occurrences (TAO) - are easily grasped. This paper provides a non-technical introduction to these and other concepts (the IFS and BUTS of topic maps), relating them to things that are familiar to all of us from the realms of publishing and information management, and attempting to convey some idea of the uses to which topic maps will be put in the future.
  14. Pepper, S.: Topic maps (2009) 0.03
    0.033596806 = product of:
      0.05039521 = sum of:
        0.027623646 = weight(_text_:information in 3149) [ClassicSimilarity], result of:
          0.027623646 = score(doc=3149,freq=10.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.3035872 = fieldWeight in 3149, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3149)
        0.022771563 = product of:
          0.045543127 = sum of:
            0.045543127 = weight(_text_:management in 3149) [ClassicSimilarity], result of:
              0.045543127 = score(doc=3149,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.2606825 = fieldWeight in 3149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3149)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Topic Maps is an international standard technology for describing knowledge structures and using them to improve the findability of information. It is based on a formal model that subsumes those of traditional finding aids such as indexes, glossaries, and thesauri, and extends them to cater for the additional complexities of digital information. Topic Maps is increasingly used in enterprise information integration, knowledge management, e-learning, and digital libraries, and as the foundation for Web-based information delivery solutions. This entry provides a comprehensive treatment of the core concepts, as well as describing the background and current status of the standard and its relationship to traditional knowledge organization techniques.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  15. Jimeno-Yepes, A.; Berlanga Llavori, R.; Rebholz-Schuhmann, D.: Ontology refinement for improved information retrieval (2010) 0.03
    0.033596806 = product of:
      0.05039521 = sum of:
        0.027623646 = weight(_text_:information in 4234) [ClassicSimilarity], result of:
          0.027623646 = score(doc=4234,freq=10.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.3035872 = fieldWeight in 4234, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4234)
        0.022771563 = product of:
          0.045543127 = sum of:
            0.045543127 = weight(_text_:management in 4234) [ClassicSimilarity], result of:
              0.045543127 = score(doc=4234,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.2606825 = fieldWeight in 4234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4234)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Ontologies are frequently used in information retrieval, their main applications being the expansion of queries, the semantic indexing of documents and the organization of search results. Ontologies provide lexical items, allow conceptual normalization and provide different types of relations. However, how to optimize an ontology for information retrieval tasks is still unclear. In this paper, we use an ontology query model to analyze the usefulness of ontologies in effectively performing document searches. Moreover, we propose an algorithm to refine ontologies for information retrieval tasks, with preliminary positive results.
    Source
    Information processing and management. 46(2010) no.4, S.426-435
  16. Davies, J.; Duke, A.; Stonkus, A.: OntoShare: evolving ontologies in a knowledge sharing system (2004) 0.03
    0.032436304 = product of:
      0.048654452 = sum of:
        0.018530503 = weight(_text_:information in 4409) [ClassicSimilarity], result of:
          0.018530503 = score(doc=4409,freq=18.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.20365247 = fieldWeight in 4409, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4409)
        0.030123947 = product of:
          0.060247894 = sum of:
            0.060247894 = weight(_text_:management in 4409) [ClassicSimilarity], result of:
              0.060247894 = score(doc=4409,freq=14.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.34485054 = fieldWeight in 4409, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4409)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We saw in the introduction how the Semantic Web makes possible a new generation of knowledge management tools. We now turn our attention more specifically to Semantic Web based support for virtual communities of practice. The notion of communities of practice has attracted much attention in the field of knowledge management. Communities of practice are groups within (or sometimes across) organizations who share a common set of information needs or problems. They are typically not a formal organizational unit but an informal network, each sharing in part a common agenda and shared interests or issues. In one example it was found that a lot of knowledge sharing among copier engineers took place through informal exchanges, often around a water cooler. As well as local, geographically based communities, trends towards flexible working and globalisation have led to interest in supporting dispersed communities using Internet technology. The challenge for organizations is to support such communities and make them effective. Provided with an ontology meeting the needs of a particular community of practice, knowledge management tools can arrange knowledge assets into the predefined conceptual classes of the ontology, allowing more natural and intuitive access to knowledge. Knowledge management tools must give users the ability to organize information into a controllable asset. Building an intranet-based store of information is not sufficient for knowledge management; the relationships within the stored information are vital. These relationships cover such diverse issues as relative importance, context, sequence, significance, causality and association. The potential for knowledge management tools is vast; not only can they make better use of the raw information already available, but they can sift, abstract and help to share new information, and present it to users in new and compelling ways.
    In this chapter, we describe the OntoShare system which facilitates and encourages the sharing of information between communities of practice within (or perhaps across) organizations and which encourages people - who may not previously have known of each other's existence in a large organization - to make contact where there are mutual concerns or interests. As users contribute information to the community, a knowledge resource annotated with meta-data is created. Ontologies defined using the resource description framework (RDF) and RDF Schema (RDFS) are used in this process. RDF is a W3C recommendation for the formulation of meta-data for WWW resources. RDF(S) extends this standard with the means to specify domain vocabulary and object structures - that is, concepts and the relationships that hold between them. In the next section, we describe in detail the way in which OntoShare can be used to share and retrieve knowledge and how that knowledge is represented in an RDF-based ontology. We then proceed to discuss in Section 10.3 how the ontologies in OntoShare evolve over time based on user interaction with the system and motivate our approach to user-based creation of RDF-annotated information resources. The way in which OntoShare can help to locate expertise within an organization is then described, followed by a discussion of the sociotechnical issues of deploying such a tool. Finally, a planned evaluation exercise and avenues for further research are outlined.
    Source
    Towards the semantic Web: ontology-driven knowledge management. Eds.: J. Davies, u.a
  17. Stuckenschmidt, H.; Harmelen, F. van: Information sharing on the semantic web (2005) 0.03
    0.031973958 = product of:
      0.047960933 = sum of:
        0.02495818 = weight(_text_:information in 2789) [ClassicSimilarity], result of:
          0.02495818 = score(doc=2789,freq=16.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.27429342 = fieldWeight in 2789, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2789)
        0.023002753 = product of:
          0.046005506 = sum of:
            0.046005506 = weight(_text_:management in 2789) [ClassicSimilarity], result of:
              0.046005506 = score(doc=2789,freq=4.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.2633291 = fieldWeight in 2789, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2789)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Classification
    ST 515 Informatik / Monographien / Einzelne Anwendungen der Datenverarbeitung / Wirtschaftsinformatik / Wissensmanagement, Information engineering
    LCSH
    Ontologies (Information retrieval)
    Knowledge management
    RSWK
    Semantic Web / Ontologie <Wissensverarbeitung> / Information Retrieval / Verteilung / Metadaten / Datenintegration
    RVK
    ST 515 Informatik / Monographien / Einzelne Anwendungen der Datenverarbeitung / Wirtschaftsinformatik / Wissensmanagement, Information engineering
    Series
    Advanced information and knowledge processing
    Subject
    Semantic Web / Ontologie <Wissensverarbeitung> / Information Retrieval / Verteilung / Metadaten / Datenintegration
    Ontologies (Information retrieval)
    Knowledge management
  18. Deokattey, S.; Neelameghan, A.; Kumar, V.: ¬A method for developing a domain ontology : a case study for a multidisciplinary subject (2010) 0.03
    0.03156708 = product of:
      0.094701245 = sum of:
        0.094701245 = sum of:
          0.045543127 = weight(_text_:management in 3694) [ClassicSimilarity], result of:
            0.045543127 = score(doc=3694,freq=2.0), product of:
              0.17470726 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0518325 = queryNorm
              0.2606825 = fieldWeight in 3694, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3694)
          0.04915812 = weight(_text_:22 in 3694) [ClassicSimilarity], result of:
            0.04915812 = score(doc=3694,freq=2.0), product of:
              0.18150859 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0518325 = queryNorm
              0.2708308 = fieldWeight in 3694, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3694)
      0.33333334 = coord(1/3)
    
    Abstract
    A method to develop a prototype domain ontology is described. The domain selected for the study is Accelerator Driven Systems. This is a multidisciplinary and interdisciplinary subject comprising Nuclear Physics, Nuclear and Reactor Engineering, Reactor Fuels and Radioactive Waste Management. Since Accelerator Driven Systems is a vast topic, select areas in it were singled out for the study. Both qualitative and quantitative methods, such as content analysis, facet analysis and clustering, were used to develop the web-based model.
    Date
    22. 7.2010 19:41:16
  19. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia (2019) 0.03
    0.031468242 = product of:
      0.047202364 = sum of:
        0.021177718 = weight(_text_:information in 117) [ClassicSimilarity], result of:
          0.021177718 = score(doc=117,freq=18.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.23274568 = fieldWeight in 117, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=117)
        0.026024643 = product of:
          0.052049287 = sum of:
            0.052049287 = weight(_text_:management in 117) [ClassicSimilarity], result of:
              0.052049287 = score(doc=117,freq=8.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.29792285 = fieldWeight in 117, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.03125 = fieldNorm(doc=117)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Information management in corporate environments is a growing problem as companies' information assets grow, along with the need to use them in their operations. Several management models have been put into practice on the most diverse fronts, practices that together constitute so-called Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, which are increasingly used in corporate environments, adding semantics and expanding their retrieval potential in the composition and reuse of dynamic documents on demand. The proposed model considers stable elements in the organizational environment, such as actors, processes, business metadata and information objects, as well as some basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations and integrating and relating objects to other objects, actors and business processes. The methodological approach considered the state of the art of information organization, representation and retrieval, organizational content management and Semantic Web technologies in the scientific literature as the basis for establishing an integrative conceptual model. The research is therefore qualitative and exploratory. The steps foreseen in the model are: Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage. As a result, in theoretical terms the extended model allows heterogeneous and unstructured data to be processed according to the established delimitations and through the processes listed above, allowing value creation in the composition of dynamic information objects, with semantic aggregations to the metadata.
  20. Widhalm, R.; Mück, T.: Topic maps : Semantische Suche im Internet (2002) 0.03
    0.029920919 = product of:
      0.044881377 = sum of:
        0.01578494 = weight(_text_:information in 4731) [ClassicSimilarity], result of:
          0.01578494 = score(doc=4731,freq=10.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.1734784 = fieldWeight in 4731, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4731)
        0.029096438 = product of:
          0.058192875 = sum of:
            0.058192875 = weight(_text_:management in 4731) [ClassicSimilarity], result of:
              0.058192875 = score(doc=4731,freq=10.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.3330879 = fieldWeight in 4731, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4731)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This work covers current developments in the subject indexing of information sources on the Internet. Topic maps, semantic models of networked information resources based on XML or HyTime, provide all the modelling constructs needed to classify documents on the Internet and to lay an associative, semantic network over them. Alongside introductions to XML, XLink, XPointer and HyTime, usage scenarios show how this new technology works for content management and information retrieval on the Internet. The design of a query language is sketched, as is the prototype of an intelligent search engine. The book shows how topic maps point the way to semantically driven search processes on the Internet.
    RSWK
    Content Management / Semantisches Netz / HyTime
    Content Management / Semantisches Netz / XML
    Internet / Information Retrieval / Semantisches Netz / HyTime
    Internet / Information Retrieval / Semantisches Netz / XML
    Subject
    Content Management / Semantisches Netz / HyTime
    Content Management / Semantisches Netz / XML
    Internet / Information Retrieval / Semantisches Netz / HyTime
    Internet / Information Retrieval / Semantisches Netz / XML

Languages

  • e 330
  • d 67
  • pt 3
  • f 1

Types

  • a 292
  • el 93
  • m 33
  • x 25
  • s 14
  • n 9
  • r 5
  • p 2
  • A 1
  • EL 1
