Search (390 results, page 1 of 20)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.14
    0.14420992 = product of:
      0.28841984 = sum of:
        0.06985858 = product of:
          0.20957573 = sum of:
            0.20957573 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.20957573 = score(doc=400,freq=2.0), product of:
                0.37289858 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043984205 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.0089855315 = weight(_text_:information in 400) [ClassicSimilarity], result of:
          0.0089855315 = score(doc=400,freq=2.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.116372846 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.20957573 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.20957573 = score(doc=400,freq=2.0), product of:
            0.37289858 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.043984205 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.5 = coord(3/6)
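    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation output. As a minimal sketch of one piece of it, the per-term fieldWeight is sqrt(termFreq) × idf × fieldNorm; the full score additionally applies queryWeight, queryNorm, and coord factors:

```python
import math

def field_weight(freq: float, idf: float, field_norm: float) -> float:
    """ClassicSimilarity per-term field weight: sqrt(tf) * idf * fieldNorm."""
    return math.sqrt(freq) * idf * field_norm

# Figures taken from the first term of result 1 (freq=2.0, idf=8.478011,
# fieldNorm=0.046875); the product reproduces the 0.56201804 fieldWeight.
print(field_weight(2.0, 8.478011, 0.046875))
```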
    
    Abstract
    In a scientific concept hierarchy, a parent concept may have several attributes, each of whose values form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are direct parent-to-child links, whereas the hypernym relation is a multi-hop, ancestor-to-descendant link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers parent-child links from these three types of relationships, resolving conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
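    The hierarchy growth idea in the abstract above (inferring parent-child links while keeping the hierarchy acyclic) can be sketched as follows. This is an illustrative reconstruction under the stated constraint, not the authors' algorithm, and the concept names are invented:

```python
# Sketch: grow a hierarchy by accepting candidate parent->child links
# only if they keep the graph acyclic (conflict resolution by rejection).
from collections import defaultdict

def reachable(graph, src, dst):
    """DFS reachability: True if dst can be reached from src."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph[node])
    return False

def grow_hierarchy(candidate_links):
    """Accept each (parent, child) link unless it would close a cycle."""
    graph = defaultdict(list)
    accepted = []
    for parent, child in candidate_links:
        if reachable(graph, child, parent):  # edge would create a cycle
            continue
        graph[parent].append(child)
        accepted.append((parent, child))
    return accepted

links = [("classification", "svm"), ("svm", "kernel-svm"),
         ("kernel-svm", "classification")]  # last link would close a cycle
print(grow_hierarchy(links))
```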
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.13
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words offers only a shallow understanding of the text; the word space carries limited information for document ranking. This dissertation goes beyond words and builds knowledge-based text representations, which embed external, carefully curated information from knowledge bases and provide richer, structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. We then present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. On the document representation side, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations and ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities: we propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations with external, carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
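    The bag-of-entities idea in the abstract above (documents represented by their entity annotations, ranking performed in the entity space) can be sketched as a simple overlap score. The entity IDs and documents here are hypothetical, and the dissertation's actual model is more elaborate:

```python
# Sketch of bag-of-entities ranking: documents are vectors of entity
# annotation counts; a query is scored by overlap in the entity space.
from collections import Counter

def score(query_entities, doc_entities):
    """Sum of document annotation counts for entities shared with the query."""
    doc = Counter(doc_entities)
    return sum(doc[e] for e in set(query_entities))

docs = {
    "d1": ["Barack_Obama", "White_House", "Barack_Obama"],
    "d2": ["Python_(language)", "White_House"],
}
q = ["Barack_Obama", "White_House"]
ranking = sorted(docs, key=lambda d: score(q, docs[d]), reverse=True)
print(ranking)  # d1 scores 3 (2 + 1), d2 scores 1
```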
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.10
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation), which makes the results of a retrieval process of very limited use for the user's task at hand. In the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query, makes it necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way, the retrieval process evolves from mere query evaluation into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
    Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realised as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue in realising much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Park, J.-r.: Evolution of concept networks and implications for knowledge representation (2007) 0.06
    Abstract
    Purpose - The purpose of this paper is to present descriptive characteristics of the historical development of concept networks. The linguistic principles, mechanisms and motivations behind the evolution of concept networks are discussed. Implications emanating from the idea of the historical development of concept networks are discussed in relation to knowledge representation and organization schemes. Design/methodology/approach - Natural language data including both speech and text are analyzed by examining discourse contexts in which a linguistic element such as a polysemy or homonym occurs. Linguistic literature on the historical development of concept networks is reviewed and analyzed. Findings - Semantic sense relations in concept networks can be captured in a systematic and regular manner. The mechanism and impetus behind the process of concept network development suggest that semantic senses in concept networks are closely intertwined with pragmatic contexts and discourse structure. The interrelation and permeability of the semantic senses of concept networks are captured on a continuum scale based on three linguistic parameters: concrete shared semantic sense; discourse and text structure; and contextualized pragmatic information. Research limitations/implications - Research findings signify the critical need for linking discourse structure and contextualized pragmatic information to knowledge representation and organization schemes. Originality/value - The idea of linguistic characteristics, principles, motivation and mechanisms underlying the evolution of concept networks provides theoretical ground for developing a model for integrating knowledge representation and organization schemes with discourse structure and contextualized pragmatic information.
  5. Priss, U.: Description logic and faceted knowledge representation (1999) 0.05
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  6. Information and communication technologies : international conference; proceedings / ICT 2010, Kochi, Kerala, India, September 7 - 9, 2010 (2010) 0.05
    Abstract
    This book constitutes the proceedings of the International Conference on Information and Communication Technologies held in Kochi, Kerala, India in September 2010.
    LCSH
    Computer Communication Networks
    Information storage and retrieval systems
    Information systems
    Series
    Communications in computer and information science; vol.101
    Subject
    Computer Communication Networks
    Information storage and retrieval systems
    Information systems
  7. Innovations and advanced techniques in systems, computing sciences and software engineering (2008) 0.04
    Abstract
    Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences. It includes selected papers from the conference proceedings of the International Conference on Systems, Computing Sciences and Software Engineering (SCSS 2007), which was part of the International Joint Conferences on Computer, Information and Systems Sciences and Engineering (CISSE 2007).
    Content
    Contents: Image and Pattern Recognition: Compression, Image processing, Signal Processing Architectures, Signal Processing for Communication, Signal Processing Implementation, Speech Compression, and Video Coding Architectures. Languages and Systems: Algorithms, Databases, Embedded Systems and Applications, File Systems and I/O, Geographical Information Systems, Kernel and OS Structures, Knowledge Based Systems, Modeling and Simulation, Object Based Software Engineering, Programming Languages, and Programming Models and tools. Parallel Processing: Distributed Scheduling, Multiprocessing, Real-time Systems, Simulation Modeling and Development, and Web Applications. New trends in computing: Computers for People of Special Needs, Fuzzy Inference, Human Computer Interaction, Incremental Learning, Internet-based Computing Models, Machine Intelligence, Natural Language Processing, Neural Networks, and Online Decision Support System
    LCSH
    Communications Engineering, Networks
    Computer Systems Organization and Communication Networks
    Subject
    Communications Engineering, Networks
    Computer Systems Organization and Communication Networks
  8. ¬The Semantic Web : research and applications ; second European Semantic Web Conference, ESWC 2005, Heraklion, Crete, Greece, May 29 - June 1, 2005 ; proceedings (2005) 0.04
    Abstract
    This book constitutes the refereed proceedings of the Second European Semantic Web Conference, ESWC 2005, held in Heraklion, Crete, Greece in May/June 2005. The 48 revised full papers presented were carefully reviewed and selected from 148 submissions. The papers are organized in topical sections on semantic Web services, languages, ontologies, reasoning and querying, search and information retrieval, users and communities, natural language for the semantic Web, annotation tools, and semantic Web applications.
    LCSH
    Computer Communication Networks
    Information storage and retrieval systems
    Information systems
    Subject
    Computer Communication Networks
    Information storage and retrieval systems
    Information systems
  9. Khalifa, M.; Shen, K.N.: Applying semantic networks to hypertext design : effects on knowledge structure acquisition and problem solving (2010) 0.03
    Abstract
    One of the key objectives of knowledge management is to transfer knowledge quickly and efficiently from experts to novices, who are different in terms of the structural properties of domain knowledge or knowledge structure. This study applies experts' semantic networks to hypertext navigation design and examines the potential of the resulting design, i.e., semantic hypertext, in facilitating knowledge structure acquisition and problem solving. Moreover, we argue that the level of sophistication of the knowledge structure acquired by learners is an important mediator influencing the learning outcomes (in this case, problem solving). The research model was empirically tested with a situated experiment involving 80 business professionals. The results of the empirical study provided strong support for the effectiveness of semantic hypertext in transferring knowledge structure and reported a significant full mediating effect of knowledge structure sophistication. Both theoretical and practical implications of this research are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.8, S.1673-1685
  10. Meng, K.; Ba, Z.; Ma, Y.; Li, G.: ¬A network coupling approach to detecting hierarchical linkages between science and technology (2024) 0.03
    Abstract
    Detecting science-technology hierarchical linkages is beneficial for understanding deep interactions between science and technology (S&T). Previous studies have mainly focused on linear linkages between S&T but ignored their structural linkages. In this paper, we propose a network coupling approach to inspect hierarchical interactions of S&T by integrating their knowledge linkages and structural linkages. S&T knowledge networks are first enhanced with bidirectional encoder representation from transformers (BERT) knowledge alignment, and then their hierarchical structures are identified based on K-core decomposition. Hierarchical coupling preferences and strengths of the S&T networks over time are further calculated based on similarities of coupling nodes' degree distribution and similarities of coupling edges' weight distribution. Extensive experimental results indicate that our approach is feasible and robust in identifying the coupling hierarchy with superior performance compared to other isomorphism and dissimilarity algorithms. Our research extends the mindset of S&T linkage measurement by identifying patterns and paths of the interaction of S&T hierarchical knowledge.
    Source
    Journal of the Association for Information Science and Technology. 75(2024) no.2, S.167-187
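    The K-core decomposition mentioned in the abstract above (used there to identify the hierarchical structure of the S&T networks) can be sketched by iterative peeling; the toy graph below is invented for illustration:

```python
# Minimal K-core decomposition sketch: repeatedly peel nodes of degree <= k,
# assigning each removed node the core number k at which it was peeled.
def core_numbers(adj):
    """Return the core number of each node of an undirected graph."""
    adj = {u: set(vs) for u, vs in adj.items()}
    core, k = {}, 0
    while adj:
        low = [u for u, vs in adj.items() if len(vs) <= k]
        if not low:
            k += 1
            continue
        for u in low:
            core[u] = k
            for v in adj.pop(u):
                if v in adj:
                    adj[v].discard(u)
    return core

# Triangle a-b-c plus a pendant node d attached to a:
g = {"a": ["b", "c", "d"], "b": ["a", "c"], "c": ["a", "b"], "d": ["a"]}
print(core_numbers(g))  # d has core number 1; a, b, c form the 2-core
```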
  11. Guns, R.: Tracing the origins of the semantic web (2013) 0.03
    Abstract
    The Semantic Web has been criticized for not being semantic. This article examines the questions of why and how the Web of Data, expressed in the Resource Description Framework (RDF), has come to be known as the Semantic Web. Contrary to previous papers, we deliberately take a descriptive stance and do not start from preconceived ideas about the nature of semantics. Instead, we mainly base our analysis on early design documents of the (Semantic) Web. The main determining factor is shown to be link typing, coupled with the influence of online metadata. Both factors already were present in early web standards and drafts. Our findings indicate that the Semantic Web is directly linked to older artificial intelligence work, despite occasional claims to the contrary. Because of link typing, the Semantic Web can be considered an example of a semantic network. Originally network representations of the meaning of natural language utterances, semantic networks have eventually come to refer to any networks with typed (usually directed) links. We discuss possible causes for this shift and suggest that it may be due to confounding paradigmatic and syntagmatic semantic relations.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.10, S.2173-2181
  12. Giunchiglia, F.; Dutta, B.; Maltese, V.: From knowledge organization to knowledge representation (2014) 0.03
    Abstract
    So far, within the library and information science (LIS) community, knowledge organization (KO) has developed its own very successful solutions to document search, allowing for the classification, indexing and search of millions of books. However, current KO solutions are limited in expressivity as they only support queries by document properties, e.g., by title, author and subject. In parallel, within the artificial intelligence and semantic web communities, knowledge representation (KR) has developed very powerful and expressive techniques, which via the use of ontologies support queries by any entity property (e.g., the properties of the entities described in a document). However, KR has not yet scaled to the level of KO, mainly because of the lack of a precise and scalable entity specification methodology. In this paper we present DERA, a new methodology inspired by the faceted approach, as introduced in KO, that retains all the advantages of KR and compensates for the limitations of KO. DERA guarantees at the same time quality, extensibility, scalability and effectiveness in search.
    Content
    Papers from the ISKO-UK Biennial Conference, "Knowledge Organization: Pushing the Boundaries," United Kingdom, 8-9 July, 2013, London.
  13. Kruk, S.R.; McDaniel, B.: Goals of semantic digital libraries (2009) 0.03
    Abstract
    Digital libraries have become a commodity in the current world of the Internet. More and more information is produced, and more and more non-digital information is being rendered available. The new, more user-friendly, community-oriented technologies used throughout the Internet are raising the bar of expectations. Digital libraries cannot stand still with their technologies; if only for the sake of handling the rapidly growing amount and diversity of information, they must provide a better user experience, matching the ever-growing standards set by the industry. The next generation of digital libraries combines technological solutions, such as P2P, SOA, or Grid, with recent research on semantics and social networks. These solutions are put into practice to answer a variety of requirements imposed on digital libraries.
    Theme
    Information Gateway
  14. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010) 0.02
    Abstract
    Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language-processing (NLP) technique to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesauri resources for driving and enhancing an Ontology-Oriented Information Extraction process. The process of semantic indexing is based on a rule-based Information Extraction technique, which is facilitated by the General Architecture of Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents often referred to as "Grey Literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
    Footnote
    Contribution in a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO)
  15. Town, C.: Ontological inference for image and video analysis (2006) 0.02
    Abstract
    This paper presents an approach to designing and implementing extensible computational models for perceiving systems based on a knowledge-driven joint inference approach. These models can integrate different sources of information both horizontally (multi-modal and temporal fusion) and vertically (bottom-up, top-down) by incorporating prior hierarchical knowledge expressed as an extensible ontology. Two implementations of this approach are presented. The first consists of a content-based image retrieval system that allows users to search image databases using an ontological query language. Queries are parsed using a probabilistic grammar and Bayesian networks to map high-level concepts onto low-level image descriptors, thereby bridging the 'semantic gap' between users and the retrieval system. The second application extends the notion of ontological languages to video event detection. It is shown how effective high-level state and event recognition mechanisms can be learned from a set of annotated training sequences by incorporating syntactic and semantic constraints represented by an ontology.
  16. Ibekwe-SanJuan, F.: Semantic metadata annotation : tagging Medline abstracts for enhanced information access (2010) 0.02
    Abstract
    Purpose - The object of this study is to develop methods for automatically annotating the argumentative role of sentences in scientific abstracts. Working from Medline abstracts, sentences were classified into four major argumentative roles: objective, method, result, and conclusion. The idea is that, if the role of each sentence can be marked up, then these metadata can be used during information retrieval to seek particular types of information such as novelty, conclusions, methodologies, aims/goals of a scientific piece of work. Design/methodology/approach - Two approaches were tested: linguistic cues and positional heuristics. Linguistic cues are lexico-syntactic patterns modelled as regular expressions implemented in a linguistic parser. Positional heuristics make use of the relative position of a sentence in the abstract to deduce its argumentative class. Findings - The experiments showed that positional heuristics attained a much higher degree of accuracy on Medline abstracts with an F-score of 64 per cent, whereas the linguistic cues only attained an F-score of 12 per cent. This is mostly because sentences from different argumentative roles are not always announced by surface linguistic cues. Research limitations/implications - A limitation to the study was the inability to test other methods to perform this task such as machine learning techniques which have been reported to perform better on Medline abstracts. Also, to compare the results of the study with earlier studies using Medline abstracts, the different argumentative roles present in Medline had to be mapped on to four major argumentative roles. This may have favourably biased the performance of the sentence classification by positional heuristics. Originality/value - To the best of one's knowledge, this study presents the first instance of evaluating linguistic cues and positional heuristics on the same corpus.
    Footnote
    Contribution in a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO)
  17. Boteram, F.: "Content architecture" : semantic interoperability in an international comprehensive knowledge organisation system (2010) 0.02
    Abstract
    Purpose - This paper seeks to develop a specified typology of various levels of semantic interoperability, designed to provide semantically expressive and functional means to interconnect typologically different sub-systems in an international comprehensive knowledge organization system, supporting advanced information retrieval and exploration strategies. Design/methodology/approach - Taking the analysis of rudimentary forms of a functional interoperability based on simple pattern matching as a starting-point, more refined strategies to provide semantic interoperability, which actually reaches the conceptual and even thematic level, are being developed. The paper also examines the potential benefits and perspectives of the selective transfer of modelling strategies from the field of semantic technologies for the refinement of relational structures of inter-system and inter-concept relations as a requirement for expressive and functional indexing languages supporting advanced types of semantic interoperability. Findings - As the principles and strategies of advanced information retrieval systems largely depend on semantic information, new concepts and strategies to achieve semantic interoperability have to be developed. Research limitations/implications - The approach has been developed in the functional and structural context of an international comprehensive system integrating several heterogeneous knowledge organization systems and indexing languages by interconnecting them to a central conceptual structure operating as a spine in an overall system designed to support retrieval and exploration of bibliographic records representing complex conceptual entities. Originality/value - Research and development aimed at providing technical and structural interoperability has to be complemented by a thorough and precise reflection and definition of various degrees and types of interoperability on the semantic level as well. The approach specifies these levels and reflects the implications and their potential for advanced strategies of retrieval and exploration.
    Footnote
    Contribution in a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO).
  18. Pepper, S.; Groenmo, G.O.: Towards a general theory of scope (2002) 0.02
    Abstract
    This paper is concerned with the issue of scope in topic maps. Topic maps are a form of knowledge representation suitable for solving a number of complex problems in the area of information management, ranging from findability (navigation and querying) to knowledge management and enterprise application integration (EAI). The topic map paradigm has its roots in efforts to understand the essential semantics of back-of-book indexes in order that they might be captured in a form suitable for computer processing. Once understood, the model of a back-of-book index was generalised in order to cover the needs of digital information, and extended to encompass glossaries and thesauri, as well as indexes. The resulting core model, of typed topics, associations, and occurrences, has many similarities with the semantic networks developed by the artificial intelligence community for representing knowledge structures. One key requirement of topic maps from the earliest days was to be able to merge indexes from disparate origins. This requirement accounts for two further concepts that greatly enhance the power of topic maps: subject identity and scope. This paper concentrates on scope, but also includes a brief discussion of the feature known as the topic naming constraint, with which it is closely related. It is based on the authors' experience in creating topic maps (in particular, the Italian Opera Topic Map), and in implementing processing systems for topic maps (in particular, the Ontopia Topic Map Engine and Navigator).
  19. Helbig, H.: Knowledge representation and the semantics of natural language (2014) 0.02
    Abstract
    Natural language is not only the most important means of communication between human beings; it is also used over historical periods for the preservation of cultural achievements and their transmission from one generation to the other. During the last few decades, the flood of digitized information has been growing tremendously. This tendency will continue with the globalisation of information societies and with the growing importance of national and international computer networks. This is one reason why the theoretical understanding and the automated treatment of communication processes based on natural language have such a decisive social and economic impact. In this context, the semantic representation of knowledge originally formulated in natural language plays a central part, because it connects all components of natural language processing systems, be they the automatic understanding of natural language (analysis), the rational reasoning over knowledge bases, or the generation of natural language expressions from formal representations. This book presents a method for the semantic representation of natural language expressions (texts, sentences, phrases, etc.) which can be used as a universal knowledge representation paradigm in the human sciences, like linguistics, cognitive psychology, or philosophy of language, as well as in computational linguistics and in artificial intelligence. It is also an attempt to close the gap between these disciplines, which to a large extent are still working separately.
  20. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part 2. (2010) 0.02
    Abstract
    The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the Semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, and also 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human-computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.

Languages

  • e 317
  • d 65
  • pt 3
  • f 1

Types

  • a 279
  • el 90
  • m 32
  • x 25
  • s 13
  • n 9
  • r 5
  • p 3
  • A 1
  • EL 1
