Search (477 results, page 1 of 24)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.18
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification has facets such as application (e.g., face recognition), model (e.g., SVM, kNN), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are direct parent-to-child links, whereas the hypernym relation is multi-hop, i.e., an ancestor-to-descendant link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.13
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations together with their uncertainties. Finally, we plan to enrich the text representations with connections between entities: we propose several ways to infer entity graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitation of word-based representations with external, carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Imprint
    Pittsburgh, PA : Carnegie Mellon University, School of Computer Science, Language Technologies Institute
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.12
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation). This leads to very low usefulness of retrieval results for a user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query, makes it necessary to involve the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
    Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realised as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue in realising much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Si, L.; Zhou, J.: Ontology and linked data of Chinese great sites information resources from users' perspective (2022) 0.06
    
    Abstract
    Great Sites are closely related to residents' lives and to urban and rural development. In the process of rapid urbanization in China, the protection and utilization of Great Sites are facing unprecedented pressure. Effective knowledge organization of Great Sites with ontology and linked data is a prerequisite for their protection and utilization. In this paper, an interview study is conducted to understand users' awareness of Great Sites and to build a user-centered ontology. In designing the Great Site ontology, firstly, the scope of Great Sites is determined. Secondly, CIDOC-CRM and the OWL-Time Ontology are reused, combining the results of literature research and user interviews. Thirdly, the top-level structure and specific instances are determined to extract knowledge concepts of Great Sites. Fourthly, these are transformed into classes, data properties and object properties of the Great Site ontology. Finally, based on linked data technology and taking the Great Sites in the Xi'an area as an example, this paper uses D2RQ to publish the linked data set of Great Site knowledge and realize its opening and sharing. Semantic services such as semantic annotation, semantic retrieval and reasoning are provided based on the ontology.
  5. Zhitomirsky-Geffet, M.; Erez, E.S.; Bar-Ilan, J.: Toward multiviewpoint ontology construction by collaboration of non-experts and crowdsourcing : the case of the effect of diet on health (2017) 0.06
    
    Abstract
    Domain experts are skilled in building a narrow ontology that reflects their subfield of expertise, based on their work experience and personal beliefs. We call this type of ontology a single-viewpoint ontology. There can be a variety of such single-viewpoint ontologies, representing a wide spectrum of subfields and expert opinions on the domain. However, to form a complete formal vocabulary for the domain, they need to be linked and unified into a multiviewpoint model, with the subjective viewpoint statements marked and distinguished from the objectively true statements. In this study, we propose and implement a two-phase methodology for multiviewpoint ontology construction by nonexpert users. The proposed methodology was implemented for the domain of the effect of diet on health. A large-scale crowdsourcing experiment was conducted with about 750 ontological statements to determine whether each of these statements is objectively true, a viewpoint, or erroneous. Typically, in crowdsourcing experiments the workers are asked for their personal opinions on the given subject; in our case, their ability to objectively assess others' opinions was examined as well. Our results show substantially higher classification accuracy for the objective assessment approach compared to results based on personal opinions.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.681-694
  6. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.06
    
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach to creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science, making qualitative research more transparent and enhancing the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles to this approach: first, contradictory answers from interviewees, which need to be balanced; second, the approach takes more time due to interview planning and analysis. Practical implications The long-run implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design using mainly interviews to create an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 72(2020) no.4, S.671-685
  7. Innovations and advanced techniques in systems, computing sciences and software engineering (2008) 0.05
    
    Abstract
    Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences. The volume includes selected papers from the conference proceedings of the International Conference on Systems, Computing Sciences and Software Engineering (SCSS 2007), which was part of the International Joint Conferences on Computer, Information and Systems Sciences and Engineering (CISSE 2007).
    Content
    Contents: Image and Pattern Recognition: Compression, Image processing, Signal Processing Architectures, Signal Processing for Communication, Signal Processing Implementation, Speech Compression, and Video Coding Architectures. Languages and Systems: Algorithms, Databases, Embedded Systems and Applications, File Systems and I/O, Geographical Information Systems, Kernel and OS Structures, Knowledge Based Systems, Modeling and Simulation, Object Based Software Engineering, Programming Languages, and Programming Models and tools. Parallel Processing: Distributed Scheduling, Multiprocessing, Real-time Systems, Simulation Modeling and Development, and Web Applications. New trends in computing: Computers for People of Special Needs, Fuzzy Inference, Human Computer Interaction, Incremental Learning, Internet-based Computing Models, Machine Intelligence, Natural Language Processing, Neural Networks, and Online Decision Support Systems.
  8. MacFarlane, A.; Missaoui, S.; Frankowska-Takhari, S.: On machine learning and knowledge organization in multimedia information retrieval (2020) 0.04
    
    Abstract
    Recent technological developments have increased the use of machine learning to solve many problems, including many in information retrieval. Multimedia information retrieval as a problem represents a significant challenge to machine learning as a technological solution, but some problems can still be addressed by using appropriate AI techniques. We review the technological developments and provide a perspective on the use of machine learning in conjunction with knowledge organization to address multimedia IR needs. The semantic gap in multimedia IR remains a significant problem in the field, and solutions to it are many years off. However, new technological developments allow the use of knowledge organization and machine learning in multimedia search systems and services. Specifically, we argue that the improved detection of some classes of low-level features in images, music, and video can be used in conjunction with knowledge organization to tag or label multimedia content for better retrieval performance. We provide an overview of the use of knowledge organization schemes in machine learning and make recommendations to information professionals on the use of this technology with knowledge organization techniques to solve multimedia IR problems. We introduce a five-step process model that extracts features from multimedia objects (Step 1), drawing on both knowledge organization (Step 1a) and machine learning (Step 1b), and merges them together (Step 2) to create an index of those multimedia objects (Step 3). We also overview the further steps of creating an application to utilize the multimedia objects (Step 4) and maintaining and updating the database of features on those objects (Step 5).
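    As a rough illustration, the five-step model can be sketched in code. Everything below (object names, the toy thesaurus, and the stand-in detector outputs) is an invented assumption for the sketch, not the authors' implementation:

```python
# Hypothetical sketch of the five-step model: extract labels via
# knowledge organization (Step 1a) and machine learning (Step 1b),
# merge them (Step 2), and build an inverted index (Step 3).

def ko_tags(obj):
    # Step 1a: look up controlled-vocabulary labels (toy thesaurus).
    thesaurus = {"cat.jpg": ["felidae", "mammal"],
                 "dog.jpg": ["canidae", "mammal"]}
    return set(thesaurus.get(obj, []))

def ml_labels(obj):
    # Step 1b: stand-in for a trained classifier's low-level feature labels.
    detector = {"cat.jpg": ["whiskers", "fur"],
                "dog.jpg": ["fur", "tail"]}
    return set(detector.get(obj, []))

def build_index(objects):
    index = {}
    for obj in objects:
        merged = ko_tags(obj) | ml_labels(obj)  # Step 2: merge both label sets
        for label in merged:                    # Step 3: inverted index
            index.setdefault(label, set()).add(obj)
    return index

index = build_index(["cat.jpg", "dog.jpg"])
print(sorted(index["fur"]))  # → ['cat.jpg', 'dog.jpg']
```

    Steps 4 and 5 (building an application on the index and maintaining the feature database) would sit on top of such an index.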
  9. Mohr, J.W.; Bogdanov, P.: Topic models : what they are and why they matter (2013) 0.04
    Abstract
    We provide a brief, non-technical introduction to the text mining methodology known as "topic modeling." We summarize the theory and background of the method and discuss what kinds of things are found by topic models. Using a text corpus comprising the eight articles from the special issue of Poetics on the subject of topic models, we run a topic model on these articles, both as a way to introduce the methodology and to help summarize some of the ways in which social and cultural scientists are using topic models. We review some of the critiques and debates over the use of the method and, finally, we link these developments back to some of the original innovations in the field of content analysis that were pioneered by Harold D. Lasswell and colleagues during and just after World War II.
  10. Biagetti, M.T.: Ontologies as knowledge organization systems (2021) 0.03
    Abstract
    This contribution presents the principal features of ontologies, drawing special attention to the comparison between ontologies and the different kinds of knowledge organization systems (KOS). The focus is on the semantic richness exhibited by ontologies, which allows the creation of a great number of relationships between terms. That establishes ontologies as the most evolved type of KOS. The concepts of "conceptualization" and "formalization" and the key components of ontologies are described and discussed, along with upper and domain ontologies and special typologies, such as bibliographical ontologies and biomedical ontologies. The use of ontologies in the digital libraries environment, where they have replaced thesauri for query expansion in searching, and the role they are playing in the Semantic Web, especially for semantic interoperability, are sketched.
    Series
    Reviews of Concepts in Knowledge Organization
  11. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.03
    Abstract
    This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by semantic web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will unify the great many existing classification efforts in the framework of the Semantic Web.
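    To make the idea concrete, here is a minimal sketch (not taken from the guide itself) that serializes a tiny classification scheme as SKOS in Turtle syntax; the scheme URI, notation codes, and labels are invented for illustration:

```python
# Illustrative sketch: render a small classification scheme as SKOS/Turtle.
# skos:ConceptScheme, skos:Concept, skos:prefLabel, skos:broader, and
# skos:inScheme are standard SKOS vocabulary terms.

def to_skos_turtle(scheme_uri, concepts):
    """concepts: mapping of concept id -> (prefLabel, broader id or None)."""
    lines = ["@prefix skos: <http://www.w3.org/2004/02/skos/core#> .",
             f"<{scheme_uri}> a skos:ConceptScheme ."]
    for cid, (label, broader) in concepts.items():
        lines.append(f"<{scheme_uri}/{cid}> a skos:Concept ;")
        lines.append(f'    skos:prefLabel "{label}"@en ;')
        if broader:
            # the broader/narrower hierarchy carries the scheme's structure
            lines.append(f"    skos:broader <{scheme_uri}/{broader}> ;")
        lines.append(f"    skos:inScheme <{scheme_uri}> .")
    return "\n".join(lines)

ttl = to_skos_turtle("http://example.org/scheme",
                     {"600": ("Technology", None),
                      "620": ("Engineering", "600")})
print(ttl)
```

    A real publication would also attach scheme-level metadata (title, creator, version), typically with Dublin Core properties.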
  12. Kiren, T.: ¬A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.03
    Abstract
    Indexing plays a vital role in Information Retrieval. With the availability of huge volumes of information, it has become necessary to index the information in such a way as to make it easier for end users to find the information they want efficiently and accurately. Keyword-based indexing uses words as indexing terms. It is not capable of capturing the implicit relations among terms or the semantics of the words in the document. To eliminate this limitation, ontology-based indexing came into existence, which allows semantic-based indexing to solve complex and indirect user queries. Ontologies are used for document indexing, which allows semantic-based information retrieval. Either existing ontologies or ones constructed from scratch are presently used for indexing. Constructing ontologies from scratch is a labor-intensive task and requires extensive domain knowledge, whereas use of an existing ontology may leave some important concepts in documents un-annotated. Using multiple ontologies can overcome the problem of missing concepts to a great extent, but it is difficult to manage multiple ontologies (which change over time as their developers update them), and ontology heterogeneity also arises because ontologies are constructed by different ontology developers. One possible solution to managing multiple ontologies, and to building from scratch, is to use modular ontologies for indexing.
    Modular ontologies are built in a modular manner by combining modules from multiple relevant ontologies. Ontology heterogeneity also arises during modular ontology construction, because multiple ontologies are being dealt with during this process. Ontologies need to be aligned before using them for modular ontology construction. The existing approaches for ontology alignment compare all the concepts of each ontology to be aligned, and are hence not optimized in terms of time and search-space utilization. A new indexing technique based on modular ontology is proposed. An efficient ontology alignment technique is proposed to solve the heterogeneity problem during the construction of the modular ontology. Results are satisfactory, as precision and recall are improved by 8% and 10%, respectively. The values of Pearson's correlation coefficient for degree of similarity, time, search-space requirement, precision, and recall are close to 1, which shows that the results are significant. Further research can be carried out on using the modular-ontology-based indexing technique for multimedia information retrieval and biomedical information retrieval.
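    For reference, Pearson's correlation coefficient used in the evaluation above follows the standard formula; a minimal stdlib sketch (the sample values are invented, not the thesis data):

```python
# Pearson's correlation coefficient: covariance of the two samples
# divided by the product of their standard deviations.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# a nearly linear relationship gives a value close to 1
print(round(pearson([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]), 3))  # → 0.998
```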
    Content
    Submitted to the Faculty of the Computer Science and Engineering Department of the University of Engineering and Technology Lahore in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science (2009 - 009-PhD-CS-04). Vgl.: http://prr.hec.gov.pk/jspui/bitstream/123456789/8375/1/Taybah_Kiren_Computer_Science_HSR_2017_UET_Lahore_14.12.2017.pdf.
    Date
    20. 1.2015 18:30:22
    Imprint
    Lahore : University of Engineering and Technology / Department of Computer Science and Engineering
  13. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part 2. (2010) 0.03
    Abstract
    The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the Semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, and also 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human-computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.
  14. Wei, W.; Liu, Y.-P.; Wei, L-R.: Feature-level sentiment analysis based on rules and fine-grained domain ontology (2020) 0.03
    Abstract
    Mining product reviews and sentiment analysis are of great significance, whether for academic research purposes or for optimizing business strategies. We propose a feature-level sentiment analysis framework based on rule parsing and a fine-grained domain ontology for Chinese reviews. The fine-grained ontology is used to describe synonymous expressions of product features, which are reflected in word changes in online reviews. First, a semiautomatic construction method is developed by using Word2Vec for the fine-grained ontology. Then, feature-level sentiment analysis that combines rule parsing and the fine-grained domain ontology is conducted to extract explicit and implicit features from product reviews. Finally, the domain sentiment dictionary and context sentiment dictionary are established to identify sentiment polarities for the extracted feature-sentiment combinations. An experiment is conducted on the basis of product reviews crawled from Chinese e-commerce websites. The results demonstrate the effectiveness of our approach.
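    A toy sketch of the general idea (English instead of Chinese, and with an invented ontology, rule, and sentiment dictionary; not the authors' framework): synonyms are normalized to canonical features via the ontology, and a simple proximity rule attaches a sentiment word to each feature.

```python
# Invented fine-grained ontology: surface form -> canonical feature.
feature_ontology = {"battery": "battery", "charge": "battery",
                    "screen": "display", "display": "display"}
# Invented sentiment dictionary: word -> polarity.
sentiment_dict = {"great": 1, "long": 1, "dim": -1, "poor": -1}

def analyze(review):
    """Return {feature: polarity} using a simple forward-window rule."""
    tokens = review.lower().replace(",", "").split()
    results = {}
    for i, tok in enumerate(tokens):
        if tok in feature_ontology:
            feature = feature_ontology[tok]
            # rule: take the first sentiment word within the next 3 tokens
            for w in tokens[i + 1: i + 4]:
                if w in sentiment_dict:
                    results[feature] = sentiment_dict[w]
                    break
    return results

print(analyze("the screen is dim but battery life is great"))
```

    The paper's actual rules operate on parsed Chinese syntax and also handle implicit features, which this window heuristic does not attempt.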
  15. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.03
    Abstract
    Purpose - The purpose of this paper is to improve conceptual-based search by incorporating structural ontological information such as concepts and relations. Generally, semantic-based information retrieval aims to identify relevant information based on the meanings of the query terms or on the context of the terms, and its performance is evaluated through the standard measures of precision and recall. Higher precision means that more of the retrieved documents are (meaningfully) relevant, while lower recall means less coverage of the concepts. Design/methodology/approach - In this paper, the authors enhance the existing ontology-based indexing proposed by Kohler et al. by incorporating sibling information into the index. The index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, in our approach, we focus on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of the ontology-based search. These tasks make use of ontological concepts and the relations existing between those concepts so as to obtain semantically more relevant search results for a given query. Findings - The proposed ontology-based indexing technique is investigated by analysing the coverage of concepts that are populated in the index. Here, we introduce a new measure, called the index enhancement measure, to estimate the coverage of ontological concepts being indexed. We have evaluated the ontology-based search for the tourism domain with tourism documents and a tourism-specific ontology. Search results based on the use of the ontology with and without query expansion are compared to estimate the efficiency of the proposed query expansion task. The ranking is compared with the ORank system to evaluate the performance of our ontology-based search.
    From these analyses, the ontology-based search results show better recall when compared to the other concept-based search systems. The mean average precision of the ontology-based search is found to be 0.79 and the recall 0.65; the ORank system has a mean average precision of 0.62 and a recall of 0.51, while the concept-based search has a mean average precision of 0.56 and a recall of 0.42. Practical implications - When a concept is not present in the domain-specific ontology, that concept cannot be indexed. When the given query term is not available in the ontology, term-based results are retrieved. Originality/value - In addition to super- and sub-concepts, we incorporate the concepts present at the same level (siblings) into the ontological index. The structural information from the ontology is used for query expansion. The ranking of the documents depends on the type of the query (single-concept queries, multiple-concept queries, and concept-with-relation queries) and the ontological relations that exist in the query and the documents. With this ontological structural information, the search results showed better coverage of concepts with respect to the query.
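    The precision and recall measures underlying the evaluation above follow the standard set-based definitions; a minimal sketch (the document IDs are invented for illustration):

```python
# Set-based precision and recall: precision is the fraction of retrieved
# documents that are relevant; recall is the fraction of relevant
# documents that were retrieved.

def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall(retrieved=["d1", "d2", "d3", "d4"],
                        relevant=["d1", "d2", "d5"])
print(p, r)  # → 0.5 0.6666666666666666
```

    Mean average precision, as reported in the paper, additionally averages precision over the rank positions of the relevant documents across queries.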
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 66(2014) no.6, S.678-696
  16. Baofu, P.: ¬The future of information architecture : conceiving a better way to understand taxonomy, network, and intelligence (2008) 0.03
    Abstract
    The Future of Information Architecture examines issues surrounding why information has been processed, stored and applied in the way that it has since time immemorial. Contrary to the conventional wisdom held by many scholars in human history, the recurrent debate on the explanation of the most basic categories of information (e.g. space, time, causation, quality, quantity) has been misconstrued, to the effect that there exist some deeper categories and principles behind these categories of information - with enormous implications for our understanding of reality in general. To understand this, the book is organised into four main parts: Part I begins with the vital question concerning the role of information within the context of the larger theoretical debate in the literature. Part II provides a critical examination of the nature of data taxonomy from the main perspectives of culture, society, nature and the mind. Part III constructively investigates the world of information networks from the main perspectives of culture, society, nature and the mind. Part IV proposes six main theses in the author's synthetic theory of information architecture, namely, (a) the first thesis on the simpleness-complicatedness principle, (b) the second thesis on the exactness-vagueness principle, (c) the third thesis on the slowness-quickness principle, (d) the fourth thesis on the order-chaos principle, (e) the fifth thesis on the symmetry-asymmetry principle, and (f) the sixth thesis on the post-human stage.
  17. Kruk, S.R.; McDaniel, B.: Goals of semantic digital libraries (2009) 0.03
    Abstract
    Digital libraries have become a commodity in the current world of the Internet. More and more information is produced, and more and more non-digital information is being made available in digital form. The new, more user-friendly, community-oriented technologies used throughout the Internet are raising the bar of expectations. Digital libraries cannot stand still in their technologies; if not for the sake of handling the rapidly growing amount and diversity of information, they must provide a better user experience to match the ever-growing standards set by the industry. The next generation of digital libraries combines technological solutions, such as P2P, SOA, or Grid, with recent research on semantics and social networks. These solutions are put into practice to answer a variety of requirements imposed on digital libraries.
  18. Khalifa, M.; Shen, K.N.: Applying semantic networks to hypertext design : effects on knowledge structure acquisition and problem solving (2010) 0.03
    
    Abstract
    One of the key objectives of knowledge management is to transfer knowledge quickly and efficiently from experts to novices, who are different in terms of the structural properties of domain knowledge or knowledge structure. This study applies experts' semantic networks to hypertext navigation design and examines the potential of the resulting design, i.e., semantic hypertext, in facilitating knowledge structure acquisition and problem solving. Moreover, we argue that the level of sophistication of the knowledge structure acquired by learners is an important mediator influencing the learning outcomes (in this case, problem solving). The research model was empirically tested with a situated experiment involving 80 business professionals. The results of the empirical study provided strong support for the effectiveness of semantic hypertext in transferring knowledge structure and reported a significant full mediating effect of knowledge structure sophistication. Both theoretical and practical implications of this research are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.8, S.1673-1685
  19. Miles, A.: SKOS: requirements for standardization (2006) 0.03
    
    Abstract
    This paper poses three questions regarding the planned development of the Simple Knowledge Organisation System (SKOS) towards W3C Recommendation status. Firstly, what is the fundamental purpose and therefore scope of SKOS? Secondly, which key software components depend on SKOS, and how do they interact? Thirdly, what is the wider technological and social context in which SKOS is likely to be applied and how might this influence design goals? Some tentative conclusions are drawn and in particular it is suggested that the scope of SKOS be restricted to the formal representation of controlled structured vocabularies intended for use within retrieval applications. However, the main purpose of this paper is to articulate the assumptions that have motivated the design of SKOS, so that these may be reviewed prior to a rigorous standardization initiative.
    Footnote
    Presented at the International Conference on Dublin Core and Metadata Applications in October 2006
  20. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.02
    
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins as it were with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.

Languages

  • e 443
  • d 22
  • pt 4
  • f 1
  • sp 1

Types

  • a 349
  • el 140
  • m 28
  • x 20
  • n 13
  • s 13
  • p 6
  • r 3
  • A 1
  • EL 1
