Search (566 results, page 1 of 29)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.08
    
    Abstract
    In a scientific concept hierarchy, a parent concept may have a few attributes, each of which takes multiple values forming a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are direct parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm that infers the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
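    The conflict-resolution idea in the abstract (only accept a parent-child link if the hierarchy stays acyclic) can be sketched as follows. This is a minimal illustration, not the authors' published algorithm; the function names and the toy concept labels are assumptions.

```python
# Hypothetical sketch of one hierarchy-growth step: candidate (parent, child)
# links are inserted greedily, and a link is rejected if it would close a cycle.

def reachable(graph, start, target):
    """Depth-first check whether `target` is reachable from `start`."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return False

def grow_hierarchy(candidate_links):
    """Insert candidate (parent, child) links, skipping any that create a cycle."""
    graph = {}
    for parent, child in candidate_links:
        # Adding parent -> child closes a cycle iff parent is reachable from child.
        if reachable(graph, child, parent):
            continue  # conflict: keep the hierarchy acyclic
        graph.setdefault(parent, []).append(child)
    return graph

links = [("classification", "svm"), ("svm", "kernel-svm"),
         ("kernel-svm", "classification")]
print(grow_hierarchy(links))  # the cycle-closing third link is rejected
```

    A real implementation would also weigh link confidence before deciding which edge of a detected cycle to drop; the greedy first-come order above is the simplest possible policy.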
    Content
    Vgl.: https://aclanthology.org/D19-5317.pdf.
    Type
    a
  2. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.05
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
    Type
    a
  3. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.05
    
    Date
    31. 7.2010 16:58:22
    Type
    a
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.04
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, and not its representation). This leads to a very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, due to unfamiliarity with the underlying repository and/or query syntax, only approximates his information need in a query, implies a necessity to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
    Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner, and to interpret the retrieval results accordingly, is a key issue in realizing much more meaningful information retrieval systems.
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  5. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.04
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
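    The bag-of-entities representation described above can be sketched in a few lines: a document becomes a frequency vector over its entity annotations, and ranking happens in that entity space. This is an illustrative toy, with made-up entity labels and the simplest possible scoring function, not the thesis's actual model.

```python
from collections import Counter

def bag_of_entities(annotations):
    """Build a bag-of-entities vector (entity -> frequency) for one document."""
    return Counter(annotations)

def score(query_entities, doc_bag):
    """Score a document by the total frequency of the query's entities in it."""
    return sum(doc_bag[e] for e in query_entities)

# Hypothetical documents, already annotated with entities.
docs = {
    "d1": ["InformationRetrieval", "BagOfWords", "InformationRetrieval"],
    "d2": ["KnowledgeBase", "EntityLinking"],
}
query = ["InformationRetrieval", "EntityLinking"]
ranked = sorted(docs, key=lambda d: score(query, bag_of_entities(docs[d])),
                reverse=True)
print(ranked)  # d1 outranks d2: a query entity appears twice in d1
```

    In practice the annotations would come from an entity linker against a knowledge base, and the scoring would be a learned function rather than raw frequency overlap.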
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  6. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.04
    
    Content
    One of a series of three publications influenced by the travelling exhibit Places & Spaces: Mapping Science, curated by the Cyberinfrastructure for Network Science Center at Indiana University. - Additional materials can be found at http://scimaps.org/atlas2. Erweitert durch: Börner, Katy. Atlas of Science: Visualizing What We Know.
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    Type
    m
  7. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.03
    
    Abstract
    This book covers the basics of semantic web technologies and indexing languages, and describes their contribution to improving such languages as tools for subject queries and knowledge exploration. The book is relevant to information scientists, knowledge workers and indexers. It provides a suitable combination of theoretical foundations and practical applications.
    Date
    23. 7.2017 13:49:22
    Type
    m
  8. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.03
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
    Type
    a
  9. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.03
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
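    One of the Web-based semantic distance measures the abstract mentions, the Normalized Google Distance, is easy to state from page-hit counts. The sketch below uses made-up counts purely for illustration; the formula is NGD(x, y) = (max(log f(x), log f(y)) - log f(x, y)) / (log N - min(log f(x), log f(y))).

```python
import math

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance from hit counts fx, fy, their co-occurrence
    count fxy, and the total number of indexed pages n."""
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Terms that always co-occur are at distance 0.
print(ngd(1000, 1000, 1000, 10**9))  # 0.0
# Loosely related terms (hypothetical counts) get a positive distance.
print(ngd(9000, 3000, 500, 10**9))
```

    Measures over the Linked Data Web, the other family the study compares, would instead traverse the metadata vocabulary graph (e.g. path lengths between SKOS concepts) rather than use co-occurrence counts.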
    Date
    26.12.2011 13:40:22
  10. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.03
    
    Date
    3.12.2016 18:39:22
    Type
    a
  11. Bringsjord, S.; Clark, M.; Taylor, J.: Sophisticated knowledge representation and reasoning requires philosophy (2014) 0.03
    
    Abstract
    What is knowledge representation and reasoning (KR&R)? Alas, a thorough account would require a book, or at least a dedicated, full-length paper, but here we shall have to make do with something simpler. Since most readers are likely to have an intuitive grasp of the essence of KR&R, our simple account should suffice. The interesting thing is that this simple account itself makes reference to some of the foundational distinctions in the field of philosophy. These distinctions also play a central role in artificial intelligence (AI) and computer science. To begin with, the first distinction in KR&R is that we identify knowledge with knowledge that such-and-such holds (possibly to a degree), rather than knowing how. If you ask an expert tennis player how he manages to serve a ball at 130 miles per hour on his first serve, and then serve a safer, topspin serve on his second should the first be out, you may well receive a confession that, if truth be told, this athlete can't really tell you. He just does it; he does something he has been doing since his youth. Yet, there is no denying that he knows how to serve. In contrast, the knowledge in KR&R must be expressible in declarative statements. For example, our tennis player knows that if his first serve lands outside the service box, it's not in play. He thus knows a proposition, conditional in form.
    Date
    9. 2.2017 19:22:14
    Type
    a
  12. Kiren, T.; Shoaib, M.: ¬A novel ontology matching approach using key concepts (2016) 0.03
    
    Abstract
    Purpose Ontologies are used to formally describe the concepts within a domain in a machine-understandable way. Matching of heterogeneous ontologies is often essential for many applications like semantic annotation, query answering or ontology integration. Some ontologies may include a large number of entities which make the ontology matching process very complex in terms of the search space and execution time requirements. The purpose of this paper is to present a technique for finding degree of similarity between ontologies that trims down the search space by eliminating the ontology concepts that have less likelihood of being matched. Design/methodology/approach Algorithms are written for finding key concepts, concept matching and relationship matching. WordNet is used for solving synonym problems during the matching process. The technique is evaluated using the reference alignments between ontologies from ontology alignment evaluation initiative benchmark in terms of degree of similarity, Pearson's correlation coefficient and IR measures precision, recall and F-measure. Findings Positive correlation between the degree of similarity and degree of similarity (reference alignment) and computed values of precision, recall and F-measure showed that if only key concepts of ontologies are compared, a time and search space efficient ontology matching system can be developed. Originality/value On the basis of the present novel approach for ontology matching, it is concluded that using key concepts for ontology matching gives comparable results in reduced time and space.
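    The abstract above evaluates alignments with precision, recall and F-measure against a reference alignment. As a rough sketch (our own illustrative code, not the authors' system), these metrics reduce to set overlap between computed and reference correspondence pairs:

```python
# Illustrative sketch: scoring a computed ontology alignment against a
# reference alignment. A correspondence is modelled as a pair of entity
# names; the example entities below are hypothetical.

def alignment_metrics(computed: set, reference: set):
    tp = len(computed & reference)  # correspondences found in both
    precision = tp / len(computed) if computed else 0.0
    recall = tp / len(reference) if reference else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

computed = {("o1:Car", "o2:Automobile"), ("o1:Wheel", "o2:Tyre")}
reference = {("o1:Car", "o2:Automobile"), ("o1:Wheel", "o2:Wheel")}
p, r, f = alignment_metrics(computed, reference)
print(p, r, f)  # 0.5 0.5 0.5
```

    Trimming the search space to key concepts, as the paper proposes, changes the computed set; the evaluation itself stays this simple comparison against the reference alignment.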
    Date
    20. 1.2015 18:30:22
    Type
    a
  13. Conde, A.; Larrañaga, M.; Arruarte, A.; Elorriaga, J.A.; Roth, D.: LiTeWi: a combined term extraction and entity linking method for eliciting educational ontologies from textbooks (2016)
    Abstract
    Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. The identification of the terminology is crucial to build ontologies. Term extraction techniques allow the identification of the domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at being used by teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned up using a textbook on object oriented programming and then tested with two textbooks of different domains-astronomy and molecular biology.
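    Using Wikipedia as a generic corpus, as the abstract describes, typically means comparing how frequent a word is in the domain text against its frequency in general language. A minimal sketch of that corpus-comparison idea (our own toy example with made-up texts, not the LiTeWi implementation):

```python
# Toy corpus-comparison term extraction: score each word in a domain
# text by its relative frequency there divided by its (smoothed)
# relative frequency in a generic reference corpus. High scores mark
# candidate domain terms. Example texts are hypothetical.
from collections import Counter

def domain_terms(domain_text: str, generic_text: str, top_n: int = 3):
    d = Counter(domain_text.lower().split())
    g = Counter(generic_text.lower().split())
    total_d, total_g = sum(d.values()), sum(g.values())
    # Add-one smoothing on the generic corpus so unseen words score high.
    score = {w: (d[w] / total_d) / ((g[w] + 1) / (total_g + 1)) for w in d}
    return [w for w, _ in sorted(score.items(), key=lambda x: -x[1])[:top_n]]

domain = "class inherits method class object method encapsulation class"
generic = "the class of the day was long and the object was unclear"
print(domain_terms(domain, generic))
```

    Words common in the textbook but rare in general text ("method", "class") surface first, while function words shared with the generic corpus are suppressed.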
    Date
    22. 1.2016 12:38:14
    Type
    a
  14. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014)
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other-related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can be also facilitated to reveal the contradictory relations in different ontologies. Findings - To assess the feasibility of the approach two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. 
The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules along with the initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implication for semantic search, since a richer ontology, comprised of multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
    Type
    a
  15. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996)
    Pages
    pp. 11-22
    Type
    a
  16. Tudhope, D.; Hodge, G.: Terminology registries (2007)
    Abstract
    A discussion on current initiatives regarding terminology registries.
    Date
    26.12.2011 13:22:07
  17. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020)
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, the design needs to consider different approaches and workflows. The paper bases on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and reuse can yield great potential for open science making qualitative research more transparent, enhance sharing of coding schemas and teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes due to participatory design: higher commitment of users, mutual learning, high quality feedback and better quality of the ontology. However, there are two obstacles in this approach: First, contradictive answers by the interviewees, which needs to be balanced; second, this approach takes more time due to interview planning and analysis. Practical implications The implication of the paper is in the long run to decentralize the design of open science infrastructures and to involve parties affected on several levels. Originality/value In ontology design, several methods exist by using user-centered design or participatory design doing workshops. 
In this paper, the authors outline the potentials for participatory design using mainly interviews in creating an ontology for open science. The authors focus on close contact to researchers in order to build the ontology upon the expert's knowledge.
    Date
    20. 1.2015 18:30:22
    Type
    a
  18. Quillian, M.R.: Semantic memory (1968)
    Source
    Semantic information processing. Ed.: M. Minsky
    Type
    a
  19. Deokattey, S.; Neelameghan, A.; Kumar, V.: A method for developing a domain ontology : a case study for a multidisciplinary subject (2010)
    Abstract
    A method to develop a prototype domain ontology has been described. The domain selected for the study is Accelerator Driven Systems. This is a multidisciplinary and interdisciplinary subject comprising Nuclear Physics, Nuclear and Reactor Engineering, Reactor Fuels and Radioactive Waste Management. Since Accelerator Driven Systems is a vast topic, select areas in it were singled out for the study. Both qualitative and quantitative methods such as Content analysis, Facet analysis and Clustering were used, to develop the web-based model.
    Date
    22. 7.2010 19:41:16
    Type
    a
  20. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997)
    Date
    1.10.2018 14:13:22
    Type
    a

Languages

  • e 446
  • d 103
  • pt 5
  • el 1
  • f 1
  • sp 1

Types

  • a 419
  • el 145
  • m 38
  • x 24
  • s 15
  • n 13
  • p 7
  • r 6
  • A 1
  • EL 1
