Search (73 results, page 1 of 4)

  • theme_ss:"Wissensrepräsentation"
  • type_ss:"a"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.11
    0.11449528 = product of:
      0.2862382 = sum of:
        0.07155955 = product of:
          0.21467863 = sum of:
            0.21467863 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.21467863 = score(doc=400,freq=2.0), product of:
                0.38197818 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045055166 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.21467863 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.21467863 = score(doc=400,freq=2.0), product of:
            0.38197818 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.045055166 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.4 = coord(2/5)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
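The nested breakdown above is Lucene-style ClassicSimilarity "explain" output. As a minimal sketch (not code from the search system itself, and with variable names of our own choosing), the factors it lists can be recombined to reproduce the ranking score shown for this entry:

```python
import math

# Minimal sketch: re-derive the numbers shown in the explain tree of result
# no. 1 from the factors it lists. All inputs are copied from the listing.
freq       = 2.0          # termFreq of the matched term in doc 400
idf        = 8.478011     # idf(docFreq=24, maxDocs=44218)
query_norm = 0.045055166  # queryNorm
field_norm = 0.046875     # fieldNorm(doc=400)

tf           = math.sqrt(freq)              # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm             # 0.38197818 = queryWeight
field_weight = tf * idf * field_norm        # 0.56201804 = fieldWeight
term_score   = query_weight * field_weight  # 0.21467863 per matched term

# One branch is scaled by coord(1/3); both branches are summed and the total
# is scaled by coord(2/5), giving the document's ranking score.
doc_score = (term_score * (1 / 3) + term_score) * (2 / 5)
print(round(doc_score, 8))  # ~0.11449528
```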
  2. Frâncu, V.: Subjects in FRBR and poly-hierarchical thesauri as possible knowledge organization tools (2006) 0.03
    0.02948505 = product of:
      0.073712625 = sum of:
        0.053347398 = weight(_text_:bibliographic in 259) [ClassicSimilarity], result of:
          0.053347398 = score(doc=259,freq=4.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.30414405 = fieldWeight in 259, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=259)
        0.020365225 = product of:
          0.04073045 = sum of:
            0.04073045 = weight(_text_:searching in 259) [ClassicSimilarity], result of:
              0.04073045 = score(doc=259,freq=2.0), product of:
                0.18226127 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.045055166 = queryNorm
                0.22347288 = fieldWeight in 259, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=259)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The paper presents the possibilities offered by poly-hierarchical conceptual structures as knowledge organizers, starting from the FRBR entity-relationship model. Of the ten entities defined in the FRBR model, the first six, the bibliographic entities plus those representing the intellectual responsibilities, are clearly described by their attributes. Unlike these, the other four, which represent subjects in their own right (concepts, objects, events and places), have only the term for the entity as an attribute. Subjects should be treated more extensively in a revised version of the FRBR model, with particular attention to the semantic and syntactic relations between the concepts representing subjects and between these concepts and the terms used in indexing. The conceptual model of poly-hierarchical thesauri is regarded as an entity-relationship model capable of accommodating subjects in the bibliographic universe both conceptually and relationally. Poly-hierarchical thesauri are considered frameworks or templates meant to enhance knowledge representation and to support information searching.
  3. Padmavathi, T.; Krishnamurthy, M.: Ontological representation of knowledge for developing information services in food science and technology (2012) 0.02
    0.018785488 = product of:
      0.093927436 = sum of:
        0.093927436 = weight(_text_:line in 839) [ClassicSimilarity], result of:
          0.093927436 = score(doc=839,freq=2.0), product of:
            0.25266227 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.045055166 = queryNorm
            0.37175092 = fieldWeight in 839, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.046875 = fieldNorm(doc=839)
      0.2 = coord(1/5)
    
    Abstract
    The knowledge explosion in various fields during recent years has resulted in the creation of vast amounts of online scientific literature. Food Science & Technology (FST) is also an important subject domain where rapid developments are taking place due to diverse research and development activities. As a result, information storage and retrieval has become very complex, and current information retrieval systems (IRSs) are being challenged in terms of both adequate precision and response time. To overcome these limitations, as well as to provide effective natural-language-based retrieval, a suitable knowledge engineering framework needs to be applied to represent, share and discover information. Semantic web technologies provide mechanisms for creating knowledge bases, ontologies and rules for handling data that promise to improve the quality of information retrieval. Ontologies are the backbone of such knowledge systems. This paper presents a framework for the semantic representation of a large repository of content in the domain of FST.
  4. Kruk, S.R.; Cygan, M.; Gzella, A.; Woroniecki, T.; Dabrowski, M.: JeromeDL: the social semantic digital library (2009) 0.01
    0.012803378 = product of:
      0.064016886 = sum of:
        0.064016886 = weight(_text_:bibliographic in 3383) [ClassicSimilarity], result of:
          0.064016886 = score(doc=3383,freq=4.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.3649729 = fieldWeight in 3383, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=3383)
      0.2 = coord(1/5)
    
    Abstract
    The initial research on semantic digital libraries resulted in the design and implementation of JeromeDL; current research on online social networking and information discovery has delivered new sets of features that were implemented in JeromeDL. Eventually, the digital library was redesigned to follow the architecture of a social semantic digital library. JeromeDL describes each resource using three types of metadata: structure, bibliographic and community. It delivers services leveraging each of these information types. Annotations based on the structural and legacy metadata and on the bibliographic ontology are rendered to users in one mixed representation of library resources. Community annotations are managed by separate services, such as social semantic collaborative filtering or the blogging component.
  5. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.01
    0.009975311 = product of:
      0.049876556 = sum of:
        0.049876556 = sum of:
          0.028511317 = weight(_text_:searching in 1633) [ClassicSimilarity], result of:
            0.028511317 = score(doc=1633,freq=2.0), product of:
              0.18226127 = queryWeight, product of:
                4.0452914 = idf(docFreq=2103, maxDocs=44218)
                0.045055166 = queryNorm
              0.15643102 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0452914 = idf(docFreq=2103, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
          0.021365236 = weight(_text_:22 in 1633) [ClassicSimilarity], result of:
            0.021365236 = score(doc=1633,freq=2.0), product of:
              0.15777552 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045055166 = queryNorm
              0.1354154 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
      0.2 = coord(1/5)
    
    Date
    20. 1.2015 18:30:22
  6. Becker, H.-G.; Förster, F.: Vernetztes Wissen : Ereignisse in der bibliografischen Dokumentation (2010) 0.01
    0.009053354 = product of:
      0.04526677 = sum of:
        0.04526677 = weight(_text_:bibliographic in 3494) [ClassicSimilarity], result of:
          0.04526677 = score(doc=3494,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.2580748 = fieldWeight in 3494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=3494)
      0.2 = coord(1/5)
    
    Abstract
    The memory institutions library, museum, and archive each have their own models for describing the objects and materials they hold. For more precise bibliographic description, the library field created the static, user-needs-oriented model "Functional Requirements for Bibliographic Records" (FRBR); its imprecise concept of "work" is discussed here, as is the difficulty of transferring the model to non-book materials. The museum world bases the presentation of its holdings on the CIDOC Conceptual Reference Model (CRM), which has proven helpful for describing heterogeneous museum objects, i.e. artefacts of artistic and intellectual creation. In mutual exchange between IFLA and ICOM, FRBR was harmonized with CRM. The result, FRBRoo (object-oriented FRBR), shows its advantages on the one hand in a stricter interpretation of the Group 1 entities of the FRBR model and on the other in a more precise modelling of processes and events. Examples of the application of FRBRoo demonstrate its added value for the scholarly description of handwritten, printed, and online sources, works of the performing arts, maps, and music materials within a CRM-based database.
  7. Melgar Estrada, L.M.: Topic maps from a knowledge organization perspective (2011) 0.01
    0.009053354 = product of:
      0.04526677 = sum of:
        0.04526677 = weight(_text_:bibliographic in 4298) [ClassicSimilarity], result of:
          0.04526677 = score(doc=4298,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.2580748 = fieldWeight in 4298, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=4298)
      0.2 = coord(1/5)
    
    Abstract
    This article comprises a literature review and conceptual analysis of Topic Maps, the ISO standard for representing information about the structure of information resources, according to the principles of Knowledge Organization (KO). Using the main principles of this discipline, the study shows how Topic Maps is proposed as an ontology model independent of technology. Topic Maps constitutes a 'bibliographic' meta-language able to represent, extend, and integrate almost all existing Knowledge Organization Systems (KOS) in a standards-based generic model applicable to digital content and to the Web. This report also presents an inventory of the current applications of Topic Maps in Libraries, Archives, and Museums (LAM), as well as in the Digital Humanities. Finally, some directions for further research are suggested, which relate Topic Maps to the main research trends in KO.
  8. Sperber, W.; Ion, P.D.F.: Content analysis and classification in mathematics (2011) 0.01
    0.009053354 = product of:
      0.04526677 = sum of:
        0.04526677 = weight(_text_:bibliographic in 4818) [ClassicSimilarity], result of:
          0.04526677 = score(doc=4818,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.2580748 = fieldWeight in 4818, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=4818)
      0.2 = coord(1/5)
    
    Abstract
    The number of publications in mathematics increases faster each year. Presently, far more than 100,000 mathematically relevant journal articles and books are published annually. Efficient and high-quality content analysis of this material is important for mathematical bibliographic services such as ZBMath or MathSciNet. Content analysis has different facets and levels: classification, keywords, abstracts and reviews, and (in the future) formula analysis. It is the opinion of the authors that the different levels have to be enhanced and combined using the methods and technology of the Semantic Web. In the presentation, the problems and deficits of the existing methods and tools, the state of the art, and current activities are discussed. As a first step, the Mathematics Subject Classification (MSC) scheme has been encoded with the Simple Knowledge Organization System (SKOS) and the Resource Description Framework (RDF) in its recent revision to MSC2010. The use of SKOS in principle opens new possibilities for the enrichment and wider deployment of this classification scheme and for machine-based content analysis of mathematical publications.
  9. Campbell, D.G.: Farradane's relational indexing and its relationship to hyperlinking in Alzheimer's information (2012) 0.01
    0.009053354 = product of:
      0.04526677 = sum of:
        0.04526677 = weight(_text_:bibliographic in 847) [ClassicSimilarity], result of:
          0.04526677 = score(doc=847,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.2580748 = fieldWeight in 847, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=847)
      0.2 = coord(1/5)
    
    Abstract
    In an ongoing investigation of the relationship between Jason Farradane's relational indexing principles and concept combination in Web-based information on Alzheimer's Disease, the hyperlinks of three consumer health information websites are examined to see how well the linking relationships map to Farradane's relational operators, as well as to the linking attributes in HTML 5. The links were found to be largely bibliographic in nature, and as such mapped well onto HTML 5. Farradane's operators were less effective at capturing the individual links; nonetheless, the two dimensions of his relational matrix, association and discrimination, reveal a crucial underlying strategy of the emotionally charged mediation between complex information and the users who consult it under severe stress.
  10. Zhang, L.: Linking information through function (2014) 0.01
    0.009053354 = product of:
      0.04526677 = sum of:
        0.04526677 = weight(_text_:bibliographic in 1526) [ClassicSimilarity], result of:
          0.04526677 = score(doc=1526,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.2580748 = fieldWeight in 1526, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=1526)
      0.2 = coord(1/5)
    
    Abstract
    How information resources can be meaningfully related has been addressed in contexts from bibliographic entries to hyperlinks and, more recently, linked data. The genre structure and relationships among genre structure constituents shed new light on organizing information by purpose or function. This study examines the relationships among a set of functional units previously constructed in a taxonomy, each of which is a chunk of information embedded in a document and is distinct in terms of its communicative function. Through a card-sort study, relationships among functional units were identified with regard to their occurrence and function. The findings suggest that a group of functional units can be identified, collocated, and navigated by particular relationships. Understanding how functional units are related to each other is significant in linking information pieces in documents to support finding, aggregating, and navigating information in a distributed information environment.
  11. Buizza, G.: Subject analysis and indexing : an "Italian version" of the analytico-synthetic model (2011) 0.01
    0.009053354 = product of:
      0.04526677 = sum of:
        0.04526677 = weight(_text_:bibliographic in 1812) [ClassicSimilarity], result of:
          0.04526677 = score(doc=1812,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.2580748 = fieldWeight in 1812, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=1812)
      0.2 = coord(1/5)
    
    Series
    IFLA series on bibliographic control; vol. 42
  12. Broughton, V.: Language related problems in the construction of faceted terminologies and their automatic management (2008) 0.01
    0.0075444616 = product of:
      0.03772231 = sum of:
        0.03772231 = weight(_text_:bibliographic in 2497) [ClassicSimilarity], result of:
          0.03772231 = score(doc=2497,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.21506234 = fieldWeight in 2497, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2497)
      0.2 = coord(1/5)
    
    Content
    The paper describes current work on the generation of a thesaurus format from the schedules of the Bliss Bibliographic Classification, 2nd edition (BC2). The practical problems that occur in moving from a concept-based approach to a terminological approach cluster around issues of vocabulary control that are not fully addressed in a systematic structure. These difficulties can be exacerbated within domains in the humanities because large numbers of culture-specific terms may need to be accommodated in any thesaurus. The ways in which these problems can be resolved within the context of a semi-automated approach to thesaurus generation have consequences for the management of classification data in the source vocabulary. The way in which the vocabulary is marked up for the purpose of machine manipulation is described, and some of the implications for editorial policy are discussed and examples given. The value of the classification notation as a language-independent representation and mapping tool should not be sacrificed in such an exercise.
  13. Broughton, V.: Facet analysis as a tool for modelling subject domains and terminologies (2011) 0.01
    0.0075444616 = product of:
      0.03772231 = sum of:
        0.03772231 = weight(_text_:bibliographic in 4826) [ClassicSimilarity], result of:
          0.03772231 = score(doc=4826,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.21506234 = fieldWeight in 4826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4826)
      0.2 = coord(1/5)
    
    Abstract
    Facet analysis is proposed as a general theory of knowledge organization, with an associated methodology that may be applied to the development of terminology tools in a variety of contexts and formats. Faceted classifications originated as a means of representing complexity in semantic content that facilitates logical organization and effective retrieval in a physical environment. This is achieved through meticulous analysis of concepts, their structural and functional status (based on fundamental categories), and their inter-relationships. These features provide an excellent basis for the general conceptual modelling of domains, and for the generation of KOS other than systematic classifications. This is demonstrated by the adoption of a faceted approach in many web search and visualization tools, and by the emergence of a facet-based methodology for the construction of thesauri. Current work on the Bliss Bibliographic Classification (Second Edition) is investigating the ways in which the full complexity of faceted structures may be represented through encoded data, capable of generating intellectually and mechanically compatible forms of indexing tools from a single source. It is suggested that a number of research questions relating to the Semantic Web could be tackled through the medium of facet analysis.
  14. Román, J.H.; Hulin, K.J.; Collins, L.M.; Powell, J.E.: Entity disambiguation using semantic networks (2012) 0.01
    0.0075444616 = product of:
      0.03772231 = sum of:
        0.03772231 = weight(_text_:bibliographic in 461) [ClassicSimilarity], result of:
          0.03772231 = score(doc=461,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.21506234 = fieldWeight in 461, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=461)
      0.2 = coord(1/5)
    
    Abstract
    A major stumbling block preventing machines from understanding text is the problem of entity disambiguation. While humans find it easy to determine that a person named in one story is the same person referenced in a second story, machines rely heavily on crude heuristics such as string matching and stemming to make guesses as to whether nouns are coreferent. A key advantage that humans have over machines is the ability to mentally make connections between ideas and, based on these connections, reason how likely two entities are to be the same. Mirroring this natural thought process, we have created a prototype framework for disambiguating entities that is based on connectedness. In this article, we demonstrate it in the practical application of disambiguating authors across a large set of bibliographic records. By representing knowledge from the records as edges in a graph between a subject and an object, we believe that the problem of disambiguating entities reduces to the problem of discovering the most strongly connected nodes in a graph. The knowledge from the records comes in many different forms, such as names of people, date of publication, and themes extracted from the text of the abstract. These different types of knowledge are fused to create the graph required for disambiguation. Furthermore, the resulting graph and framework can be used for more complex operations.
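The connectedness idea described in this abstract can be pictured with a toy graph. The sketch below is a hypothetical illustration, not the authors' framework: the author mentions, attribute nodes, and the common-neighbour score are invented for demonstration.

```python
import networkx as nx

# Toy illustration of disambiguation by connectedness: bibliographic facts
# (coauthors, themes, years) become nodes linked to author mentions, and
# candidate coreferent mentions are scored by how many facts they share.
G = nx.Graph()
G.add_edges_from([
    ("smith_rec1", "coauthor:Lee"), ("smith_rec1", "theme:ontologies"),
    ("smith_rec1", "year:2010"),
    ("smith_rec2", "coauthor:Lee"), ("smith_rec2", "theme:ontologies"),
    ("smith_rec3", "theme:databases"), ("smith_rec3", "year:1998"),
])

def connectedness(a: str, b: str) -> int:
    """Number of attribute nodes shared by two author mentions."""
    return len(list(nx.common_neighbors(G, a, b)))

print(connectedness("smith_rec1", "smith_rec2"))  # 2 -> likely the same person
print(connectedness("smith_rec1", "smith_rec3"))  # 0 -> probably not
```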
  15. Wu, D.; Shi, J.: Classical music recording ontology used in a library catalog (2016) 0.01
    0.0075444616 = product of:
      0.03772231 = sum of:
        0.03772231 = weight(_text_:bibliographic in 3179) [ClassicSimilarity], result of:
          0.03772231 = score(doc=3179,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.21506234 = fieldWeight in 3179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3179)
      0.2 = coord(1/5)
    
    Abstract
    In order to improve the organization of classical music information resources, we constructed a classical music recording ontology, on top of which we then designed an online classical music catalog. Our construction of the classical music recording ontology consisted of three steps: identifying the purpose, analyzing the ontology, and encoding the ontology. We identified the main classes and properties of the domain by investigating classical music recording resources and users' information needs. We implemented the ontology in the Web Ontology Language (OWL) using five steps: transforming the properties, encoding the transformed properties, defining ranges of the properties, constructing individuals, and standardizing the ontology. In constructing the online catalog, we first designed the structure and functions of the catalog based on investigations into users' information needs and information-seeking behaviors. Then we extracted classes and properties of the ontology using the Apache Jena application programming interface (API), and constructed a catalog in the Java environment. The catalog provides a hierarchical main page (built using the Functional Requirements for Bibliographic Records (FRBR) model), a classical music information network and integrated information service; this combination of features greatly eases the task of finding classical music recordings and more information about classical music.
  16. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017) 0.01
    0.0075444616 = product of:
      0.03772231 = sum of:
        0.03772231 = weight(_text_:bibliographic in 3437) [ClassicSimilarity], result of:
          0.03772231 = score(doc=3437,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.21506234 = fieldWeight in 3437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3437)
      0.2 = coord(1/5)
    
    Abstract
    The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination, or triangulation, of these different science maps. In this paper we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method applied individually. The outcomes from the three different approaches can be associated with each other and systematically interpreted to provide insights into the complex multidisciplinary structure of the field of water research.
  17. Branch, F.; Arias, T.; Kennah, J.; Phillips, R.; Windleharth, T.; Lee, J.H.: Representing transmedia fictional worlds through ontology (2017) 0.01
    0.0075444616 = product of:
      0.03772231 = sum of:
        0.03772231 = weight(_text_:bibliographic in 3958) [ClassicSimilarity], result of:
          0.03772231 = score(doc=3958,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.21506234 = fieldWeight in 3958, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3958)
      0.2 = coord(1/5)
    
    Abstract
    Currently, there is no structured data standard for representing elements commonly found in transmedia fictional worlds. Although there are websites dedicated to individual universes, the information found on these sites separates out the various formats, concentrates only on the bibliographic aspects of the material, and is only searchable via full text. We have created an ontological model that will allow various user groups interested in transmedia to search for and retrieve the information contained in these worlds based upon their structure. We conducted a domain analysis and user studies based on the contents of Harry Potter, Lord of the Rings, the Marvel Universe, and Star Wars in order to build a new model using the Web Ontology Language (OWL) and an artificial-intelligence reasoning engine. This model can infer connections between transmedia properties such as characters, elements of power, items, places, events, and so on. It will facilitate better search and retrieval of the information contained within these vast story universes for all users interested in them. The result of this project is an OWL ontology reflecting real user needs based upon user research, which is intuitive for users and can be used by artificial intelligence systems.
  18. Campos, L.M.: Princípios teóricos usados na elaboração de ontologias e sua influência na recuperação da informação com uso de inferências [Theoretical principles used in ontology building and their influence on information retrieval using inferences] (2021) 0.01
    0.0075444616 = product of:
      0.03772231 = sum of:
        0.03772231 = weight(_text_:bibliographic in 826) [ClassicSimilarity], result of:
          0.03772231 = score(doc=826,freq=2.0), product of:
            0.17540175 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.045055166 = queryNorm
            0.21506234 = fieldWeight in 826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=826)
      0.2 = coord(1/5)
    
    Abstract
    Different instruments of knowledge organization reflect different possibilities for information retrieval. In this context, ontologies have a distinct potential because they allow knowledge discovery, which can be used to retrieve information in a more flexible way. However, this potential can be affected by the theoretical principles adopted in ontology building. The aim of this paper is to discuss, in an introductory way, how a (not exhaustive) set of theoretical principles can influence one aspect of ontologies: their use to obtain inferences. In this context, the role of Ingetraut Dahlberg's Theory of Concept is discussed. The methodology is exploratory and qualitative, and from a technical point of view it uses bibliographic research supported by the content analysis method. A small example of application is also presented as a proof of concept. As results, a discussion of the influence of conceptual definition on subsumption inferences is presented; theoretical contributions that should guide the formation of the hierarchical structures on which such inferences rest are suggested; and examples are provided of how the absence of such contributions can lead to erroneous inferences.
  19. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.01
    0.0070547215 = product of:
      0.035273608 = sum of:
        0.035273608 = product of:
          0.070547216 = sum of:
            0.070547216 = weight(_text_:searching in 99) [ClassicSimilarity], result of:
              0.070547216 = score(doc=99,freq=6.0), product of:
                0.18226127 = queryWeight, product of:
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.045055166 = queryNorm
                0.38706642 = fieldWeight in 99, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.0452914 = idf(docFreq=2103, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=99)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    A new model is proposed for retrieving information by automatically building a semantic metatext structure for texts, which allows searching for and extracting discourse and semantic information according to certain linguistic categorizations. This paper presents approaches for searching and mining full text with semantic categories. The model is built from two engines. The first one, called EXCOM (Djioua et al., 2006; Alrahabi, 2010), is an automatic system for text annotation related to discourse and semantic maps, which are specifications of general linguistic ontologies founded on Applicative and Cognitive Grammar. The annotation layer uses a linguistic method called Contextual Exploration, which handles the polysemic values of a term in texts. Several 'semantic maps' underlying 'points of view' for text mining guide this automatic annotation process. The second engine uses the semantically annotated texts produced by the first to create a semantic inverted index, which is able to retrieve relevant documents for queries associated with discourse and semantic categories such as definition, quotation, causality, relations between concepts, etc. (Djioua & Desclés, 2007). This semantic indexation process builds a metatext layer for textual contents. Some data and linguistic rule sets, as well as the general architecture that extends third-party software, are described as supplementary information.
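The semantic inverted index described here can be pictured as a map from (semantic category, term) pairs to documents. The snippet below is a simplified, hypothetical illustration, not the EXCOM system; the categories and documents are invented.

```python
from collections import defaultdict

# Sketch of a semantic inverted index: instead of mapping plain terms to
# documents, it maps (discourse/semantic category, term) pairs to the
# documents whose annotated passages carry that category.
annotated_docs = {
    "doc1": [("definition", "ontology"), ("causality", "indexing")],
    "doc2": [("quotation", "ontology"), ("definition", "thesaurus")],
}

index = defaultdict(set)
for doc_id, annotations in annotated_docs.items():
    for category, term in annotations:
        index[(category, term)].add(doc_id)

# Query: documents containing a *definition* of "ontology"
print(sorted(index[("definition", "ontology")]))  # ['doc1']
```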
  20. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.01
    0.0061043533 = product of:
      0.030521767 = sum of:
        0.030521767 = product of:
          0.061043534 = sum of:
            0.061043534 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
              0.061043534 = score(doc=6089,freq=2.0), product of:
                0.15777552 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045055166 = queryNorm
                0.38690117 = fieldWeight in 6089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6089)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Pages
    S.11-22

Languages

  • e 62
  • d 8
  • pt 1