Search (43 results, page 2 of 3)

  • Filter: type_ss:"p"
  1. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.00
    5.0667033E-5 = product of:
      0.0015200109 = sum of:
        0.0015200109 = product of:
          0.0045600324 = sum of:
            0.0045600324 = weight(_text_:a in 5787) [ClassicSimilarity], result of:
              0.0045600324 = score(doc=5787,freq=24.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.22065444 = fieldWeight in 5787, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5787)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
    Abstract
    This study considers the expressiveness (that is, the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed, based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also applied. For the comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC); these are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and the machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, whereas thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts rather than on the concepts themselves.
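    The score breakdown attached to each hit is Lucene's explain() output for ClassicSimilarity, i.e. classic TF-IDF: per matching term, score = queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf(freq) * idf * fieldNorm, with tf(freq) = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the coord(m/n) factors then scale the result by the fraction of query clauses that matched. A minimal sketch that replays the numbers of hit 1 (the constants are copied from the breakdown above; the formulas are Lucene's, but this is an illustration, not library code):

      import math

      # Constants from the explain() tree for hit 1 (doc 5787, term "a")
      freq, doc_freq, max_docs = 24.0, 37942, 44218
      query_norm, field_norm = 0.017922899, 0.0390625

      tf = math.sqrt(freq)                               # 4.8989797
      idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 1.153047
      query_weight = idf * query_norm                    # 0.020665944
      field_weight = tf * idf * field_norm               # 0.22065444
      term_score = query_weight * field_weight           # 0.0045600324

      # coord(1/3) and coord(1/30): fraction of query clauses that matched
      print(term_score * (1.0 / 3.0) * (1.0 / 30.0))     # ~5.0667e-05
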
  2. Yitzhaki, M.: A draft version of a consolidated thesaurus for the rapidly growing field of alternative medicine (2000) 0.00
    4.9643342E-5 = product of:
      0.0014893002 = sum of:
        0.0014893002 = product of:
          0.0044679004 = sum of:
            0.0044679004 = weight(_text_:a in 5417) [ClassicSimilarity], result of:
              0.0044679004 = score(doc=5417,freq=4.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.2161963 = fieldWeight in 5417, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5417)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
  3. Hausser, R.: Language and nonlanguage cognition (2021) 0.00
    4.9643342E-5 = product of:
      0.0014893002 = sum of:
        0.0014893002 = product of:
          0.0044679004 = sum of:
            0.0044679004 = weight(_text_:a in 255) [ClassicSimilarity], result of:
              0.0044679004 = score(doc=255,freq=16.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.2161963 = fieldWeight in 255, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=255)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
    Abstract
    A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage data as input. In either case, the output is a content which is stored in the agent's on-board short-term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of place-holder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.
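    As a rough illustration of the type/token mechanism described above (recognition applies a concept type to raw data and yields a concept token), here is a minimal sketch; the class names and the matching rule are invented for illustration and are not part of DBS:

      from dataclasses import dataclass
      from typing import Optional

      @dataclass(frozen=True)
      class ConceptType:
          # A type fixes an attribute and a range of admissible raw values.
          name: str
          attribute: str
          low: float
          high: float

      @dataclass(frozen=True)
      class ConceptToken:
          # A token instantiates a type with the value found in raw data.
          type_name: str
          value: float

      def recognize(ctype: ConceptType, raw: dict) -> Optional[ConceptToken]:
          # Apply a concept type to raw data; return a token on a match.
          value = raw.get(ctype.attribute)
          if value is not None and ctype.low <= value <= ctype.high:
              return ConceptToken(ctype.name, value)
          return None

      square = ConceptType("square-ish", "aspect_ratio", 0.9, 1.1)
      print(recognize(square, {"aspect_ratio": 1.03}))  # token goes to short-term memory
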
  4. Bhattacharyya, G.: Classifying by UDC and CC: a comparative study (1972) 0.00
    4.6804194E-5 = product of:
      0.0014041257 = sum of:
        0.0014041257 = product of:
          0.004212377 = sum of:
            0.004212377 = weight(_text_:a in 1923) [ClassicSimilarity], result of:
              0.004212377 = score(doc=1923,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.20383182 = fieldWeight in 1923, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=1923)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
  5. Satija, M.P.: Classification and indexing in India : a state-of-the-art (1992) 0.00
    4.6804194E-5 = product of:
      0.0014041257 = sum of:
        0.0014041257 = product of:
          0.004212377 = sum of:
            0.004212377 = weight(_text_:a in 1539) [ClassicSimilarity], result of:
              0.004212377 = score(doc=1539,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.20383182 = fieldWeight in 1539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=1539)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
  6. Green, R.: Relationships in the Dewey Decimal Classification (DDC) : plan of study (2008) 0.00
    4.6804194E-5 = product of:
      0.0014041257 = sum of:
        0.0014041257 = product of:
          0.004212377 = sum of:
            0.004212377 = weight(_text_:a in 3397) [ClassicSimilarity], result of:
              0.004212377 = score(doc=3397,freq=8.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.20383182 = fieldWeight in 3397, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3397)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
    Abstract
    EPC Exhibit 129-36.1 presented intermediate results of a project to connect Relative Index terms to topics associated with classes and to determine if those Relative Index terms approximated the whole of the corresponding class or were in standing room in the class. The Relative Index project constitutes the first stage of a long(er)-term project to instill a more systematic treatment of relationships within the DDC. The present exhibit sets out a plan of study for that long-term project.
  7. Lund, B.D.: A brief review of ChatGPT : its value and the underlying GPT technology (2023) 0.00
    4.2992393E-5 = product of:
      0.0012897718 = sum of:
        0.0012897718 = product of:
          0.0038693151 = sum of:
            0.0038693151 = weight(_text_:a in 873) [ClassicSimilarity], result of:
              0.0038693151 = score(doc=873,freq=12.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.18723148 = fieldWeight in 873, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=873)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
    Abstract
    This review paper examines ChatGPT, a public tool developed by OpenAI that uses GPT technology to fulfill a range of text-based requests. ChatGPT is a sophisticated chatbot capable of understanding and interpreting user requests, generating appropriate responses in nearly natural human language, and completing advanced tasks such as writing thank-you letters and addressing productivity issues. The details of how ChatGPT works, as well as the potential impacts of this technology on various industries, are discussed. The concept of the Generative Pre-trained Transformer (GPT), the language model on which ChatGPT is based, is also explored, as is the process of unsupervised pretraining and supervised fine-tuning used to refine the GPT algorithm. A letter written by ChatGPT to a colleague from Iran is presented as an example of the chatbot's capabilities.
  8. Seymour, C.: A time to build : Israeli cataloging in transition (2000) 0.00
    4.095367E-5 = product of:
      0.00122861 = sum of:
        0.00122861 = product of:
          0.00368583 = sum of:
            0.00368583 = weight(_text_:a in 5412) [ClassicSimilarity], result of:
              0.00368583 = score(doc=5412,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.17835285 = fieldWeight in 5412, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5412)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
  9. Oberhauser, O.; Labner, J.: Einführung der automatischen Indexierung im Österreichischen Verbundkatalog? : Bericht über eine empirische Studie [Introduction of automatic indexing in the Austrian union catalogue? : report on an empirical study] (2003) 0.00
    4.095367E-5 = product of:
      0.00122861 = sum of:
        0.00122861 = product of:
          0.00368583 = sum of:
            0.00368583 = weight(_text_:a in 1878) [ClassicSimilarity], result of:
              0.00368583 = score(doc=1878,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.17835285 = fieldWeight in 1878, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1878)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
  10. Lehmann, F.: Semiosis complicates high-level ontology (2000) 0.00
    3.924651E-5 = product of:
      0.0011773953 = sum of:
        0.0011773953 = product of:
          0.003532186 = sum of:
            0.003532186 = weight(_text_:a in 5087) [ClassicSimilarity], result of:
              0.003532186 = score(doc=5087,freq=10.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.1709182 = fieldWeight in 5087, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5087)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
    Abstract
    For automated question-answering, natural-language understanding, semantic integration of different databases/standards/thesauri/etc., you need a big complicated ontology of concepts and a logical language to combine them. Cyc (www.cyc.com) is such a system. It is good for your upper ontology to be systematic and clear. One way is to have a small number of well-defined distinctions at the top, by which all more specific concepts are partitioned. This is a system of "factors", or "facets" in Ranganathan's sense (Iyer 1995), much like Aristotle's "differentia" in his "Categories", as promoted in John Sowa's "ontological crystal". Practical considerations have driven Cyc's builders to mess up the neatness of such upper divisions. In particular, the simplicity of some very high "factors" is confounded, for practical use, by the occurrence in our world of semiosis and representation. This talk will report on some of our experiences.
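    The "small number of well-defined distinctions at the top, by which all more specific concepts are partitioned" can be pictured as orthogonal facets whose value combinations tile the upper ontology; a minimal sketch (the facet names are invented for illustration):

      from enum import Enum
      from itertools import product

      # Each top-level "factor" partitions the domain along one distinction.
      class Concreteness(Enum):
          ABSTRACT = "abstract"
          CONCRETE = "concrete"

      class Dependence(Enum):
          INDEPENDENT = "independent"
          DEPENDENT = "dependent"

      # The top of the ontology is the cross-product of the facet values,
      # and every more specific concept falls into exactly one cell.
      upper_cells = list(product(Concreteness, Dependence))
      print(upper_cells)

    As the abstract notes, semiosis and representation confound this neatness: a sign is a concrete token whose content may be abstract, so it resists assignment to a single cell.
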
  11. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.00
    3.924651E-5 = product of:
      0.0011773953 = sum of:
        0.0011773953 = product of:
          0.003532186 = sum of:
            0.003532186 = weight(_text_:a in 5365) [ClassicSimilarity], result of:
              0.003532186 = score(doc=5365,freq=10.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.1709182 = fieldWeight in 5365, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5365)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to, and uses of, Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analysing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the available work shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
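    One of the uses identified, refining and improving search expressions, typically starts from a category lookup; a minimal sketch against the public MediaWiki API (the endpoint and parameters are MediaWiki's documented ones; the category name is an arbitrary example):

      import requests

      API = "https://en.wikipedia.org/w/api.php"

      def category_members(category: str, limit: int = 10) -> list[str]:
          # Fetch member page titles of one Wikipedia category (first page of results).
          params = {
              "action": "query",
              "list": "categorymembers",
              "cmtitle": f"Category:{category}",
              "cmlimit": limit,
              "format": "json",
          }
          data = requests.get(API, params=params, timeout=10).json()
          return [m["title"] for m in data["query"]["categorymembers"]]

      # Candidate expansion terms for a query about information retrieval:
      print(category_members("Information retrieval"))
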
  12. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT-generated literature review : digital twin in healthcare (2022) 0.00
    3.7001962E-5 = product of:
      0.0011100589 = sum of:
        0.0011100589 = product of:
          0.0033301765 = sum of:
            0.0033301765 = weight(_text_:a in 851) [ClassicSimilarity], result of:
              0.0033301765 = score(doc=851,freq=20.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.16114321 = fieldWeight in 851, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=851)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
    Abstract
    Literature review articles are essential to summarize the related work in a selected field. However, covering all related studies takes too much time and effort. This study questions how artificial intelligence can be used in this process. We used ChatGPT to create a literature review article to show the stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of papers from the last three years (2020, 2021 and 2022) were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the iThenticate tool. This article is the first attempt to show that the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by ChatGPT.

    1. Introduction
    OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. OpenAI ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text, allowing it to engage in conversation with users naturally and intuitively. OpenAI ChatGPT is trained on a large dataset of human conversations, allowing it to understand and respond to a wide range of topics and contexts. It can be used in various applications, such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may help to change academic writing practices. However, it can also provide information and guidance on ways to improve people's academic writing skills.
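    The paraphrasing step of the workflow described above can be sketched against the current openai Python client; this is an assumption-laden illustration (the study itself used the ChatGPT web interface in 2022, and the model name here is a placeholder):

      from openai import OpenAI  # pip install openai

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def paraphrase(abstract: str) -> str:
          # Ask the model to paraphrase one harvested abstract.
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder; any chat model works
              messages=[
                  {"role": "system", "content": "Paraphrase the given abstract."},
                  {"role": "user", "content": abstract},
              ],
          )
          return resp.choices[0].message.content

      # In the study, abstracts harvested from a Google Scholar search for
      # "Digital twin in healthcare" would be looped through here.
      print(paraphrase("Digital twins support patient-specific simulation."))
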
  13. Couture-Lafleur, R.: The French translation of the Dewey Decimal Classification : The making of a DDC translation (1998) 0.00
    3.510315E-5 = product of:
      0.0010530944 = sum of:
        0.0010530944 = product of:
          0.003159283 = sum of:
            0.003159283 = weight(_text_:a in 3481) [ClassicSimilarity], result of:
              0.003159283 = score(doc=3481,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.15287387 = fieldWeight in 3481, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3481)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
  14. Balikova, M.: The national bibliography of a small country in international context (2000) 0.00
    3.510315E-5 = product of:
      0.0010530944 = sum of:
        0.0010530944 = product of:
          0.003159283 = sum of:
            0.003159283 = weight(_text_:a in 5397) [ClassicSimilarity], result of:
              0.003159283 = score(doc=5397,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.15287387 = fieldWeight in 5397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5397)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
  15. Broughton, V.: A new classification for the literature of religion (2000) 0.00
    3.510315E-5 = product of:
      0.0010530944 = sum of:
        0.0010530944 = product of:
          0.003159283 = sum of:
            0.003159283 = weight(_text_:a in 5398) [ClassicSimilarity], result of:
              0.003159283 = score(doc=5398,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.15287387 = fieldWeight in 5398, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5398)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
  16. Stoklasova, B.: The national bibliography of a small country in international context (2000) 0.00
    3.510315E-5 = product of:
      0.0010530944 = sum of:
        0.0010530944 = product of:
          0.003159283 = sum of:
            0.003159283 = weight(_text_:a in 5415) [ClassicSimilarity], result of:
              0.003159283 = score(doc=5415,freq=2.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.15287387 = fieldWeight in 5415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5415)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
  17. Kemp, A. de: Information provision : a publisher's point of view in changing times and with new technologies (1993) 0.00
    3.3095566E-5 = product of:
      9.928669E-4 = sum of:
        9.928669E-4 = product of:
          0.0029786006 = sum of:
            0.0029786006 = weight(_text_:a in 6235) [ClassicSimilarity], result of:
              0.0029786006 = score(doc=6235,freq=4.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.14413087 = fieldWeight in 6235, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6235)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
  18. Lange, C.; Ion, P.; Dimou, A.; Bratsas, C.; Sperber, W.; Kohlhase, M.; Antoniou, I.: Getting mathematics towards the Web of Data : the case of the Mathematics Subject Classification (2012) 0.00
    3.2705426E-5 = product of:
      9.811628E-4 = sum of:
        9.811628E-4 = product of:
          0.002943488 = sum of:
            0.002943488 = weight(_text_:a in 111) [ClassicSimilarity], result of:
              0.002943488 = score(doc=111,freq=10.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.14243183 = fieldWeight in 111, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=111)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
    Abstract
    The Mathematics Subject Classification (MSC), maintained by the American Mathematical Society's Mathematical Reviews (MR) and FIZ Karlsruhe's Zentralblatt für Mathematik (Zbl), is a scheme for classifying publications in mathematics according to their subjects. While it is widely used, its traditional, idiosyncratic conceptualization and representation requires custom implementations of search, query and annotation support. This did not encourage people to create and explore connections of mathematics to subjects of related domains (e.g. science), and it made the scheme hard to maintain. We have reimplemented the current version, MSC2010, as a Linked Open Dataset using SKOS, with the aim of turning it into the new MSC authority. This paper explains the motivation, details our design considerations, and describes how we realized them in the implementation. We present in-the-field use cases and point out how e-science applications can take advantage of the MSC LOD set. We conclude with a roadmap for bootstrapping the presence of mathematical and mathematics-based science, technology, and engineering knowledge on the Web of Data, where it has been noticeably underrepresented so far, starting from MSC/SKOS as a seed.
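    What "reimplementing MSC2010 using SKOS" amounts to can be sketched with rdflib; the class 68T30 and its label are used for illustration, and the base URI is an assumption, not necessarily the project's published one:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      MSC = Namespace("http://msc2010.org/resources/MSC/2010/")  # assumed base URI

      g = Graph()
      g.bind("skos", SKOS)

      cls = MSC["68T30"]
      g.add((cls, RDF.type, SKOS.Concept))
      g.add((cls, SKOS.notation, Literal("68T30")))
      g.add((cls, SKOS.prefLabel, Literal("Knowledge representation", lang="en")))
      g.add((cls, SKOS.broader, MSC["68Txx"]))  # the hierarchy becomes explicit links

      print(g.serialize(format="turtle"))
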
  19. Guizzardi, G.; Guarino, N.: Semantics, ontology and explanation (2023) 0.00
    3.0400215E-5 = product of:
      9.120064E-4 = sum of:
        9.120064E-4 = product of:
          0.0027360192 = sum of:
            0.0027360192 = weight(_text_:a in 976) [ClassicSimilarity], result of:
              0.0027360192 = score(doc=976,freq=6.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.13239266 = fieldWeight in 976, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=976)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
    Abstract
    The terms 'semantics' and 'ontology' are increasingly appearing together with 'explanation', not only in the scientific literature, but also in organizational communication. However, all of these terms are also being significantly overloaded. In this paper, we discuss their strong relation under particular interpretations. Specifically, we discuss a notion of explanation termed ontological unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications) by revealing their ontological commitment in terms of their assumed truthmakers, i.e., the entities in one's ontology that make the propositions in those descriptions true. To illustrate this idea, we employ an ontological theory of relations to explain (by revealing the hidden semantics of) a very simple symbolic model encoded in the standard modeling language UML. We also discuss the essential role played by ontology-driven conceptual models (resulting from this form of explanation processes) in properly supporting semantic interoperability tasks. Finally, we discuss the relation between ontological unpacking and other forms of explanation in philosophy and science, as well as in the area of Artificial Intelligence.
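    The "unpacking" described, revealing the truthmaker behind a relation in a symbolic model, can be pictured by reifying an association into an explicit relator object; a minimal sketch with invented names:

      from dataclasses import dataclass

      # Packed view: a UML model would only draw an "employed by" association
      # between Person and Organization. Unpacking reveals its truthmaker: an
      # Employment relator that binds both parties and has properties of its own.

      @dataclass
      class Person:
          name: str

      @dataclass
      class Organization:
          name: str

      @dataclass
      class Employment:  # the reified relator, i.e. the truthmaker
          employee: Person
          employer: Organization
          start_year: int

      # "Ada is employed by ACME" is true because this Employment entity exists.
      print(Employment(Person("Ada"), Organization("ACME"), 2021))
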
  20. Pejtersen, A.M.; Jensen, H.; Speck, P.; Villumsen, S.; Weber, S.: Catalogs for children : the Book House project on visualization of database retrieval and classification (1993) 0.00
    2.9252618E-5 = product of:
      8.775785E-4 = sum of:
        8.775785E-4 = product of:
          0.0026327355 = sum of:
            0.0026327355 = weight(_text_:a in 6232) [ClassicSimilarity], result of:
              0.0026327355 = score(doc=6232,freq=8.0), product of:
                0.020665944 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.017922899 = queryNorm
                0.12739488 = fieldWeight in 6232, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6232)
          0.33333334 = coord(1/3)
      0.033333335 = coord(1/30)
    
    Abstract
    This paper describes the Book House system, which is designed to support children's information retrieval in libraries as part of their education. It is a shareware program available on CD-ROM and discs, and comprises functionality for database searching as well as for the classification and storage of book information in the database. The system concept is based on an understanding of children's domain structures and their capabilities for categorization of information needs in connection with their activities in public libraries, in school libraries or in schools. These structures are visualized in the interface by using metaphors and multimedia technology. Through the use of text, images and animation, the Book House supports children - even at a very early age - in learning by doing, in an enjoyable way that plays on their previous experiences with computer games. Both words and pictures can be used for searching; this makes the system suitable for all age groups. Even children who have not yet learned to read properly can, by selecting pictures, search for and find books they would like to have read aloud. Thus, at the very beginning of their school period, they can learn to search for books on their own. For the library community itself, such a system will provide an extended service which will increase the number of children's own searches and also improve the relevance, quality and utilization of the collections in the libraries. Market research on the need for an annual indexing service for books in the Book House format is being prepared by the Danish Library Center.

Languages

  • e (English): 35
  • d (German): 8
