Search (7 results, page 1 of 1)

  • Filter: type_ss:"p"
  • Filter: type_ss:"el"
  1. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022) 0.02
    0.015266454 = product of:
      0.053432588 = sum of:
        0.023676997 = weight(_text_:systems in 851) [ClassicSimilarity], result of:
          0.023676997 = score(doc=851,freq=4.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.19207339 = fieldWeight in 851, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=851)
        0.02975559 = product of:
          0.05951118 = sum of:
            0.05951118 = weight(_text_:applications in 851) [ClassicSimilarity], result of:
              0.05951118 = score(doc=851,freq=6.0), product of:
                0.17659263 = queryWeight, product of:
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.04011181 = queryNorm
                0.33699697 = fieldWeight in 851, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.4025097 = idf(docFreq=1471, maxDocs=44218)
                  0.03125 = fieldNorm(doc=851)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
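    The tree above is Lucene's "explain" output for this hit's ClassicSimilarity (TF-IDF) score. As a minimal sketch, using only the constants shown in the tree and the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))), the following Python snippet reproduces the score; nothing in it comes from the underlying index itself:

      import math

      MAX_DOCS = 44218          # maxDocs from the explain tree
      QUERY_NORM = 0.04011181   # queryNorm shared by every query term

      def idf(doc_freq):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def clause_score(freq, doc_freq, field_norm):
          tf = math.sqrt(freq)                       # tf = sqrt(termFreq)
          query_weight = idf(doc_freq) * QUERY_NORM  # e.g. 0.12327058 for "systems"
          field_weight = tf * idf(doc_freq) * field_norm
          return query_weight * field_weight

      # "systems": freq=4, docFreq=5561, fieldNorm=0.03125        -> ~0.023676997
      systems = clause_score(4.0, 5561, 0.03125)
      # "applications": freq=6, docFreq=1471, fieldNorm=0.03125,
      # then the inner coord(1/2)                                  -> ~0.02975559
      applications = clause_score(6.0, 1471, 0.03125) * 0.5
      # total: sum of clauses, scaled by coord(2/7)                -> ~0.015266454
      print((systems + applications) * 2.0 / 7.0)

    The remaining hits each match only a single query term, so their scores carry the smaller coord(1/7) factor and fall below this one.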
    
    Abstract
    Literature review articles are essential for summarizing the related work in a selected field. However, covering all related studies takes too much time and effort. This study examines how Artificial Intelligence can be used in this process. We used ChatGPT to create a literature review article in order to show the current stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of papers from the last three years (2020, 2021 and 2022) were obtained from the Google Scholar search results for the keyword "Digital twin in healthcare" and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts showed significant matches when checked with the iThenticate tool. This article is the first attempt to show that the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by ChatGPT.
    1. Introduction
    OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their own applications and systems. OpenAI ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text, allowing it to engage in conversation with users naturally and intuitively. It is trained on a large dataset of human conversations, allowing it to understand and respond to a wide range of topics and contexts, and it can be used in various applications, such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT alone may not be enough to change academic writing practices; however, it can provide information and guidance on ways to improve people's academic writing skills.
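    Since the introduction notes that ChatGPT is available through the OpenAI API, a minimal sketch of how the paraphrasing step described in the abstract could be scripted against that API follows. The model name, prompt wording, and placeholder abstract are illustrative assumptions only; the paper does not state that the authors used the API rather than the web interface.

      from openai import OpenAI

      client = OpenAI()  # reads the OPENAI_API_KEY environment variable

      abstract = "..."   # placeholder for one abstract collected from Google Scholar

      # Hypothetical paraphrasing request; model and prompt are assumptions.
      response = client.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[
              {"role": "user",
               "content": "Paraphrase the following abstract for a literature review:\n" + abstract},
          ],
      )
      print(response.choices[0].message.content)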
  2. Wilk, D.: Problems in the use of Library of Congress Subject Headings as the basis for Hebrew subject headings in the Bar-Ilan University Library (2000) 0.01
    0.0061901403 = product of:
      0.04333098 = sum of:
        0.04333098 = weight(_text_:library in 5416) [ClassicSimilarity], result of:
          0.04333098 = score(doc=5416,freq=4.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.4108404 = fieldWeight in 5416, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.078125 = fieldNorm(doc=5416)
      0.14285715 = coord(1/7)
    
  3. Elazar, D.H.: The making of a classification scheme for libraries of Judaica (2000) 0.00
    0.0047005103 = product of:
      0.03290357 = sum of:
        0.03290357 = product of:
          0.06580714 = sum of:
            0.06580714 = weight(_text_:29 in 5400) [ClassicSimilarity], result of:
              0.06580714 = score(doc=5400,freq=2.0), product of:
                0.14110081 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04011181 = queryNorm
                0.46638384 = fieldWeight in 5400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5400)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    1.11.2000 14:55:29
  4. Schöneberg, U.; Gödert, W.: Erschließung mathematischer Publikationen mittels linguistischer Verfahren (2012) 0.00
    0.0023502551 = product of:
      0.016451785 = sum of:
        0.016451785 = product of:
          0.03290357 = sum of:
            0.03290357 = weight(_text_:29 in 1055) [ClassicSimilarity], result of:
              0.03290357 = score(doc=1055,freq=2.0), product of:
                0.14110081 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04011181 = queryNorm
                0.23319192 = fieldWeight in 1055, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1055)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    12. 9.2013 12:29:05
  5. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020) 0.00
    0.0023502551 = product of:
      0.016451785 = sum of:
        0.016451785 = product of:
          0.03290357 = sum of:
            0.03290357 = weight(_text_:29 in 39) [ClassicSimilarity], result of:
              0.03290357 = score(doc=39,freq=2.0), product of:
                0.14110081 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04011181 = queryNorm
                0.23319192 = fieldWeight in 39, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=39)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    17.11.2020 11:29:00
  6. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.00
    0.0013178664 = product of:
      0.009225064 = sum of:
        0.009225064 = product of:
          0.018450128 = sum of:
            0.018450128 = weight(_text_:science in 5365) [ClassicSimilarity], result of:
              0.018450128 = score(doc=5365,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.17461908 = fieldWeight in 5365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5365)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to, and uses of, Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analysing document corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the available works show that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  7. Guizzardi, G.; Guarino, N.: Semantics, ontology and explanation (2023) 0.00
    0.0013178664 = product of:
      0.009225064 = sum of:
        0.009225064 = product of:
          0.018450128 = sum of:
            0.018450128 = weight(_text_:science in 976) [ClassicSimilarity], result of:
              0.018450128 = score(doc=976,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.17461908 = fieldWeight in 976, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=976)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    The terms 'semantics' and 'ontology' are increasingly appearing together with 'explanation', not only in the scientific literature, but also in organizational communication. However, all of these terms are also being significantly overloaded. In this paper, we discuss their strong relation under particular interpretations. Specifically, we discuss a notion of explanation termed ontological unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications) by revealing their ontological commitment in terms of their assumed truthmakers, i.e., the entities in one's ontology that make the propositions in those descriptions true. To illustrate this idea, we employ an ontological theory of relations to explain (by revealing the hidden semantics of) a very simple symbolic model encoded in the standard modeling language UML. We also discuss the essential role played by ontology-driven conceptual models (resulting from this form of explanation processes) in properly supporting semantic interoperability tasks. Finally, we discuss the relation between ontological unpacking and other forms of explanation in philosophy and science, as well as in the area of Artificial Intelligence.