Search (11 results, page 1 of 1)

  • type_ss:"p"
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.11
    0.10862944 = sum of:
      0.082218945 = product of:
        0.24665684 = sum of:
          0.24665684 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
            0.24665684 = score(doc=862,freq=2.0), product of:
              0.43887708 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.051766515 = queryNorm
              0.56201804 = fieldWeight in 862, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.33333334 = coord(1/3)
      0.0264105 = product of:
        0.052821 = sum of:
          0.052821 = weight(_text_:language in 862) [ClassicSimilarity], result of:
            0.052821 = score(doc=862,freq=2.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.26008 = fieldWeight in 862, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.5 = coord(1/2)
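
     The nested "product of / sum of" lines above are the search engine's Lucene explain output for ClassicSimilarity (TF-IDF) scoring. Purely as an illustration (a minimal sketch, not the engine's own code), the following Python snippet reproduces the arithmetic for this first hit, doc 862: tf is the square root of the term frequency, idf is 1 + ln(maxDocs / (docFreq + 1)), and each clause contributes queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm); the constants and the coord factors 1/3 and 1/2 are simply copied from the listing.

     import math

     # Hedged sketch only: reproduces the ClassicSimilarity arithmetic in the
     # explain tree for result 1 (doc 862). Constants are copied from the
     # listing; the coord factors come from the nested boolean sub-queries.

     MAX_DOCS = 44218          # maxDocs reported in the idf lines
     QUERY_NORM = 0.051766515  # queryNorm reported above
     FIELD_NORM = 0.046875     # fieldNorm(doc=862)

     def idf(doc_freq, max_docs=MAX_DOCS):
         # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
         return 1.0 + math.log(max_docs / (doc_freq + 1))

     def clause_score(freq, doc_freq):
         # score = queryWeight * fieldWeight
         #       = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
         tf = math.sqrt(freq)
         term_idf = idf(doc_freq)
         return (term_idf * QUERY_NORM) * (tf * term_idf * FIELD_NORM)

     # term "3a": freq=2, docFreq=24, scaled by coord(1/3)
     # term "language": freq=2, docFreq=2376, scaled by coord(1/2)
     total = clause_score(2.0, 24) / 3 + clause_score(2.0, 2376) / 2
     print(round(total, 8))  # approx. 0.10862944, i.e. the displayed 0.11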
    
    Abstract
     This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges - summary and question answering - prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
     https://arxiv.org/abs/2212.06721
  2. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.06
    0.061551623 = product of:
      0.123103246 = sum of:
        0.123103246 = sum of:
          0.088035 = weight(_text_:language in 1171) [ClassicSimilarity], result of:
            0.088035 = score(doc=1171,freq=8.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.4334667 = fieldWeight in 1171, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1171)
          0.03506824 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
            0.03506824 = score(doc=1171,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.19345059 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1171)
      0.5 = coord(1/2)
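
     Every per-hit breakdown in this listing instantiates the same ClassicSimilarity formula; written out (a reconstruction from the constants shown, assuming the stock Lucene definitions of tf and idf, with coord factors attaching to nested sub-queries where the tree shows them):

     \[
       \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d) \cdot \sum_{t \in q}
         \underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}(q)}_{\text{queryWeight}}
         \cdot
         \underbrace{\sqrt{\mathrm{freq}(t,d)}\;\mathrm{idf}(t)\,\mathrm{fieldNorm}(d)}_{\text{fieldWeight}},
       \qquad
       \mathrm{idf}(t) = 1 + \ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}(t)+1}.
     \]

     For this hit, the two matching clauses ("language", freq 8.0, and "22", freq 2.0) sum to about 0.1231, and coord(1/2) halves that to the displayed 0.06.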
    
    Abstract
     Logical rules are essential for uncovering the logical connections between relations, which could improve reasoning performance and provide interpretable results on knowledge graphs (KGs). Although there have been many efforts to mine meaningful logical rules over KGs, existing methods suffer from computationally intensive searches over the rule space and a lack of scalability to large-scale KGs. Moreover, they often ignore the semantics of relations, which is crucial for uncovering logical connections. Recently, large language models (LLMs) have shown impressive performance in the field of natural language processing and various applications, owing to their emergent abilities and generalizability. In this paper, we propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs. Specifically, the framework is initiated with an LLM-based rule generator, which leverages both the semantic and structural information of KGs to prompt LLMs to generate logical rules. To refine the generated rules, a rule ranking module estimates rule quality by incorporating facts from existing KGs. Finally, a rule validator harnesses the reasoning ability of LLMs to validate the logical correctness of the ranked rules through chain-of-thought reasoning. ChatRule is evaluated on four large-scale KGs with respect to different rule quality metrics and downstream tasks, showing the effectiveness and scalability of our method.
    Date
    23.11.2023 19:07:22
  3. Hausser, R.: Language and nonlanguage cognition (2021) 0.03
    0.03493781 = product of:
      0.06987562 = sum of:
        0.06987562 = product of:
          0.13975124 = sum of:
            0.13975124 = weight(_text_:language in 255) [ClassicSimilarity], result of:
              0.13975124 = score(doc=255,freq=14.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.6881071 = fieldWeight in 255, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=255)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language-data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage-data as input. In either case, the output is a content which is stored in the agent's onboard short term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of place holder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.
  4. Großjohann, K.: Gathering-, Harvesting-, Suchmaschinen (1996) 0.03
    0.02975639 = product of:
      0.05951278 = sum of:
        0.05951278 = product of:
          0.11902556 = sum of:
            0.11902556 = weight(_text_:22 in 3227) [ClassicSimilarity], result of:
              0.11902556 = score(doc=3227,freq=4.0), product of:
                0.18127751 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051766515 = queryNorm
                0.6565931 = fieldWeight in 3227, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3227)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    7. 2.1996 22:38:41
    Pages
     22 p.
  5. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022) 0.02
    0.019685227 = product of:
      0.039370455 = sum of:
        0.039370455 = product of:
          0.07874091 = sum of:
            0.07874091 = weight(_text_:language in 851) [ClassicSimilarity], result of:
              0.07874091 = score(doc=851,freq=10.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.38770443 = fieldWeight in 851, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.03125 = fieldNorm(doc=851)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Literature review articles are essential to summarize the related work in the selected field. However, covering all related studies takes too much time and effort. This study questions how Artificial Intelligence can be used in this process. We used ChatGPT to create a literature review article to show the stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of papers from the last three years (2020, 2021 and 2022) were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the iThenticate tool. This article is the first attempt to show that the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by ChatGPT.
     1. Introduction
     OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. OpenAI ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text, allowing it to engage in conversation with users naturally and intuitively. OpenAI ChatGPT is trained on a large dataset of human conversations, allowing it to understand and respond to a wide range of topics and contexts. It can be used in various applications, such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may need help to change academic writing practices. However, it can provide information and guidance on ways to improve people's academic writing skills.
  6. Lehmann, F.: Semiosis complicates high-level ontology (2000) 0.02
    0.018675046 = product of:
      0.037350092 = sum of:
        0.037350092 = product of:
          0.074700184 = sum of:
            0.074700184 = weight(_text_:language in 5087) [ClassicSimilarity], result of:
              0.074700184 = score(doc=5087,freq=4.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.3678087 = fieldWeight in 5087, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5087)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     For automated question-answering, natural-language understanding, semantic integration of different databases/standards/thesauri/etc., you need a big complicated ontology of concepts and a logical language to combine them. Cyc (www.cyc.com) is such a system. It's good for your upper ontology to be systematic and clear. One way is to have a small number of well-defined distinctions at the top, by which all more specific concepts are partitioned. This is a system of "factors", or "facets" in Ranganathan's sense (Iyer 1995), much like Aristotle's "differentia" in his "categories", as promoted in John Sowa's "ontological crystal". Practical considerations have driven Cyc's builders to mess up the neatness of such upper divisions. In particular, the simplicity of some very high "factors" is confounded, for practical use, by the occurrence in our world of semiosis and representation. This talk will report on some of our experiences.
  7. Lund, B.D.: A brief review of ChatGPT : its value and the underlying GPT technology (2023) 0.02
    0.018675046 = product of:
      0.037350092 = sum of:
        0.037350092 = product of:
          0.074700184 = sum of:
            0.074700184 = weight(_text_:language in 873) [ClassicSimilarity], result of:
              0.074700184 = score(doc=873,freq=4.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.3678087 = fieldWeight in 873, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=873)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     In this review paper, ChatGPT, a public tool developed by OpenAI that utilizes GPT technology to fulfill a range of text-based requests, is examined. ChatGPT is a sophisticated chatbot capable of understanding and interpreting user requests, generating appropriate responses in nearly natural human language, and completing advanced tasks such as writing thank-you letters and addressing productivity issues. The details of how ChatGPT works, as well as the potential impacts of this technology on various industries, are discussed. The concept of the Generative Pre-Trained Transformer (GPT), the language model on which ChatGPT is based, is also explored, as well as the process of unsupervised pretraining and supervised fine-tuning that is used to refine the GPT algorithm. A letter written by ChatGPT to a colleague from Iran is presented as an example of the chatbot's capabilities.
  8. Wätjen, H.-J.: Mensch oder Maschine? : Auswahl und Erschließung von Informationsressourcen im Internet (1996) 0.02
    0.01753412 = product of:
      0.03506824 = sum of:
        0.03506824 = product of:
          0.07013648 = sum of:
            0.07013648 = weight(_text_:22 in 3161) [ClassicSimilarity], result of:
              0.07013648 = score(doc=3161,freq=2.0), product of:
                0.18127751 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051766515 = queryNorm
                0.38690117 = fieldWeight in 3161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3161)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 2.1996 15:40:22
  9. Slavic, A.: Interface to classification : some objectives and options (2006) 0.01
    0.01320525 = product of:
      0.0264105 = sum of:
        0.0264105 = product of:
          0.052821 = sum of:
            0.052821 = weight(_text_:language in 2131) [ClassicSimilarity], result of:
              0.052821 = score(doc=2131,freq=2.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.26008 = fieldWeight in 2131, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2131)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is a preprint to be published in the Extensions & Corrections to the UDC. The paper explains the basic functions of browsing and searching that need to be supported in relation to analytico-synthetic classifications such as Universal Decimal Classification (UDC), irrespective of any specific, real-life implementation. UDC is an example of a semi-faceted system that can be used, for instance, for both post-coordinate searching and hierarchical/facet browsing. The advantages of using a classification for IR, however, depend on the strength of the GUI, which should provide a user-friendly interface to classification browsing and searching. The power of this interface is in supporting visualisation that will 'convert' what is potentially a user-unfriendly indexing language based on symbols, to a subject presentation that is easy to understand, search and navigate. A summary of the basic functions of searching and browsing a classification that may be provided on a user-friendly interface is given and examples of classification browsing interfaces are provided.
  10. Guizzardi, G.; Guarino, N.: Semantics, ontology and explanation (2023) 0.01
    0.01320525 = product of:
      0.0264105 = sum of:
        0.0264105 = product of:
          0.052821 = sum of:
            0.052821 = weight(_text_:language in 976) [ClassicSimilarity], result of:
              0.052821 = score(doc=976,freq=2.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.26008 = fieldWeight in 976, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=976)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The terms 'semantics' and 'ontology' are increasingly appearing together with 'explanation', not only in the scientific literature, but also in organizational communication. However, all of these terms are also being significantly overloaded. In this paper, we discuss their strong relation under particular interpretations. Specifically, we discuss a notion of explanation termed ontological unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications) by revealing their ontological commitment in terms of their assumed truthmakers, i.e., the entities in one's ontology that make the propositions in those descriptions true. To illustrate this idea, we employ an ontological theory of relations to explain (by revealing the hidden semantics of) a very simple symbolic model encoded in the standard modeling language UML. We also discuss the essential role played by ontology-driven conceptual models (resulting from this form of explanation processes) in properly supporting semantic interoperability tasks. Finally, we discuss the relation between ontological unpacking and other forms of explanation in philosophy and science, as well as in the area of Artificial Intelligence.
  11. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.01
    0.011004375 = product of:
      0.02200875 = sum of:
        0.02200875 = product of:
          0.0440175 = sum of:
            0.0440175 = weight(_text_:language in 5787) [ClassicSimilarity], result of:
              0.0440175 = score(doc=5787,freq=2.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.21673335 = fieldWeight in 5787, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5787)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study considers the expressiveness (that is the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also implemented. Applying a comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC). These are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept, and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations, and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, while thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts, rather than the concepts themselves.