Search (13 results, page 1 of 1)

  • Filter: type_ss:"p"
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.34
    0.33543858 = product of:
      0.78269 = sum of:
        0.054338597 = product of:
          0.16301578 = sum of:
            0.16301578 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.16301578 = score(doc=862,freq=2.0), product of:
                0.29005435 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03421255 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.16301578 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.16301578 = score(doc=862,freq=2.0), product of:
            0.29005435 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03421255 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.16301578 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.16301578 = score(doc=862,freq=2.0), product of:
            0.29005435 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03421255 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.07628826 = weight(_text_:2.0 in 862) [ClassicSimilarity], result of:
          0.07628826 = score(doc=862,freq=2.0), product of:
            0.19842365 = queryWeight, product of:
              5.799733 = idf(docFreq=363, maxDocs=44218)
              0.03421255 = queryNorm
            0.3844716 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.799733 = idf(docFreq=363, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.16301578 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.16301578 = score(doc=862,freq=2.0), product of:
            0.29005435 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03421255 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.16301578 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.16301578 = score(doc=862,freq=2.0), product of:
            0.29005435 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03421255 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.42857143 = coord(6/14)
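The nested explanation above is Lucene's ClassicSimilarity (tf-idf) breakdown for this record. Below is a minimal sketch that recomputes the 0.34 rank score from the figures reported in the tree; the variable names are ours, only the numbers come from the output.

```python
# Minimal sketch: recompute the ClassicSimilarity (tf-idf) explain tree shown
# above for result 1 (doc 862), using only the figures Lucene reports.
from math import isclose, sqrt

QUERY_NORM = 0.03421255   # queryNorm, shared by every clause
FIELD_NORM = 0.046875     # fieldNorm(doc=862)

def clause_weight(idf: float, freq: float = 2.0) -> float:
    """weight = queryWeight * fieldWeight = (idf * queryNorm) * (tf * idf * fieldNorm)."""
    query_weight = idf * QUERY_NORM
    field_weight = sqrt(freq) * idf * FIELD_NORM
    return query_weight * field_weight

rare_term = clause_weight(idf=8.478011)     # the "_text_:3a" / "_text_:2f" clauses
common_term = clause_weight(idf=5.799733)   # the "_text_:2.0" clause

# One rare-term clause is down-weighted by coord(1/3); four appear at full weight.
total = rare_term * (1 / 3) + 4 * rare_term + common_term
score = total * (6 / 14)                    # coord(6/14): 6 of 14 query clauses matched

assert isclose(rare_term, 0.16301578, rel_tol=1e-4)
assert isclose(score, 0.33543858, rel_tol=1e-4)
print(round(score, 2))                      # 0.34, the rank score shown for result 1
```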
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
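The detection step described in the abstract can be approximated with the publicly released RoBERTa-based GPT-2 output detector. A minimal sketch, assuming the `transformers` library is installed and the Hugging Face model id below is still served; this is not the authors' own tooling.

```python
# Minimal sketch: score a passage with the RoBERTa-based GPT-2 output detector,
# in the spirit of the detection step described in the abstract above.
# Assumption: the public checkpoint id below is available on the Hugging Face Hub.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

passage = "I propose to consider the question, 'Can machines think?'"
result = detector(passage)[0]
# The checkpoint labels text as "Real" (human-written) or "Fake" (model-generated).
print(f"{result['label']}: {result['score']:.3f}")
```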
    Source
    https://arxiv.org/abs/2212.06721
  2. Breuer, T.; Tavakolpoursaleh, N.; Schaer, P.; Hienert, D.; Schaible, J.; Castro, L.J.: Online Information Retrieval Evaluation using the STELLA Framework (2022) 0.03
    0.0329498 = product of:
      0.115324296 = sum of:
        0.03350689 = weight(_text_:world in 640) [ClassicSimilarity], result of:
          0.03350689 = score(doc=640,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.25480178 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
        0.02415526 = weight(_text_:web in 640) [ClassicSimilarity], result of:
          0.02415526 = score(doc=640,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.21634221 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
        0.03350689 = weight(_text_:world in 640) [ClassicSimilarity], result of:
          0.03350689 = score(doc=640,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.25480178 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
        0.02415526 = weight(_text_:web in 640) [ClassicSimilarity], result of:
          0.02415526 = score(doc=640,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.21634221 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
      0.2857143 = coord(4/14)
    
    Abstract
    Involving users in early phases of software development has become a common strategy as it enables developers to consider user needs from the beginning. Once a system is in production, new opportunities to observe, evaluate and learn from users emerge as more information becomes available. Gathering information from users to continuously evaluate their behavior is a common practice for commercial software, while the Cranfield paradigm remains the preferred option for Information Retrieval (IR) and recommendation systems in the academic world. Here we introduce the Infrastructures for Living Labs STELLA project, which aims to create an evaluation infrastructure allowing experimental systems to run alongside production web-based academic search systems with real users. STELLA combines user interactions and log file analyses to enable large-scale A/B experiments for academic search.
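At its core, the living-lab setup described above requires assigning each user session to either the production ranker or an experimental one and logging which arm served each result list. A hedged sketch of such session bucketing follows; the names and hashing scheme are illustrative, not the STELLA API.

```python
# Minimal sketch of session-based A/B assignment for a living-lab experiment,
# loosely following the setup described above. Hashing the session id keeps
# assignments stable across requests; all names are illustrative.
import hashlib
from typing import Callable, List

def assign_arm(session_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically map a session to 'control' or 'treatment'."""
    digest = hashlib.sha256(f"{experiment}:{session_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # uniform in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

def serve(session_id: str, query: str,
          production: Callable[[str], List[str]],
          experimental: Callable[[str], List[str]]) -> dict:
    arm = assign_arm(session_id, experiment="living-lab-demo")
    ranking = (experimental if arm == "treatment" else production)(query)
    # The served arm and ranking would be written to the interaction log
    # so that later clicks can be attributed to the right system.
    return {"session": session_id, "arm": arm, "ranking": ranking}

if __name__ == "__main__":
    prod = lambda q: [f"prod-doc-{i}" for i in range(3)]
    exp = lambda q: [f"exp-doc-{i}" for i in range(3)]
    print(serve("session-42", "digital twin healthcare", prod, exp))
```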
  3. Panzer, M.: Dewey Web services : overview (2009) 0.02
    0.02300501 = product of:
      0.16103506 = sum of:
        0.08051753 = weight(_text_:web in 7190) [ClassicSimilarity], result of:
          0.08051753 = score(doc=7190,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.72114074 = fieldWeight in 7190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.15625 = fieldNorm(doc=7190)
        0.08051753 = weight(_text_:web in 7190) [ClassicSimilarity], result of:
          0.08051753 = score(doc=7190,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.72114074 = fieldWeight in 7190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.15625 = fieldNorm(doc=7190)
      0.14285715 = coord(2/14)
    
  4. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022) 0.01
    0.01199371 = product of:
      0.083955966 = sum of:
        0.041977983 = weight(_text_:wide in 851) [ClassicSimilarity], result of:
          0.041977983 = score(doc=851,freq=4.0), product of:
            0.15158753 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03421255 = queryNorm
            0.2769224 = fieldWeight in 851, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=851)
        0.041977983 = weight(_text_:wide in 851) [ClassicSimilarity], result of:
          0.041977983 = score(doc=851,freq=4.0), product of:
            0.15158753 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03421255 = queryNorm
            0.2769224 = fieldWeight in 851, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=851)
      0.14285715 = coord(2/14)
    
    Abstract
    Literature review articles are essential to summarize the related work in the selected field. However, covering all related studies takes too much time and effort. This study questions how Artificial Intelligence can be used in this process. We used ChatGPT to create a literature review article to show the stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of the last three years (2020, 2021 and 2022) papers were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the Ithenticate tool. This article is the first attempt to show the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by the ChatGPT.
    1. Introduction
    OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. OpenAI ChatGPT is trained on a large dataset of human conversations and can be used to create responses to a wide range of topics and prompts. The chatbot can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. OpenAI ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text, allowing it to engage in conversation with users naturally and intuitively. OpenAI ChatGPT is trained on a large dataset of human conversations, allowing it to understand and respond to a wide range of topics and contexts. It can be used in various applications, such as chatbots, customer service agents, and language translation systems. OpenAI ChatGPT is a state-of-the-art language model able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may need help to change academic writing practices. However, it can provide information and guidance on ways to improve people's academic writing skills.
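The paraphrasing step described in the abstract can be scripted against the OpenAI API rather than the ChatGPT web interface the authors used. A minimal sketch, assuming the `openai` Python client (v1+), an `OPENAI_API_KEY` in the environment, and a model id available to the account.

```python
# Minimal sketch of the paraphrasing step described above, using the OpenAI
# Python client instead of the ChatGPT web interface the authors used.
# Assumptions: openai>=1.0 is installed, OPENAI_API_KEY is set, and the
# model id below is available to the account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def paraphrase_abstract(abstract: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Paraphrase the following paper abstract for a literature review."},
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(paraphrase_abstract("Digital twins are increasingly used in healthcare ..."))
```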
  5. Lange, C.; Ion, P.; Dimou, A.; Bratsas, C.; Sperber, W.; Kohlhase, M.; Antoniou, I.: Getting mathematics towards the Web of Data : the case of the Mathematics Subject Classification (2012) 0.01
    0.009961462 = product of:
      0.06973023 = sum of:
        0.034865115 = weight(_text_:web in 111) [ClassicSimilarity], result of:
          0.034865115 = score(doc=111,freq=6.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.3122631 = fieldWeight in 111, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=111)
        0.034865115 = weight(_text_:web in 111) [ClassicSimilarity], result of:
          0.034865115 = score(doc=111,freq=6.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.3122631 = fieldWeight in 111, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=111)
      0.14285715 = coord(2/14)
    
    Abstract
    The Mathematics Subject Classification (MSC), maintained by the American Mathematical Society's Mathematical Reviews (MR) and FIZ Karlsruhe's Zentralblatt für Mathematik (Zbl), is a scheme for classifying publications in mathematics according to their subjects. While it is widely used, its traditional, idiosyncratic conceptualization and representation require custom implementations of search, query and annotation support. This did not encourage people to create and explore connections of mathematics to subjects of related domains (e.g. science), and it made the scheme hard to maintain. We have reimplemented the current version of MSC2010 as a Linked Open Dataset using SKOS, and our focus is on turning it into the new MSC authority. This paper explains the motivation, the details of our design considerations, and how we realized them in the implementation. We present in-the-field use cases and point out how e-science applications can take advantage of the MSC LOD set. We conclude with a roadmap for bootstrapping the presence of mathematical and mathematics-based science, technology, and engineering knowledge on the Web of Data, where it has been noticeably underrepresented so far, starting from MSC/SKOS as a seed.
    Footnote
    See also the published version under the title: Bringing mathematics towards the Web of Data: the case of the Mathematics Subject Classification
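As an illustration of the SKOS reimplementation described in the abstract, here is a minimal rdflib sketch that expresses one MSC class as a skos:Concept; the namespace, notation, and labels are stand-ins, not the published MSC/SKOS dataset.

```python
# Minimal sketch: expressing one MSC class as a skos:Concept with rdflib,
# in the spirit of the SKOS reimplementation described above. The namespace,
# notation and labels below are illustrative, not the published MSC/SKOS data.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

MSC = Namespace("http://example.org/msc/2010/")

g = Graph()
g.bind("skos", SKOS)
g.bind("msc", MSC)

concept = MSC["68T50"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.notation, Literal("68T50")))
g.add((concept, SKOS.prefLabel, Literal("Natural language processing", lang="en")))
g.add((concept, SKOS.broader, MSC["68Txx"]))   # link to the parent class

print(g.serialize(format="turtle"))
```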
  6. Lehmann, F.: Semiosis complicates high-level ontology (2000) 0.01
    0.009573397 = product of:
      0.06701378 = sum of:
        0.03350689 = weight(_text_:world in 5087) [ClassicSimilarity], result of:
          0.03350689 = score(doc=5087,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.25480178 = fieldWeight in 5087, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=5087)
        0.03350689 = weight(_text_:world in 5087) [ClassicSimilarity], result of:
          0.03350689 = score(doc=5087,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.25480178 = fieldWeight in 5087, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=5087)
      0.14285715 = coord(2/14)
    
    Abstract
    For automated question-answering, natural-language understanding, semantic integration of different databases/standards/thesauri/etc., you need a big complicated ontology of concepts and a logical language to combine them. Cyc (www.cyc.com) is such a system. It's good for your upper ontology to be systematic and clear. One way is to have a small number of well-defined distinctions at the top, by which all more specific concepts are partitioned. This is a system of "factors", or "facets" in Ranganathan's sense (Iyer 1995), much like Aristotle's "differentia" in his "categories", as promoted in John Sowa's "ontological crystal". Practical considerations have driven Cyc's builders to mess up the neatness of such upper divisions. In particular, the simplicity of some very high "factors" is confounded, for practical use, by the occurrence in our world of semiosis and representation. This talk will report on some of our experiences.
  7. Zhai, X.: ChatGPT user experience : implications for education (2022) 0.01
    0.007977831 = product of:
      0.055844814 = sum of:
        0.027922407 = weight(_text_:world in 849) [ClassicSimilarity], result of:
          0.027922407 = score(doc=849,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=849)
        0.027922407 = weight(_text_:world in 849) [ClassicSimilarity], result of:
          0.027922407 = score(doc=849,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=849)
      0.14285715 = coord(2/14)
    
    Abstract
    ChatGPT, a general-purpose conversation chatbot released on November 30, 2022, by OpenAI, is expected to impact every aspect of society. However, the potential impacts of this NLP tool on education remain unknown. Such impact can be enormous as the capacity of ChatGPT may drive changes to educational learning goals, learning activities, and assessment and evaluation practices. This study was conducted by piloting ChatGPT to write an academic paper, titled Artificial Intelligence for Education (see Appendix A). The piloting result suggests that ChatGPT is able to help researchers write a paper that is coherent, (partially) accurate, informative, and systematic. The writing is extremely efficient (2-3 hours) and involves very limited professional knowledge from the author. Drawing upon the user experience, I reflect on the potential impacts of ChatGPT, as well as similar AI tools, on education. The paper concludes by suggesting adjusting learning goals: students should be able to use AI tools to conduct subject-domain tasks, and education should focus on improving students' creativity and critical thinking rather than general skills. To accomplish the learning goals, researchers should design AI-involved learning tasks to engage students in solving real-world problems. ChatGPT also raises concerns that students may outsource assessment tasks. This paper concludes that new formats of assessments are needed to focus on creativity and critical thinking that AI cannot substitute.
  8. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020) 0.01
    0.006901503 = product of:
      0.04831052 = sum of:
        0.02415526 = weight(_text_:web in 39) [ClassicSimilarity], result of:
          0.02415526 = score(doc=39,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.21634221 = fieldWeight in 39, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=39)
        0.02415526 = weight(_text_:web in 39) [ClassicSimilarity], result of:
          0.02415526 = score(doc=39,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.21634221 = fieldWeight in 39, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=39)
      0.14285715 = coord(2/14)
    
    Abstract
    Digital cultural heritage resources are widely available on the web through the digital libraries of heritage institutions. To address the difficulties of discoverability in cultural heritage, the common practice is metadata aggregation, where centralized efforts like Europeana facilitate discoverability by collecting the resources' metadata. We present the results of the linked data aggregation task conducted within the Europeana Common Culture project, which attempted an innovative approach to aggregation based on linked data made available by cultural heritage institutions. This task ran for one year with participation of eleven organizations, involving the three member roles of the Europeana network: data providers, intermediary aggregators, and the central aggregation hub, Europeana. We report on the challenges that were faced by data providers, the standards and specifications applied, and the resulting aggregated metadata.
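Conceptually, linked-data aggregation of the kind reported above amounts to fetching each provider's RDF metadata and merging it into a central graph. A minimal rdflib sketch with invented records, not the Europeana Common Culture pipeline.

```python
# Minimal sketch of linked-data metadata aggregation in the spirit of the
# approach described above: each provider exposes RDF metadata, and the
# aggregator merges it into one graph. The records below are made up.
from rdflib import Graph

PROVIDER_RECORDS = [
    """@prefix dc: <http://purl.org/dc/elements/1.1/> .
       <https://example.org/a/object/1> dc:title "Painting of a harbour" .""",
    """@prefix dc: <http://purl.org/dc/elements/1.1/> .
       <https://example.org/b/object/7> dc:title "Medieval manuscript leaf" .""",
]

aggregate = Graph()
for record in PROVIDER_RECORDS:
    # Parse each provider's metadata and merge its triples into the central graph.
    aggregate += Graph().parse(data=record, format="turtle")

print(f"Aggregated {len(aggregate)} triples from {len(PROVIDER_RECORDS)} providers")
```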
  9. Bauckhage, C.: Moderne Textanalyse : neues Wissen für intelligente Lösungen (2016) 0.01
    0.0059961556 = product of:
      0.083946176 = sum of:
        0.083946176 = weight(_text_:analyse in 2568) [ClassicSimilarity], result of:
          0.083946176 = score(doc=2568,freq=2.0), product of:
            0.18025847 = queryWeight, product of:
              5.268782 = idf(docFreq=618, maxDocs=44218)
              0.03421255 = queryNorm
            0.46569893 = fieldWeight in 2568, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.268782 = idf(docFreq=618, maxDocs=44218)
              0.0625 = fieldNorm(doc=2568)
      0.071428575 = coord(1/14)
    
    Abstract
    With the ever-growing availability of data (big data) and rapid advances in data-driven machine learning, we have witnessed breakthroughs in artificial intelligence in recent years. This talk examines these developments with particular regard to the automatic analysis of text data. Using simple examples, we illustrate how modern text analysis works and show, again by example, which practical applications arise today in industries such as publishing, finance, and consulting.
  10. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.01
    0.0057512526 = product of:
      0.040258765 = sum of:
        0.020129383 = weight(_text_:web in 5787) [ClassicSimilarity], result of:
          0.020129383 = score(doc=5787,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.18028519 = fieldWeight in 5787, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5787)
        0.020129383 = weight(_text_:web in 5787) [ClassicSimilarity], result of:
          0.020129383 = score(doc=5787,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.18028519 = fieldWeight in 5787, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5787)
      0.14285715 = coord(2/14)
    
    Abstract
    This study considers the expressiveness (that is the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also implemented. Applying a comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC). These are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept, and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations, and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, while thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts, rather than the concepts themselves.
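The contrast drawn above between string-oriented subject headings and concept-oriented KOS can be made concrete in a few lines. An illustrative sketch with invented identifiers and labels, not data taken from the systems analysed in the study.

```python
# Illustrative contrast between a string-based subject heading and a
# concept-based representation with explicit relations, echoing the analysis
# above. All identifiers and labels are invented for the example.

# A pre-coordinated heading is a single opaque string: the parts and their
# relationship are only implicit in the punctuation convention.
heading = "Mathematics--Study and teaching--Greece"

# A concept-based KOS makes each concept a node and each relation explicit,
# so "what is broader than X?" is a lookup rather than string parsing.
concepts = {
    "math_education": {
        "prefLabel": "Mathematics education",
        "broader": ["education"],
        "related": ["mathematics"],
    },
    "education": {"prefLabel": "Education", "broader": [], "related": []},
    "mathematics": {"prefLabel": "Mathematics", "broader": [], "related": []},
}

def broader_of(concept_id: str) -> list:
    return [concepts[b]["prefLabel"] for b in concepts[concept_id]["broader"]]

print(heading.split("--"))           # best effort: parse the string convention
print(broader_of("math_education"))  # explicit semantics: ['Education']
```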
  11. Großjohann, K.: Gathering-, Harvesting-, Suchmaschinen (1996) 0.00
    0.0028094335 = product of:
      0.039332066 = sum of:
        0.039332066 = product of:
          0.07866413 = sum of:
            0.07866413 = weight(_text_:22 in 3227) [ClassicSimilarity], result of:
              0.07866413 = score(doc=3227,freq=4.0), product of:
                0.11980651 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03421255 = queryNorm
                0.6565931 = fieldWeight in 3227, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3227)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    7. 2.1996 22:38:41
    Pages
    22 S
  12. Wätjen, H.-J.: Mensch oder Maschine? : Auswahl und Erschließung von Informationsressourcen im Internet (1996) 0.00
    0.0016554744 = product of:
      0.02317664 = sum of:
        0.02317664 = product of:
          0.04635328 = sum of:
            0.04635328 = weight(_text_:22 in 3161) [ClassicSimilarity], result of:
              0.04635328 = score(doc=3161,freq=2.0), product of:
                0.11980651 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03421255 = queryNorm
                0.38690117 = fieldWeight in 3161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3161)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    2. 2.1996 15:40:22
  13. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.00
    8.277372E-4 = product of:
      0.01158832 = sum of:
        0.01158832 = product of:
          0.02317664 = sum of:
            0.02317664 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
              0.02317664 = score(doc=1171,freq=2.0), product of:
                0.11980651 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03421255 = queryNorm
                0.19345059 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    23.11.2023 19:07:22