Search (43 results, page 1 of 3)

  • type_ss:"p"
  1. Wätjen, H.-J.: Mensch oder Maschine? : Auswahl und Erschließung von Informationsressourcen im Internet (1996) 0.03
    0.02850543 = product of:
      0.05701086 = sum of:
        0.05701086 = product of:
          0.08551629 = sum of:
            0.03861543 = weight(_text_:j in 3161) [ClassicSimilarity], result of:
              0.03861543 = score(doc=3161,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.35106707 = fieldWeight in 3161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3161)
            0.046900857 = weight(_text_:22 in 3161) [ClassicSimilarity], result of:
              0.046900857 = score(doc=3161,freq=2.0), product of:
                0.1212218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034616705 = queryNorm
                0.38690117 = fieldWeight in 3161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3161)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    2. 2.1996 15:40:22
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.03
    0.025837123 = sum of:
      0.023563074 = product of:
        0.1649415 = sum of:
          0.1649415 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
            0.1649415 = score(doc=862,freq=2.0), product of:
              0.2934808 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.034616705 = queryNorm
              0.56201804 = fieldWeight in 862, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.14285715 = coord(1/7)
      0.0022740487 = product of:
        0.006822146 = sum of:
          0.006822146 = weight(_text_:a in 862) [ClassicSimilarity], result of:
            0.006822146 = score(doc=862,freq=10.0), product of:
              0.039914686 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.034616705 = queryNorm
              0.1709182 = fieldWeight in 862, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.33333334 = coord(1/3)
    
    Abstract
     This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summary and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
     https://arxiv.org/abs/2212.06721
    Type
    a
  3. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.02
    0.023921534 = product of:
      0.04784307 = sum of:
        0.04784307 = sum of:
          0.019307716 = weight(_text_:j in 1171) [ClassicSimilarity], result of:
            0.019307716 = score(doc=1171,freq=2.0), product of:
              0.109994456 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.034616705 = queryNorm
              0.17553353 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1171)
          0.0050849267 = weight(_text_:a in 1171) [ClassicSimilarity], result of:
            0.0050849267 = score(doc=1171,freq=8.0), product of:
              0.039914686 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.034616705 = queryNorm
              0.12739488 = fieldWeight in 1171, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1171)
          0.023450429 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
            0.023450429 = score(doc=1171,freq=2.0), product of:
              0.1212218 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.034616705 = queryNorm
              0.19345059 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1171)
      0.5 = coord(1/2)
    
    Abstract
     Logical rules are essential for uncovering the logical connections between relations, which can improve reasoning performance and provide interpretable results on knowledge graphs (KGs). Although there have been many efforts to mine meaningful logical rules over KGs, existing methods suffer from computationally intensive searches over the rule space and a lack of scalability to large-scale KGs. In addition, they often ignore the semantics of relations, which is crucial for uncovering logical connections. Recently, large language models (LLMs) have shown impressive performance in natural language processing and various applications, owing to their emergent abilities and generalizability. In this paper, we propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs. Specifically, the framework is initiated with an LLM-based rule generator that leverages both the semantic and structural information of KGs to prompt LLMs to generate logical rules. To refine the generated rules, a rule ranking module estimates rule quality by incorporating facts from existing KGs. Finally, a rule validator harnesses the reasoning ability of LLMs to validate the logical correctness of the ranked rules through chain-of-thought reasoning. ChatRule is evaluated on four large-scale KGs with respect to different rule quality metrics and downstream tasks, showing the effectiveness and scalability of our method.
    Date
    23.11.2023 19:07:22
  4. Oberhauser, O.; Labner, J.: Einführung der automatischen Indexierung im Österreichischen Verbundkatalog? : Bericht über eine empirische Studie (2003) 0.02
    0.020393502 = product of:
      0.040787004 = sum of:
        0.040787004 = product of:
          0.061180506 = sum of:
            0.054061607 = weight(_text_:j in 1878) [ClassicSimilarity], result of:
              0.054061607 = score(doc=1878,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.4914939 = fieldWeight in 1878, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1878)
            0.007118898 = weight(_text_:a in 1878) [ClassicSimilarity], result of:
              0.007118898 = score(doc=1878,freq=2.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.17835285 = fieldWeight in 1878, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1878)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Location
    A
  5. Großjohann, K.: Gathering-, Harvesting-, Suchmaschinen (1996) 0.01
    0.013265567 = product of:
      0.026531134 = sum of:
        0.026531134 = product of:
          0.0795934 = sum of:
            0.0795934 = weight(_text_:22 in 3227) [ClassicSimilarity], result of:
              0.0795934 = score(doc=3227,freq=4.0), product of:
                0.1212218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034616705 = queryNorm
                0.6565931 = fieldWeight in 3227, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3227)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    7. 2.1996 22:38:41
    Pages
    22 S
  6. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.01
    0.009997136 = product of:
      0.019994272 = sum of:
        0.019994272 = product of:
          0.029991407 = sum of:
            0.02316926 = weight(_text_:j in 5365) [ClassicSimilarity], result of:
              0.02316926 = score(doc=5365,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.21064025 = fieldWeight in 5365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5365)
            0.006822146 = weight(_text_:a in 5365) [ClassicSimilarity], result of:
              0.006822146 = score(doc=5365,freq=10.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.1709182 = fieldWeight in 5365, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5365)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
     Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analysing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular for the refinement and improvement of search expressions and the construction of textual corpora. However, the set of available works shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  7. Breuer, T.; Tavakolpoursaleh, N.; Schaer, P.; Hienert, D.; Schaible, J.; Castro, L.J.: Online Information Retrieval Evaluation using the STELLA Framework (2022) 0.01
    0.009757059 = product of:
      0.019514117 = sum of:
        0.019514117 = product of:
          0.029271174 = sum of:
            0.02316926 = weight(_text_:j in 640) [ClassicSimilarity], result of:
              0.02316926 = score(doc=640,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.21064025 = fieldWeight in 640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=640)
            0.006101913 = weight(_text_:a in 640) [ClassicSimilarity], result of:
              0.006101913 = score(doc=640,freq=8.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.15287387 = fieldWeight in 640, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=640)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
     Involving users in early phases of software development has become a common strategy, as it enables developers to consider user needs from the beginning. Once a system is in production, new opportunities to observe, evaluate and learn from users emerge as more information becomes available. Gathering information from users to continuously evaluate their behavior is a common practice for commercial software, while the Cranfield paradigm remains the preferred option for Information Retrieval (IR) and recommendation systems in the academic world. Here we introduce the Infrastructures for Living Labs STELLA project, which aims to create an evaluation infrastructure that allows experimental systems to run alongside production web-based academic search systems with real users. STELLA combines user interactions and log file analyses to enable large-scale A/B experiments for academic search.
  8. Grötschel, M.; Lügger, J.; Sperber, W.: Wissenschaftliches Publizieren und elektronische Fachinformation im Umbruch : ein Situationsbericht aus der Sicht der Mathematik (1993) 0.01
    0.009010268 = product of:
      0.018020537 = sum of:
        0.018020537 = product of:
          0.054061607 = sum of:
            0.054061607 = weight(_text_:j in 1946) [ClassicSimilarity], result of:
              0.054061607 = score(doc=1946,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.4914939 = fieldWeight in 1946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1946)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  9. Goldberg, J.: Classification of religion in LCC (2000) 0.01
    0.009010268 = product of:
      0.018020537 = sum of:
        0.018020537 = product of:
          0.054061607 = sum of:
            0.054061607 = weight(_text_:j in 5402) [ClassicSimilarity], result of:
              0.054061607 = score(doc=5402,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.4914939 = fieldWeight in 5402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5402)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  10. Beall, J.: French Dewey : Potential influence on development of the DDC (1998) 0.01
    0.007723087 = product of:
      0.015446174 = sum of:
        0.015446174 = product of:
          0.04633852 = sum of:
            0.04633852 = weight(_text_:j in 3482) [ClassicSimilarity], result of:
              0.04633852 = score(doc=3482,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.4212805 = fieldWeight in 3482, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3482)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  11. McIlwaine, J.: Bibliographical control : self-instruction from individualised investigations (2000) 0.01
    0.007723087 = product of:
      0.015446174 = sum of:
        0.015446174 = product of:
          0.04633852 = sum of:
            0.04633852 = weight(_text_:j in 5410) [ClassicSimilarity], result of:
              0.04633852 = score(doc=5410,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.4212805 = fieldWeight in 5410, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5410)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  12. Dietze, J.: Sachkatalogisierung in einem OPAC (1993) 0.00
    0.004505134 = product of:
      0.009010268 = sum of:
        0.009010268 = product of:
          0.027030803 = sum of:
            0.027030803 = weight(_text_:j in 7388) [ClassicSimilarity], result of:
              0.027030803 = score(doc=7388,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.24574696 = fieldWeight in 7388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  13. Schöneberg, U.; Gödert, W.: Erschließung mathematischer Publikationen mittels linguistischer Verfahren (2012) 0.00
    0.0038615435 = product of:
      0.007723087 = sum of:
        0.007723087 = product of:
          0.02316926 = sum of:
            0.02316926 = weight(_text_:j in 1055) [ClassicSimilarity], result of:
              0.02316926 = score(doc=1055,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.21064025 = fieldWeight in 1055, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1055)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    http://at.yorku.ca/c/b/f/j/99.htm
  14. Stephan, W.: Guidelines for subject authority and reference entries (GSARE) : a first step to a worldwide accepted standard (1992) 0.00
    0.0016779405 = product of:
      0.003355881 = sum of:
        0.003355881 = product of:
          0.010067643 = sum of:
            0.010067643 = weight(_text_:a in 2609) [ClassicSimilarity], result of:
              0.010067643 = score(doc=2609,freq=4.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.25222903 = fieldWeight in 2609, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2609)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  15. Slavic, A.: Interface to classification : some objectives and options (2006) 0.00
    0.0016079952 = product of:
      0.0032159905 = sum of:
        0.0032159905 = product of:
          0.009647971 = sum of:
            0.009647971 = weight(_text_:a in 2131) [ClassicSimilarity], result of:
              0.009647971 = score(doc=2131,freq=20.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.24171482 = fieldWeight in 2131, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2131)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    This is a preprint to be published in the Extensions & Corrections to the UDC. The paper explains the basic functions of browsing and searching that need to be supported in relation to analytico-synthetic classifications such as Universal Decimal Classification (UDC), irrespective of any specific, real-life implementation. UDC is an example of a semi-faceted system that can be used, for instance, for both post-coordinate searching and hierarchical/facet browsing. The advantages of using a classification for IR, however, depend on the strength of the GUI, which should provide a user-friendly interface to classification browsing and searching. The power of this interface is in supporting visualisation that will 'convert' what is potentially a user-unfriendly indexing language based on symbols, to a subject presentation that is easy to understand, search and navigate. A summary of the basic functions of searching and browsing a classification that may be provided on a user-friendly interface is given and examples of classification browsing interfaces are provided.
  16. Jaenecke, P.: Knowledge organization due to theory formation (1995) 0.00
    0.0015160325 = product of:
      0.003032065 = sum of:
        0.003032065 = product of:
          0.009096195 = sum of:
            0.009096195 = weight(_text_:a in 3751) [ClassicSimilarity], result of:
              0.009096195 = score(doc=3751,freq=10.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.22789092 = fieldWeight in 3751, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3751)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     Theory formation is regarded as a process of domain-internal knowledge organization. Misunderstandings about the concept 'theory' are explained. A theory is considered a systematic representation of a domain, realized by three closely related theory-forming actions: establishing a suitable system of basic concepts, ordering the experience or given experimental results, and synthesizing conflicting hypotheses. In this view, theory formation is an ambitious kind of knowledge representation. Its consequences are summarized and its importance for the human sciences and for society is emphasized.
  17. Lund, B.D.: A chat with ChatGPT : how will AI impact scholarly publishing? (2022) 0.00
    0.0015160325 = product of:
      0.003032065 = sum of:
        0.003032065 = product of:
          0.009096195 = sum of:
            0.009096195 = weight(_text_:a in 850) [ClassicSimilarity], result of:
              0.009096195 = score(doc=850,freq=10.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.22789092 = fieldWeight in 850, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=850)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     This is a short project that serves as an inspiration for a forthcoming paper, which will explore the technical side of ChatGPT and the ethical issues it presents for academic researchers, and which will result in a peer-reviewed publication. It demonstrates that the capacities of ChatGPT as a "chatbot" are far more advanced than many alternatives available today, and that it may even be usable to draft entire academic manuscripts for researchers. ChatGPT is available via https://chat.openai.com/chat.
  18. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.00
    0.0014678922 = product of:
      0.0029357844 = sum of:
        0.0029357844 = product of:
          0.008807353 = sum of:
            0.008807353 = weight(_text_:a in 5787) [ClassicSimilarity], result of:
              0.008807353 = score(doc=5787,freq=24.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.22065444 = fieldWeight in 5787, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5787)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    This study considers the expressiveness (that is the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also implemented. Applying a comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC). These are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept, and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations, and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, while thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts, rather than the concepts themselves.
  19. Yitzhaki, M.: A draft version of a consolidated thesaurus for the rapidly growing field of alternative medicine (2000) 0.00
    0.0014382347 = product of:
      0.0028764694 = sum of:
        0.0028764694 = product of:
          0.008629408 = sum of:
            0.008629408 = weight(_text_:a in 5417) [ClassicSimilarity], result of:
              0.008629408 = score(doc=5417,freq=4.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.2161963 = fieldWeight in 5417, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5417)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  20. Hausser, R.: Language and nonlanguage cognition (2021) 0.00
    0.0014382347 = product of:
      0.0028764694 = sum of:
        0.0028764694 = product of:
          0.008629408 = sum of:
            0.008629408 = weight(_text_:a in 255) [ClassicSimilarity], result of:
              0.008629408 = score(doc=255,freq=16.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.2161963 = fieldWeight in 255, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=255)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language-data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage-data as input. In either case, the output is a content which is stored in the agent's onboard short term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of place holder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.
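
The score breakdowns shown above follow Lucene's ClassicSimilarity (tf-idf) explain format. As a worked illustration, the following minimal Python sketch reproduces the arithmetic of result 1 (doc 3161) from the figures printed in its tree. It assumes ClassicSimilarity's standard formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))) and simply reuses the queryNorm, fieldNorm, and coord factors reported there; the function names are illustrative and not part of any system shown on this page.

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq: float, doc_freq: int, max_docs: int,
                query_norm: float, field_norm: float) -> float:
    """Per-term score: queryWeight * fieldWeight, as in the explain trees."""
    tf = math.sqrt(freq)                                # tf(freq) = sqrt(freq)
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

MAX_DOCS = 44218            # maxDocs reported in every idf() line above
QUERY_NORM = 0.034616705    # queryNorm reported in every breakdown above
FIELD_NORM = 0.078125       # fieldNorm(doc=3161)

w_j  = term_weight(2.0, 5010, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.03861 for _text_:j
w_22 = term_weight(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.04690 for _text_:22

# Coordination factors from the tree: coord(2/3), then coord(1/2).
score = (w_j + w_22) * (2.0 / 3.0) * 0.5
print(f"{score:.8f}")       # ~0.02850543, the document score reported for result 1
```

Running the sketch yields approximately 0.02850543, the score reported for result 1; the same pattern of per-term weights multiplied by coordination factors accounts for every other breakdown in the list.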

Languages

  • e 36
  • d 7
