Search (46 results, page 1 of 3)

  • Active filter: type_ss:"p"
  1. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.03969669 = product of:
      0.07939338 = sum of:
        0.07624002 = product of:
          0.22872004 = sum of:
            0.22872004 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.22872004 = score(doc=862,freq=2.0), product of:
                0.4069621 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04800207 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.0031533632 = product of:
          0.00946009 = sum of:
            0.00946009 = weight(_text_:a in 862) [ClassicSimilarity], result of:
              0.00946009 = score(doc=862,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.1709182 = fieldWeight in 862, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
    Type
    a
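  The explanation trees in these results are Lucene's ClassicSimilarity (TF-IDF) "explain" output. As a minimal sketch of where the leaf numbers come from, the following recomputes the weight(_text_:3a in 862) branch of result 1 under the standard ClassicSimilarity definitions; the queryNorm (0.04800207) is taken as given, since it depends on the query as a whole, and small deviations in the last digits are float rounding:

      import math

      # ClassicSimilarity building blocks:
      #   tf(freq)    = sqrt(freq)
      #   idf(df)     = 1 + ln(maxDocs / (df + 1))
      #   queryWeight = idf * queryNorm
      #   fieldWeight = tf * idf * fieldNorm
      #   leaf score  = queryWeight * fieldWeight
      def tf(freq): return math.sqrt(freq)
      def idf(doc_freq, max_docs): return 1.0 + math.log(max_docs / (doc_freq + 1))

      query_norm = 0.04800207            # reported by the engine
      field_norm = 0.046875              # index-time length norm for doc 862
      i  = idf(24, 44218)                # ~8.478   (tree: 8.478011)
      qw = i * query_norm                # ~0.40696 (tree: 0.4069621)
      fw = tf(2.0) * i * field_norm      # ~0.56202 (tree: 0.56201804)
      print(qw * fw)                     # ~0.22872 (tree: 0.22872004)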
  2. Wätjen, H.-J.: Mensch oder Maschine? : Auswahl und Erschließung von Informationsressourcen im Internet (1996) 0.03
    0.029714495 = product of:
      0.05942899 = sum of:
        0.03775026 = weight(_text_:von in 3161) [ClassicSimilarity], result of:
          0.03775026 = score(doc=3161,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.29476947 = fieldWeight in 3161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.078125 = fieldNorm(doc=3161)
        0.021678729 = product of:
          0.065036185 = sum of:
            0.065036185 = weight(_text_:22 in 3161) [ClassicSimilarity], result of:
              0.065036185 = score(doc=3161,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.38690117 = fieldWeight in 3161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3161)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    A description of the various tools for subject searching of resources on the Internet
    Date
    2. 2.1996 15:40:22
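  Reading a tree bottom-up shows how leaf scores combine into the document score: each "sum of:" adds its children, each coord(m/n) multiplies by matched clauses over total clauses, and "product of:" multiplies the factors. A sketch for result 2 above, taking the two leaf scores as given:

      # Result 2: the "22" clause matched 1 of 3 sub-clauses -> coord(1/3);
      # the query as a whole matched 2 of 4 clauses -> coord(2/4) = 0.5.
      score_von = 0.03775026                 # weight(_text_:von in 3161)
      score_22  = 0.065036185 * (1.0 / 3.0)  # weight(_text_:22 ...) * coord(1/3)
      print((score_von + score_22) * 0.5)    # -> 0.029714495, the total shown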
  3. Wachter, C.; Wille, R.: Formale Begriffsanalyse von Literaturdaten (1992) 0.02
    0.015100104 = product of:
      0.060400415 = sum of:
        0.060400415 = weight(_text_:von in 3141) [ClassicSimilarity], result of:
          0.060400415 = score(doc=3141,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.47163114 = fieldWeight in 3141, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.125 = fieldNorm(doc=3141)
      0.25 = coord(1/4)
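  The idf values recur unchanged across entries because they depend only on collection statistics (maxDocs=44218), not on the matched document. A quick check of the four document frequencies appearing on this page, under the idf definition assumed above:

      import math

      idf = lambda df, n=44218: 1.0 + math.log(n / (df + 1))
      for term, df in [("3a", 24), ("von", 8340), ("22", 3622), ("a", 37942)]:
          print(term, idf(df))
      # 3a  ~8.47802  (trees: 8.478011)
      # von ~2.66795  (trees: 2.6679487)
      # 22  ~3.50183  (trees: 3.5018296)
      # a   ~1.15305  (trees: 1.153047)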
    
  4. Smith, R.: Nationalbibliographien auf CD-ROM : Entwicklung eines gemeinsamen Ansatzes (1993) 0.01
    0.013077075 = product of:
      0.0523083 = sum of:
        0.0523083 = weight(_text_:von in 6231) [ClassicSimilarity], result of:
          0.0523083 = score(doc=6231,freq=6.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.40844458 = fieldWeight in 6231, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0625 = fieldNorm(doc=6231)
      0.25 = coord(1/4)
    
    Abstract
    This paper describes how an EC-funded project involving seven national libraries fostered the development of common approaches to CD-ROM publications. The project as a whole is described and its main results are highlighted, including the formulation of a common retrieval interface and the development of a UNIMARC pilot project on CD-ROM involving four national bibliographies. The paper goes on to describe in detail the main aspects of the retrieval interface and the methodology for producing a CD-ROM from four different sets of bibliographic data, each with its own format and character set
  5. Bauckhage, C.: Moderne Textanalyse : neues Wissen für intelligente Lösungen (2016) 0.01
    0.013077075 = product of:
      0.0523083 = sum of:
        0.0523083 = weight(_text_:von in 2568) [ClassicSimilarity], result of:
          0.0523083 = score(doc=2568,freq=6.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.40844458 = fieldWeight in 2568, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0625 = fieldNorm(doc=2568)
      0.25 = coord(1/4)
    
    Abstract
    With the ever-growing availability of data (big data) and rapid progress in data-driven machine learning, recent years have brought breakthroughs in artificial intelligence. This talk examines these developments with particular attention to the automatic analysis of text data. Using simple examples, we illustrate how modern text analysis works and show, again by example, which practical applications arise today in sectors such as publishing, the financial industry, and consulting.
  6. Wille, R.: Denken in Begriffen : von der griechischen Philosophie bis zur Künstlichen Intelligenz heute (1993) 0.01
    0.01144244 = product of:
      0.04576976 = sum of:
        0.04576976 = weight(_text_:von in 3145) [ClassicSimilarity], result of:
          0.04576976 = score(doc=3145,freq=6.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.357389 = fieldWeight in 3145, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3145)
      0.25 = coord(1/4)
    
    Abstract
    Mechanistic thinking and its mechanical implementation (particularly in complex computer systems) increasingly endanger human cognitive autonomy today. This thinking finds its clearest expression in the goals of artificial intelligence, which rest on the metaphor of the artificial human. To expose the limits of mechanistic thinking, the history of the concept is traced through its most important stages from Greek antiquity to the present. This makes visible, in particular, the loss of content that restrictive formalizations of conceptual thinking entail. The paper argues for reactivating the close connection between content and form in conceptual thinking; to this end, the mechanistic world view is contrasted with the world view of the human communication community, for which communicative thinking and acting are constitutive
  7. Dietze, J.: Sachkatalogisierung in einem OPAC (1993) 0.01
    0.01144244 = product of:
      0.04576976 = sum of:
        0.04576976 = weight(_text_:von in 7388) [ClassicSimilarity], result of:
          0.04576976 = score(doc=7388,freq=6.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.357389 = fieldWeight in 7388, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7388)
      0.25 = coord(1/4)
    
    Abstract
    Cataloguing by computer always also means building an OPAC that supports searching by different criteria, formal and subject-based, and by combinations of them. Free-text searching is popular with users even though it does not exhaust the recall ratio (completeness). The use of subject headings should presuppose their standardization in an authority file, in order to eliminate the subjectivity of individual cataloguers. Subject-heading strings are in principle superfluous in an OPAC. If a hierarchical, i.e. systematic, classification is used, its notation should be system-coherent, flexible, and synthetic (facets or keys). Indexes should link subject headings and classification notations in both directions. Where subject cataloguing is done in a network, terminology control and a unified classification as coarse systems are important desiderata
  8. Sander, C.; Schmiede, R.; Wille, R.: ¬Ein begriffliches Datensystem zur Literatur der interdisziplinären Technikforschung (1993) 0.01
    0.01144244 = product of:
      0.04576976 = sum of:
        0.04576976 = weight(_text_:von in 5255) [ClassicSimilarity], result of:
          0.04576976 = score(doc=5255,freq=6.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.357389 = fieldWeight in 5255, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5255)
      0.25 = coord(1/4)
    
    Abstract
    Conceptual data systems emerged within Formal Concept Analysis and are founded on mathematical formalizations of concept, concept system, and conceptual file. They make the knowledge held in a database conceptually accessible and interpretable. To this end, conceptual relationships are displayed in nested line diagrams according to chosen query aspects. By refining, coarsening, and switching between concept structures, one can "navigate" without limit through the knowledge stored in the database. In a research project funded by the Zentrum für interdisziplinäre Technikforschung at the TH Darmstadt, a prototype of a conceptual data system has been built whose data context is a selected, conceptually prepared collection of books on interdisciplinary technology research. The prototype is intended to demonstrate the flexible and versatile use of conceptual data systems in the literature domain
  9. Hobohm, H.-C.: Zensur in der Digitalität - eine Überwindung der Moderne? : Die Rolle der Bibliotheken (2020) 0.01
    0.011325077 = product of:
      0.04530031 = sum of:
        0.04530031 = weight(_text_:von in 5371) [ClassicSimilarity], result of:
          0.04530031 = score(doc=5371,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.35372335 = fieldWeight in 5371, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.09375 = fieldNorm(doc=5371)
      0.25 = coord(1/4)
    
    Content
    Contribution to the conference "Nationalsozialismus Digital. Die Verantwortung von Bibliotheken, Archiven und Museen sowie Forschungseinrichtungen und Medien im Umgang mit der NS-Zeit im Netz", Österreichische Nationalbibliothek, Universität Wien, 27-29 November 2019
  10. Schöneberg, U.; Gödert, W.: Erschließung mathematischer Publikationen mittels linguistischer Verfahren (2012) 0.01
    0.0098078055 = product of:
      0.039231222 = sum of:
        0.039231222 = weight(_text_:von in 1055) [ClassicSimilarity], result of:
          0.039231222 = score(doc=1055,freq=6.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.30633342 = fieldWeight in 1055, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.046875 = fieldNorm(doc=1055)
      0.25 = coord(1/4)
    
    Abstract
    The number of mathematics-related publications grows year by year. Abstracting services such as Zentralblatt MATH and Mathematical Reviews record their bibliographic data, index the works by subject, and make them searchable for users, today via databases, formerly in printed form. Keywords are an essential component of this subject indexing, and since they are usually multi-word phrases rather than single words, the application of linguistic methods and procedures suggests itself. The software 'Lingo', developed at FH Köln, was adapted to the specific requirements of mathematical texts and used both to build a controlled vocabulary and to extract keywords from mathematical publications. It is planned to link the controlled vocabulary with the Mathematical Subject Classification in order to develop and test methods of automatic classification for the abstracting service Zentralblatt MATH.
  11. Großjohann, K.: Gathering-, Harvesting-, Suchmaschinen (1996) 0.01
    0.009197506 = product of:
      0.036790024 = sum of:
        0.036790024 = product of:
          0.11037007 = sum of:
            0.11037007 = weight(_text_:22 in 3227) [ClassicSimilarity], result of:
              0.11037007 = score(doc=3227,freq=4.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.6565931 = fieldWeight in 3227, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3227)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    7. 2.1996 22:38:41
    Pages
    22 S
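  The fieldNorm factor (0.09375 here, 0.125 in result 3, 0.078125 in result 2) is Lucene's index-time length normalization: 1/sqrt(number of terms in the field), compressed into a single byte with a 3-bit mantissa, which is why only a handful of distinct values recur throughout the list. A sketch, assuming the standard byte315 encoding, which rounds down to the representable grid:

      import math

      def raw_field_norm(num_terms):
          # ClassicSimilarity length norm before byte quantization
          return 1.0 / math.sqrt(num_terms)

      print(raw_field_norm(64))    # 0.125 -- exactly representable, stored as-is
      print(raw_field_norm(100))   # 0.1   -- stored rounded down to 0.09375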
  12. Kollewe, W.; Sander, C.; Schmiede, R.; Wille, R.: TOSCANA als Instrument der bibliothekarischen Sacherschließung (1995) 0.01
    0.007550052 = product of:
      0.030200208 = sum of:
        0.030200208 = weight(_text_:von in 585) [ClassicSimilarity], result of:
          0.030200208 = score(doc=585,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.23581557 = fieldWeight in 585, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0625 = fieldNorm(doc=585)
      0.25 = coord(1/4)
    
    Abstract
    TOSCANA is a computer program with which conceptual exploration systems can be built on the basis of Formal Concept Analysis. This paper discusses how TOSCANA can be used for library subject indexing and thematic literature searching. It reports on the research project "Anwendung eines Modells begrifflicher Wissenssysteme im Bereich der Literatur zur interdisziplinären Technikforschung", funded by the Darmstadt Zentrum für interdisziplinäre Technikforschung
  13. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.01
    0.006594871 = product of:
      0.026379485 = sum of:
        0.026379485 = product of:
          0.039569225 = sum of:
            0.007051134 = weight(_text_:a in 1171) [ClassicSimilarity], result of:
              0.007051134 = score(doc=1171,freq=8.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.12739488 = fieldWeight in 1171, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
            0.032518093 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
              0.032518093 = score(doc=1171,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.19345059 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Logical rules are essential for uncovering the logical connections between relations, which could improve reasoning performance and provide interpretable results on knowledge graphs (KGs). Although there have been many efforts to mine meaningful logical rules over KGs, existing methods suffer from computationally intensive searches over the rule space and a lack of scalability for large-scale KGs. Moreover, they often ignore the semantics of relations, which is crucial for uncovering logical connections. Recently, large language models (LLMs) have shown impressive performance in the field of natural language processing and various applications, owing to their emergent abilities and generalizability. In this paper, we propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs. Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs to prompt LLMs to generate logical rules. To refine the generated rules, a rule ranking module estimates rule quality by incorporating facts from existing KGs. Finally, a rule validator harnesses the reasoning ability of LLMs to validate the logical correctness of ranked rules through chain-of-thought reasoning. ChatRule is evaluated on four large-scale KGs with respect to different rule quality metrics and downstream tasks, showing the effectiveness and scalability of our method.
    Date
    23.11.2023 19:07:22
  14. Stephan, W.: Guidelines for subject authority and reference entries (GSARE) : a first step to a worldwide accepted standard (1992) 0.00
    0.0011633779 = product of:
      0.0046535116 = sum of:
        0.0046535116 = product of:
          0.013960535 = sum of:
            0.013960535 = weight(_text_:a in 2609) [ClassicSimilarity], result of:
              0.013960535 = score(doc=2609,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.25222903 = fieldWeight in 2609, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2609)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
  15. Slavic, A.: Interface to classification : some objectives and options (2006) 0.00
    0.0011148823 = product of:
      0.004459529 = sum of:
        0.004459529 = product of:
          0.013378588 = sum of:
            0.013378588 = weight(_text_:a in 2131) [ClassicSimilarity], result of:
              0.013378588 = score(doc=2131,freq=20.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.24171482 = fieldWeight in 2131, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2131)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This is a preprint to be published in the Extensions & Corrections to the UDC. The paper explains the basic functions of browsing and searching that need to be supported in relation to analytico-synthetic classifications such as Universal Decimal Classification (UDC), irrespective of any specific, real-life implementation. UDC is an example of a semi-faceted system that can be used, for instance, for both post-coordinate searching and hierarchical/facet browsing. The advantages of using a classification for IR, however, depend on the strength of the GUI, which should provide a user-friendly interface to classification browsing and searching. The power of this interface is in supporting visualisation that will 'convert' what is potentially a user-unfriendly indexing language based on symbols, to a subject presentation that is easy to understand, search and navigate. A summary of the basic functions of searching and browsing a classification that may be provided on a user-friendly interface is given and examples of classification browsing interfaces are provided.
  16. Jaenecke, P.: Knowledge organization due to theory formation (1995) 0.00
    0.0010511212 = product of:
      0.0042044846 = sum of:
        0.0042044846 = product of:
          0.012613453 = sum of:
            0.012613453 = weight(_text_:a in 3751) [ClassicSimilarity], result of:
              0.012613453 = score(doc=3751,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.22789092 = fieldWeight in 3751, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3751)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Theory formation is regarded as a process of domain-internal knowledge organization. Misunderstandings about the concept 'theory' are explained. A theory is considered a systematic representation of a domain, realized by three closely related theory-forming actions: establishing a suitable system of basic concepts, ordering the experience or given experimental results, and synthesizing conflicting hypotheses. In this view, theory formation is an ambitious kind of knowledge representation. Its consequences are summarized and its importance for the human sciences and for society is emphasized
  17. Lund, B.D.: ¬A chat with ChatGPT : how will AI impact scholarly publishing? (2022) 0.00
    0.0010511212 = product of:
      0.0042044846 = sum of:
        0.0042044846 = product of:
          0.012613453 = sum of:
            0.012613453 = weight(_text_:a in 850) [ClassicSimilarity], result of:
              0.012613453 = score(doc=850,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.22789092 = fieldWeight in 850, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=850)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This short project serves as inspiration for a forthcoming paper that will explore the technical side of ChatGPT and the ethical issues it presents for academic researchers, resulting in a peer-reviewed publication. It demonstrates that the capacities of ChatGPT as a "chatbot" are far more advanced than many alternatives available today, and that it may even be usable to draft entire academic manuscripts for researchers. ChatGPT is available via https://chat.openai.com/chat.
  18. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.00
    0.0010177437 = product of:
      0.004070975 = sum of:
        0.004070975 = product of:
          0.012212924 = sum of:
            0.012212924 = weight(_text_:a in 5787) [ClassicSimilarity], result of:
              0.012212924 = score(doc=5787,freq=24.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.22065444 = fieldWeight in 5787, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5787)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This study considers the expressiveness (that is the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also implemented. Applying a comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC). These are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept, and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations, and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, while thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts, rather than the concepts themselves.
  19. Yitzhaki, M.: ¬A draft version of a consolidated thesaurus for the rapidly growing field of alternative medicine (2000) 0.00
    9.97181E-4 = product of:
      0.003988724 = sum of:
        0.003988724 = product of:
          0.011966172 = sum of:
            0.011966172 = weight(_text_:a in 5417) [ClassicSimilarity], result of:
              0.011966172 = score(doc=5417,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2161963 = fieldWeight in 5417, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5417)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
  20. Hausser, R.: Language and nonlanguage cognition (2021) 0.00
    9.97181E-4 = product of:
      0.003988724 = sum of:
        0.003988724 = product of:
          0.011966172 = sum of:
            0.011966172 = weight(_text_:a in 255) [ClassicSimilarity], result of:
              0.011966172 = score(doc=255,freq=16.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2161963 = fieldWeight in 255, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=255)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language-data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage-data as input. In either case, the output is a content which is stored in the agent's onboard short term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of place holder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.

Languages

  • e (English) 33
  • d (German) 13
