Search (68 results, page 2 of 4)

  • Active filter: type_ss:"p"
  1. Grunst, G.; Thomas, Christoph; Oppermann, R.: Intelligente Benutzerschnittstellen : kontext-sensitive Hilfen und Adaptivität (1991) 0.01
    0.008935061 = product of:
      0.053610366 = sum of:
        0.053610366 = weight(_text_:und in 570) [ClassicSimilarity], result of:
          0.053610366 = score(doc=570,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.55409175 = fieldWeight in 570, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=570)
      0.16666667 = coord(1/6)
    
    Imprint
    Sankt Augustin : Gesellschaft für Mathematik und Datenverarbeitung
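
    The breakdown above is Lucene ClassicSimilarity "explain" output: score = coord × queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and tf = sqrt(freq). A minimal sketch re-tracing the arithmetic for this entry (the helper name is ours; the constants are the values printed in the tree):

    ```python
    import math

    def classic_similarity(freq, idf, query_norm, field_norm, coord):
        """Recompute one Lucene ClassicSimilarity explain tree."""
        tf = math.sqrt(freq)                    # 2.0 = tf(freq=4.0)
        query_weight = idf * query_norm         # 0.09675359 = queryWeight
        field_weight = tf * idf * field_norm    # 0.55409175 = fieldWeight
        return coord * query_weight * field_weight

    # Entry 1 (Grunst et al. 1991): term "und", freq=4 in doc 570,
    # one of six query clauses matched, hence coord(1/6).
    score = classic_similarity(freq=4.0, idf=2.216367,
                               query_norm=0.043654136,
                               field_norm=0.125, coord=1 / 6)
    print(score)  # ≈ 0.008935061
    ```

    The same function reproduces every single-clause tree on this page; only freq, idf, fieldNorm, and coord vary per entry.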
  2. Großjohann, K.: Gathering-, Harvesting-, Suchmaschinen (1996) 0.01
    0.008364413 = product of:
      0.050186478 = sum of:
        0.050186478 = product of:
          0.100372955 = sum of:
            0.100372955 = weight(_text_:22 in 3227) [ClassicSimilarity], result of:
              0.100372955 = score(doc=3227,freq=4.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.6565931 = fieldWeight in 3227, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3227)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    7.2.1996 22:38:41
    Pages
    22 S.
  3. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.01
    0.007032239 = product of:
      0.021096716 = sum of:
        0.006310384 = weight(_text_:in in 1171) [ClassicSimilarity], result of:
          0.006310384 = score(doc=1171,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10626988 = fieldWeight in 1171, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1171)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
              0.029572664 = score(doc=1171,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Logical rules are essential for uncovering the logical connections between relations, which could improve the reasoning performance and provide interpretable results on knowledge graphs (KGs). Although there have been many efforts to mine meaningful logical rules over KGs, existing methods suffer from computationally intensive searches over the rule space and a lack of scalability to large-scale KGs. Moreover, they often ignore the semantics of relations, which is crucial for uncovering logical connections. Recently, large language models (LLMs) have shown impressive performance in natural language processing and various applications, owing to their emergent abilities and generalizability. In this paper, we propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs. Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs to prompt LLMs to generate logical rules. To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs. Finally, a rule validator harnesses the reasoning ability of LLMs to validate the logical correctness of the ranked rules through chain-of-thought reasoning. ChatRule is evaluated on four large-scale KGs with respect to different rule quality metrics and downstream tasks, demonstrating the effectiveness and scalability of our method.
    Date
    23.11.2023 19:07:22
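
    Entry 3 shows the multi-clause case: two of the six query terms ("in" and "22") match doc 1171, so their weights are summed and scaled by the outer coord(2/6), and the "22" clause is additionally scaled by an inner coord(1/2). A sketch re-tracing that sum (helper name ours; constants from the tree above):

    ```python
    import math

    def term_score(freq, idf, query_norm, field_norm):
        # One term clause: queryWeight * fieldWeight, tf = sqrt(freq)
        return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

    # Entry 3 (ChatRule, doc 1171)
    w_in = term_score(4.0, 1.3602545, 0.043654136, 0.0390625)
    w_22 = term_score(2.0, 3.5018296, 0.043654136, 0.0390625) * 0.5  # inner coord(1/2)
    score = (w_in + w_22) * (2 / 6)                                  # outer coord(2/6)
    print(score)  # ≈ 0.007032239
    ```

    Note how the high-idf "22" clause dominates despite its lower term frequency: idf outweighs tf in both factors of the product.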
  4. Williamson, N.J.: Online Klassifikation : Gegenwart und Zukunft (1988) 0.01
    0.006318042 = product of:
      0.037908252 = sum of:
        0.037908252 = weight(_text_:und in 765) [ClassicSimilarity], result of:
          0.037908252 = score(doc=765,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.39180204 = fieldWeight in 765, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=765)
      0.16666667 = coord(1/6)
    
  5. Austin, D.: PRECIS: Grundprinzipien, Funktion und Anwendung (1983) 0.01
    0.006318042 = product of:
      0.037908252 = sum of:
        0.037908252 = weight(_text_:und in 1001) [ClassicSimilarity], result of:
          0.037908252 = score(doc=1001,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.39180204 = fieldWeight in 1001, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=1001)
      0.16666667 = coord(1/6)
    
  6. Schaper, K.-H.: Vorteile und Grenzen eines konventionellen Archivs (1984) 0.01
    0.006318042 = product of:
      0.037908252 = sum of:
        0.037908252 = weight(_text_:und in 1851) [ClassicSimilarity], result of:
          0.037908252 = score(doc=1851,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.39180204 = fieldWeight in 1851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=1851)
      0.16666667 = coord(1/6)
    
  7. Grötschel, M.; Lügger, J.; Sperber, W.: Wissenschaftliches Publizieren und elektronische Fachinformation im Umbruch : ein Situationsbericht aus der Sicht der Mathematik (1993) 0.01
    0.005528287 = product of:
      0.03316972 = sum of:
        0.03316972 = weight(_text_:und in 1946) [ClassicSimilarity], result of:
          0.03316972 = score(doc=1946,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.34282678 = fieldWeight in 1946, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=1946)
      0.16666667 = coord(1/6)
    
  8. Allen, G.G.: Change in the catalogue in the context of library management (1976) 0.00
    0.003365538 = product of:
      0.020193228 = sum of:
        0.020193228 = weight(_text_:in in 1575) [ClassicSimilarity], result of:
          0.020193228 = score(doc=1575,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.34006363 = fieldWeight in 1575, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=1575)
      0.16666667 = coord(1/6)
    
  9. Kollewe, W.; Sander, C.; Schmiede, R.; Wille, R.: TOSCANA als Instrument der bibliothekarischen Sacherschließung (1995) 0.00
    0.003159021 = product of:
      0.018954126 = sum of:
        0.018954126 = weight(_text_:und in 585) [ClassicSimilarity], result of:
          0.018954126 = score(doc=585,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.19590102 = fieldWeight in 585, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=585)
      0.16666667 = coord(1/6)
    
    Abstract
    TOSCANA is a computer program for building conceptual exploration systems on the basis of Formal Concept Analysis. This paper discusses how TOSCANA can be used for library subject indexing and thematic literature searching. It also reports on the research project 'Anwendung eines Modells begrifflicher Wissenssysteme im Bereich der Literatur zur interdisziplinären Technikforschung', funded by the Darmstädter Zentrum für interdisziplinäre Technikforschung.
  10. Jouguelet, S.: Subject access and the marketplace for bibliographic information in France (1989) 0.00
    0.0029448462 = product of:
      0.017669076 = sum of:
        0.017669076 = weight(_text_:in in 998) [ClassicSimilarity], result of:
          0.017669076 = score(doc=998,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.29755569 = fieldWeight in 998, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=998)
      0.16666667 = coord(1/6)
    
    Abstract
    Also contains a description of the subject indexing systems used in the national bibliographies.
  11. Guizzardi, G.; Guarino, N.: Semantics, ontology and explanation (2023) 0.00
    0.0028220895 = product of:
      0.016932536 = sum of:
        0.016932536 = weight(_text_:in in 976) [ClassicSimilarity], result of:
          0.016932536 = score(doc=976,freq=20.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.28515202 = fieldWeight in 976, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=976)
      0.16666667 = coord(1/6)
    
    Abstract
    The terms 'semantics' and 'ontology' are increasingly appearing together with 'explanation', not only in the scientific literature, but also in organizational communication. However, all of these terms are also being significantly overloaded. In this paper, we discuss their strong relation under particular interpretations. Specifically, we discuss a notion of explanation termed ontological unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications) by revealing their ontological commitment in terms of their assumed truthmakers, i.e., the entities in one's ontology that make the propositions in those descriptions true. To illustrate this idea, we employ an ontological theory of relations to explain (by revealing the hidden semantics of) a very simple symbolic model encoded in the standard modeling language UML. We also discuss the essential role played by ontology-driven conceptual models (resulting from this form of explanation processes) in properly supporting semantic interoperability tasks. Finally, we discuss the relation between ontological unpacking and other forms of explanation in philosophy and science, as well as in the area of Artificial Intelligence.
  12. Geißelmann, F.: Perspektiven für eine DDC-Anwendung in den Bibliotheksverbünden (1999) 0.00
    0.0025762038 = product of:
      0.015457222 = sum of:
        0.015457222 = weight(_text_:in in 486) [ClassicSimilarity], result of:
          0.015457222 = score(doc=486,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.260307 = fieldWeight in 486, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=486)
      0.16666667 = coord(1/6)
    
    Content
    Lecture given at the 89th Deutscher Bibliothekartag in Freiburg im Breisgau, 1999: 'Grenzenlos in die Zukunft'.
  13. Hausser, R.: Language and nonlanguage cognition (2021) 0.00
    0.0025241538 = product of:
      0.015144923 = sum of:
        0.015144923 = weight(_text_:in in 255) [ClassicSimilarity], result of:
          0.015144923 = score(doc=255,freq=16.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.25504774 = fieldWeight in 255, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=255)
      0.16666667 = coord(1/6)
    
    Abstract
    A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language-data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage-data as input. In either case, the output is a content which is stored in the agent's onboard short term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of place holder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.
  14. Pejtersen, A.M.; Jensen, H.; Speck, P.; Villumsen, S.; Weber, S.: Catalogs for children : the Book House project on visualization of database retrieval and classification (1993) 0.00
    0.0024665273 = product of:
      0.014799163 = sum of:
        0.014799163 = weight(_text_:in in 6232) [ClassicSimilarity], result of:
          0.014799163 = score(doc=6232,freq=22.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24922498 = fieldWeight in 6232, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6232)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper describes the Book House system, which is designed to support children's information retrieval in libraries as part of their education. It is a shareware program available on CD-ROM and discs, and comprises functionality for database searching as well as for the classification and storage of book information in the database. The system concept is based on an understanding of children's domain structures and their capabilities for categorizing information needs in connection with their activities in public libraries, in school libraries or in schools. These structures are visualized in the interface by using metaphors and multimedia technology. Through the use of text, images and animation, the Book House helps children - even at a very early age - to learn by doing in an enjoyable way that plays on their previous experiences with computer games. Both words and pictures can be used for searching; this makes the system suitable for all age groups. Even children who have not yet learned to read properly can, by selecting pictures, search for and find books they would like to have read aloud. Thus, at the very beginning of their school period, they can learn to search for books on their own. For the library community itself, such a system provides an extended service which will increase the number of children's own searches and also improve the relevance, quality and utilization of the library collections. Market research on the need for an annual indexing service for books in the Book House format is being prepared by the Danish Library Center.
  15. Holley, R.P.: Classification in the USA (1985) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 1730) [ClassicSimilarity], result of:
          0.014278769 = score(doc=1730,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 1730, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=1730)
      0.16666667 = coord(1/6)
    
  16. Jouguelet, S.: Subject indexing in France : tools and projects (1985) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 1742) [ClassicSimilarity], result of:
          0.014278769 = score(doc=1742,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 1742, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=1742)
      0.16666667 = coord(1/6)
    
  17. Kelm, B.: Computergestützte Sacherschließung in der Deutschen Bibliothek Frankfurt am Main (1983) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 1745) [ClassicSimilarity], result of:
          0.014278769 = score(doc=1745,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 1745, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=1745)
      0.16666667 = coord(1/6)
    
  18. Satija, M.P.: Classification and indexing in India : a state-of-the-art (1992) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 1539) [ClassicSimilarity], result of:
          0.014278769 = score(doc=1539,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 1539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=1539)
      0.16666667 = coord(1/6)
    
  19. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.00
    0.0021859813 = product of:
      0.013115887 = sum of:
        0.013115887 = weight(_text_:in in 5365) [ClassicSimilarity], result of:
          0.013115887 = score(doc=5365,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 5365, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5365)
      0.16666667 = coord(1/6)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analyzing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the available works show that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  20. Robertson, S.E.: OKAPI at TREC-1 (1994) 0.00
    0.0021034614 = product of:
      0.012620768 = sum of:
        0.012620768 = weight(_text_:in in 7953) [ClassicSimilarity], result of:
          0.012620768 = score(doc=7953,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 7953, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=7953)
      0.16666667 = coord(1/6)
    
    Abstract
    Describes the work carried out on the TREC-2 project following the results of the TREC-1 project. Experiments were conducted on the OKAPI experimental text information retrieval system which investigated a number of alternative probabilistic term weighting functions in place of the 'standard' Robertson Sparck Jones weighting functions used in TREC-1

Languages

  • e 44
  • d 24
