Search (1694 results, page 1 of 85)

  • year_i:[1990 TO 2000}
  1. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.10
    0.09698563 = product of:
      0.3394497 = sum of:
        0.16256407 = weight(_text_:interpretation in 4483) [ClassicSimilarity], result of:
          0.16256407 = score(doc=4483,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.7594565 = fieldWeight in 4483, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.09375 = fieldNorm(doc=4483)
        0.17688565 = sum of:
          0.116130754 = weight(_text_:anwendung in 4483) [ClassicSimilarity], result of:
            0.116130754 = score(doc=4483,freq=2.0), product of:
              0.1809185 = queryWeight, product of:
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.037368443 = queryNorm
              0.6418954 = fieldWeight in 4483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.09375 = fieldNorm(doc=4483)
          0.06075489 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
            0.06075489 = score(doc=4483,freq=2.0), product of:
              0.13085791 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037368443 = queryNorm
              0.46428138 = fieldWeight in 4483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=4483)
      0.2857143 = coord(2/7)
    
    Date
    15. 3.2000 10:22:37
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
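The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal sketch reproducing entry 1's "interpretation" clause and final score (all constants are copied from the listing; queryNorm is query-dependent and simply taken as given):

```python
import math

# idf as in ClassicSimilarity: 1 + ln(maxDocs / (docFreq + 1))
doc_freq, max_docs = 390, 44218
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~5.7281795

query_norm = 0.037368443                          # taken from the explain output
query_weight = idf * query_norm                   # ~0.21405315

tf = math.sqrt(2.0)                               # tf(freq=2.0) = 1.4142135
field_norm = 0.09375
field_weight = tf * idf * field_norm              # ~0.7594565

weight = query_weight * field_weight              # ~0.16256407

# Final score: sum of the two matching clauses, scaled by coord(2/7)
score = (weight + 0.17688565) * (2 / 7)           # ~0.09698563
```

The coord factor penalizes the document for matching only 2 of the query's 7 clauses.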
  2. Budd, J.M.: ¬The complexity of information retrieval : a hypothetical example (1996) 0.05
    0.045938455 = product of:
      0.16078459 = sum of:
        0.13547005 = weight(_text_:interpretation in 4928) [ClassicSimilarity], result of:
          0.13547005 = score(doc=4928,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.6328804 = fieldWeight in 4928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.078125 = fieldNorm(doc=4928)
        0.02531454 = product of:
          0.05062908 = sum of:
            0.05062908 = weight(_text_:22 in 4928) [ClassicSimilarity], result of:
              0.05062908 = score(doc=4928,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.38690117 = fieldWeight in 4928, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4928)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Inquiries made by academic library users are more complex than they may appear. Successful information retrieval based on complex queries is a function of cataloguing, classification, and the librarian's interpretation. Explores aspects of complexity using a proposed query as an example
    Source
    Journal of academic librarianship. 22(1996) no.2, S.111-117
  3. Kent, R.E.: Implications and rules in thesauri (1994) 0.04
    0.042024657 = product of:
      0.14708629 = sum of:
        0.10837604 = weight(_text_:interpretation in 3457) [ClassicSimilarity], result of:
          0.10837604 = score(doc=3457,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5063043 = fieldWeight in 3457, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0625 = fieldNorm(doc=3457)
        0.03871025 = product of:
          0.0774205 = sum of:
            0.0774205 = weight(_text_:anwendung in 3457) [ClassicSimilarity], result of:
              0.0774205 = score(doc=3457,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.42793027 = fieldWeight in 3457, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3457)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    A central consideration in the study of whole language semantic space as encoded in thesauri is word sense comparability. Shows how word sense comparability can be adequately expressed by the logical implications and rules from Formal Concept Analysis. Formal concept analysis, a new approach to formal logic initiated by Rudolf Wille, has been used for data modelling, analysis and interpretation, and also for knowledge representation and knowledge discovery
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
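Kent's use of implications from Formal Concept Analysis can be illustrated on a toy formal context (objects and attributes below are invented for illustration): an attribute implication A -> B holds exactly when every object carrying all attributes in A also carries all attributes in B.

```python
# Toy formal context: objects mapped to their attribute sets (hypothetical)
context = {
    "thesaurus":  {"controlled", "hierarchical"},
    "ontology":   {"controlled", "hierarchical", "axiomatized"},
    "folksonomy": {"uncontrolled"},
}

def extent(attrs):
    """Derivation operator: all objects having every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objects):
    """Derivation operator: all attributes shared by every given object."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def implies(a, b):
    """Implication A -> B holds iff extent(A) is contained in extent(B)."""
    return extent(a) <= extent(b)
```

In this context, "axiomatized" implies "controlled", but not the other way round.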
  4. Marx, W.: Wie mißt man Forschungsqualität? : der Science Citation Index - ein Maßstab für die Bewertung (1996) 0.04
    0.042024657 = product of:
      0.14708629 = sum of:
        0.10837604 = weight(_text_:interpretation in 5036) [ClassicSimilarity], result of:
          0.10837604 = score(doc=5036,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5063043 = fieldWeight in 5036, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0625 = fieldNorm(doc=5036)
        0.03871025 = product of:
          0.0774205 = sum of:
            0.0774205 = weight(_text_:anwendung in 5036) [ClassicSimilarity], result of:
              0.0774205 = score(doc=5036,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.42793027 = fieldWeight in 5036, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5036)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    An overburdened peer-review system, increasingly scarce research funding, and the strong fascination of rankings are driving the growing use of bibliometric methods for measuring research quality. Most evaluations are based on the Science Citation Index, which in its online-database version can now also be used for large-scale analyses. Extensions to the retrieval language at the host STN International enable statistical analyses that were previously reserved for the SCI's producer and a few specialists. The main prerequisites for meaningful application are the choice of suitable selection criteria and the careful interpretation of the results within the limits of these methods
  5. Singh, S.: Ranganathan and reference services (1992) 0.04
    0.036750764 = product of:
      0.12862767 = sum of:
        0.10837604 = weight(_text_:interpretation in 2517) [ClassicSimilarity], result of:
          0.10837604 = score(doc=2517,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5063043 = fieldWeight in 2517, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0625 = fieldNorm(doc=2517)
        0.020251632 = product of:
          0.040503263 = sum of:
            0.040503263 = weight(_text_:22 in 2517) [ClassicSimilarity], result of:
              0.040503263 = score(doc=2517,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.30952093 = fieldWeight in 2517, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2517)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Defines reference service and discusses Ranganathan's interpretation of and contribution to reference service under the following headings: development of reference service; 4 categories and holistic view of reference service; analyses of reference work and service; reference service and humanism; flair of the reference librarian; symbiosis of reference service and classification; and relevance of Ranganathan's contribution
    Source
    CLIS observer. 9(1992) nos.1/2, S.16-22
  6. Scholz, O.R.: Verstehen und Rationalität : Untersuchungen zu den Grundlagen von Hermeneutik und Sprachphilosophie (1999) 0.04
    0.036494624 = product of:
      0.12773117 = sum of:
        0.10837604 = weight(_text_:interpretation in 517) [ClassicSimilarity], result of:
          0.10837604 = score(doc=517,freq=8.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5063043 = fieldWeight in 517, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.03125 = fieldNorm(doc=517)
        0.019355126 = product of:
          0.03871025 = sum of:
            0.03871025 = weight(_text_:anwendung in 517) [ClassicSimilarity], result of:
              0.03871025 = score(doc=517,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.21396513 = fieldWeight in 517, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.03125 = fieldNorm(doc=517)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Human beings are creatures capable of understanding (or misunderstanding) something. The book treats the foundations of a general theory of understanding and interpretation. A journey through forms of hermeneutic reflection leads from the procedures of allegoresis via the hermeneutica generalis of the early modern period to the analytic philosophy of language and the philosophical hermeneutics of H.-G. Gadamer. Despite all discontinuities, a coherent project deserving the name "general hermeneutics" becomes recognizable from the 17th century onward. Hermeneutics is a discipline of theoretical philosophy, closely interlocked with epistemology, the philosophy of language and signs, the philosophy of mind, and methodology. The generality of hermeneutics has two aspects. It has a broad domain: all phenomena for which a distinction between correct and incorrect understanding is intersubjectively established. In addition, general principles of interpretation are in force, prominent among them principles of hermeneutic equity or charity: presumptions of truth, consistency, and rationality. The second part clarifies the status of general principles of interpretation: they are presumption rules with defeasible presumptions. The hermeneutic presumptions are constitutive conditions for the practices of communication with signs and of the everyday psychological explanation of actions, as well as for the application of the central concepts involved ("propositional attitude", "meaning", "action", etc.). The third part uses the understanding of language as an example to show how the investigation of central forms of understanding can proceed. Guided by the concept of understanding, a reorientation of the philosophy of language is undertaken.
    RSWK
    Interpretation
    Subject
    Interpretation
  7. Texte verstehen : Konzepte, Methoden, Werkzeuge (1994) 0.03
    0.034281626 = product of:
      0.11998569 = sum of:
        0.09579179 = weight(_text_:interpretation in 1661) [ClassicSimilarity], result of:
          0.09579179 = score(doc=1661,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.44751403 = fieldWeight in 1661, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1661)
        0.024193907 = product of:
          0.048387814 = sum of:
            0.048387814 = weight(_text_:anwendung in 1661) [ClassicSimilarity], result of:
              0.048387814 = score(doc=1661,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2674564 = fieldWeight in 1661, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1661)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The volume summarizes the results of the ATLAS project and presents them to the public together with further reflections. Various aspects of 'text' are examined from an interdisciplinary perspective: semiotics, linguistics, psychology, German studies, computer science, and publishing all contribute chapters to this book. Given the complexity of the topic 'text', no unified, discipline-independent model for describing the phenomenon is to be expected. What becomes visible instead is the multifaceted range of approaches one can take to the subject. The tools developed as prototypes in the ATLAS project are also presented, together with the concrete application of these methods. The book closes by problematizing the future role of text under the influence of the 'new media'
    RSWK
    Text / Interpretation / Kongreß / Berlin «1992» (BVB)
    Subject
    Text / Interpretation / Kongreß / Berlin «1992» (BVB)
  8. Grivel, L.; Mutschke, P.; Polanco, X.: Thematic mapping on bibliographic databases by cluster analysis : a description of the SDOC environment with SOLIS (1995) 0.03
    0.032156922 = product of:
      0.112549216 = sum of:
        0.09482904 = weight(_text_:interpretation in 1900) [ClassicSimilarity], result of:
          0.09482904 = score(doc=1900,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 1900, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1900)
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 1900) [ClassicSimilarity], result of:
              0.035440356 = score(doc=1900,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 1900, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1900)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The paper presents a coword-analysis-based system called SDOC which is able to support the intellectual work of an end-user who is searching for information in a bibliographic database. This is done by presenting its thematic structure as a map of keyword clusters (themes) on a graphical user interface. These mapping facilities are demonstrated on the basis of the research field Social History, given by a set of documents from the social science literature database SOLIS. Besides the traditional way of analysing a coword map as a strategic diagram, the notion of cluster relationships analysis is introduced, which provides an adequate interpretation of links between themes
    Source
    Knowledge organization. 22(1995) no.2, S.70-77
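The coword mapping described above rests on keyword co-occurrence counts across documents. A minimal sketch (documents and keywords are invented; the equivalence index shown is the association measure commonly used in coword analysis, though SDOC's exact formula is not reproduced here):

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword sets, one per document
docs = [
    {"social history", "labour", "migration"},
    {"social history", "labour"},
    {"migration", "labour"},
]

# Count pairwise co-occurrences and single-keyword frequencies
cooc = Counter()
for kw in docs:
    for a, b in combinations(sorted(kw), 2):
        cooc[(a, b)] += 1
freq = Counter(k for kw in docs for k in kw)

def equivalence(a, b):
    """Equivalence index E(a,b) = c_ab^2 / (c_a * c_b)."""
    c = cooc[tuple(sorted((a, b)))]
    return c * c / (freq[a] * freq[b])
```

Clustering these link strengths yields the themes that SDOC displays as a map.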
  9. Ingenerf, J.: Benutzeranpaßbare semantische Sprachanalyse und Begriffsrepräsentation für die medizinische Dokumentation (1993) 0.03
    0.032156922 = product of:
      0.112549216 = sum of:
        0.09482904 = weight(_text_:interpretation in 1903) [ClassicSimilarity], result of:
          0.09482904 = score(doc=1903,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 1903, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1903)
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 1903) [ClassicSimilarity], result of:
              0.035440356 = score(doc=1903,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 1903, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1903)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    This dissertation aims at representing concepts of medical subjects, as well as statements on medical subjects, independently of medical terminology, in order to overcome the problems of interpretation and further elaboration of entries in an ordering system such as SNOMED, the MeSH thesaurus, or the ICD classification. The approach of compositional semantic language analysis developed here places specific demands on conceptual modelling and utilizes the following methods: (1) a characteristics-based grammar formalism for the representation of the terminological structures to be analyzed, (2) a terminological representation formalism for a conceptual, intensionally oriented representation of a concept system, and (3) a modified chart parser for grammatical derivation
    Footnote
    Reviewed in: Knowledge organization 22(1995) no.2, S.102-103 (P. Hucklenbroich)
  10. Capps, M.; Ladd, B.; Stotts, D.: Enhanced graph models in the Web : multi-client, multi-head, multi-tail browsing (1996) 0.03
    0.032156922 = product of:
      0.112549216 = sum of:
        0.09482904 = weight(_text_:interpretation in 5860) [ClassicSimilarity], result of:
          0.09482904 = score(doc=5860,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 5860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5860)
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 5860) [ClassicSimilarity], result of:
              0.035440356 = score(doc=5860,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 5860, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5860)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Richer graph models permit authors to 'program' the browsing behaviour they want WWW readers to see by turning the hypertext into a hyperprogram with specific semantics. Multiple browsing streams can be started under the author's control and then kept in step through the synchronization mechanisms provided by the graph model. Adds a Semantic Web Graph Layer (SWGL) which allows dynamic interpretation of link and node structures according to graph models. Details the SWGL and its architecture, some sample protocol implementations, and the latest extensions to MHTML
    Date
    1. 8.1996 22:08:06
  11. Priss, U.: Faceted knowledge representation (1999) 0.03
    0.032156922 = product of:
      0.112549216 = sum of:
        0.09482904 = weight(_text_:interpretation in 2654) [ClassicSimilarity], result of:
          0.09482904 = score(doc=2654,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
              0.035440356 = score(doc=2654,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 2654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2654)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
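The four notions in the abstract above can be sketched concretely (all names and values below are invented for illustration, not taken from the paper):

```python
# Units: atomic elements, here documents and index terms
units = ["doc1", "doc2", "retrieval", "thesaurus"]

# A relation as a binary 0/1 matrix over the units ("is indexed by")
indexed_by = [
    [0, 0, 1, 0],   # doc1 -> retrieval
    [0, 0, 1, 1],   # doc2 -> retrieval, thesaurus
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]

# A facet combines units and relations: one viewpoint of the knowledge system
facet = {"units": units, "relations": {"indexed_by": indexed_by}}

# An interpretation maps units into another representation (display forms)
interpretation = {"retrieval": "Information retrieval", "thesaurus": "Thesauri"}

def terms_of(doc):
    """Read off a document's index terms through the interpretation mapping."""
    i = units.index(doc)
    return [interpretation.get(u, u)
            for j, u in enumerate(units) if indexed_by[i][j]]
```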
  12. Jones, S.: ¬A thesaurus data model for an intelligent retrieval system (1993) 0.03
    0.031518497 = product of:
      0.11031473 = sum of:
        0.081282035 = weight(_text_:interpretation in 5279) [ClassicSimilarity], result of:
          0.081282035 = score(doc=5279,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 5279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=5279)
        0.029032689 = product of:
          0.058065377 = sum of:
            0.058065377 = weight(_text_:anwendung in 5279) [ClassicSimilarity], result of:
              0.058065377 = score(doc=5279,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.3209477 = fieldWeight in 5279, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5279)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    This paper demonstrates the application of conventional database design techniques to thesaurus representation. The thesaurus is considered as a printed document, as a semantic net, and as a relational database to be used in conjunction with an intelligent information retrieval system. Some issues raised by analysis of two standard thesauri include: the prevalence of compound terms and the representation of term structure; thesaurus redundancy and the extent to which it can be eliminated in machine-readable versions; the difficulty of exploiting thesaurus knowledge originally designed for human rather than automatic interpretation; deriving 'strength of association' measures between terms in a thesaurus considered as a semantic net; facet representation and the need for variations in the data model to cater for structural differences between thesauri. A complete schema of database tables is presented, with an outline suggestion for using the stored information when matching one or more thesaurus terms with a user's query
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
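The relational treatment Jones describes can be sketched with two toy tables (the layout and rows below are hypothetical, not the paper's actual schema), using the standard BT/NT/RT relationship types:

```python
# TERM(id, label) as a dict; TERM_REL(term_id, related_id, rel_type) as rows
terms = {1: "information retrieval", 2: "thesauri", 3: "indexing"}
term_rel = [
    (2, 1, "BT"),   # thesauri  BT  information retrieval (illustrative only)
    (1, 2, "NT"),   # inverse relation stored explicitly
    (2, 3, "RT"),
]

def related(term_id, rel_type):
    """Join TERM_REL against TERM for one source term and relation type."""
    return [terms[r] for t, r, rt in term_rel if t == term_id and rt == rel_type]
```

Storing both directions explicitly is one of the redundancy trade-offs the paper raises for machine-readable thesauri.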
  13. Umstätter, W.: Schrift, Information, Interpretation und Wissen (1992) 0.03
    0.030964585 = product of:
      0.21675208 = sum of:
        0.21675208 = weight(_text_:interpretation in 8990) [ClassicSimilarity], result of:
          0.21675208 = score(doc=8990,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            1.0126086 = fieldWeight in 8990, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.125 = fieldNorm(doc=8990)
      0.14285715 = coord(1/7)
    
  14. Jochum, U.: Bibliothek, Buch und Information (1991) 0.03
    0.030964585 = product of:
      0.21675208 = sum of:
        0.21675208 = weight(_text_:interpretation in 1205) [ClassicSimilarity], result of:
          0.21675208 = score(doc=1205,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            1.0126086 = fieldWeight in 1205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.125 = fieldNorm(doc=1205)
      0.14285715 = coord(1/7)
    
    Footnote
    Erwiderung darauf: Umstätter, W.: Schrift, Information, Interpretation und Wissen
  15. Klüver, J.; Kier, R.: Rekonstruktion und Verstehen : ein Computer-Programm zur Interpretation sozialwissenschaftlicher Texte (1994) 0.03
    0.030964585 = product of:
      0.21675208 = sum of:
        0.21675208 = weight(_text_:interpretation in 6830) [ClassicSimilarity], result of:
          0.21675208 = score(doc=6830,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            1.0126086 = fieldWeight in 6830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.125 = fieldNorm(doc=6830)
      0.14285715 = coord(1/7)
    
  16. Lepsky, K.: Ernst H. Gombrich : Theorie und Methode (1991) 0.03
    0.028442785 = product of:
      0.19909948 = sum of:
        0.19909948 = weight(_text_:interpretation in 1685) [ClassicSimilarity], result of:
          0.19909948 = score(doc=1685,freq=12.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.9301404 = fieldWeight in 1685, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=1685)
      0.14285715 = coord(1/7)
    
    LCSH
    Gombrich, E. H. (Ernst Hans), 1909 / 2001 / Criticism and interpretation
    RSWK
    Gombrich, Ernst H. / Interpretation / Kunst / Methode (BVB)
    Gombrich, Ernst H. / Interpretation / Kunst (BVB)
    Subject
    Gombrich, Ernst H. / Interpretation / Kunst / Methode (BVB)
    Gombrich, Ernst H. / Interpretation / Kunst (BVB)
    Gombrich, E. H. (Ernst Hans), 1909 / 2001 / Criticism and interpretation
  17. Iivonen, M.: Consistency in the selection of search concepts and search terms (1995) 0.03
    0.027563075 = product of:
      0.09647076 = sum of:
        0.081282035 = weight(_text_:interpretation in 1757) [ClassicSimilarity], result of:
          0.081282035 = score(doc=1757,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 1757, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=1757)
        0.015188723 = product of:
          0.030377446 = sum of:
            0.030377446 = weight(_text_:22 in 1757) [ClassicSimilarity], result of:
              0.030377446 = score(doc=1757,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.23214069 = fieldWeight in 1757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1757)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Considers intersearcher and intrasearcher consistency in the selection of search terms. Based on an empirical study where 22 searchers from 4 different types of search environments analyzed altogether 12 search requests of 4 different types in 2 separate test situations between which 2 months elapsed. Statistically very significant differences in consistency were found according to the types of search environments and search requests. Consistency was also considered according to the extent of the scope of search concept. At level I search terms were compared character by character. At level II different search terms were accepted as the same search concept with a rather simple evaluation of linguistic expressions. At level III, in addition to level II, the hierarchical approach of the search request was also controlled. At level IV different search terms were accepted as the same search concept with a broad interpretation of the search concept. Both intersearcher and intrasearcher consistency grew most immediately after a rather simple evaluation of linguistic expressions
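The effect of the comparison levels can be illustrated with a toy consistency measure (the overlap-over-union measure and the normalization table below are my assumptions for illustration, not the paper's formulas):

```python
def consistency(a, b):
    """Pairwise consistency as shared terms over the union of terms."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Level I: character-by-character comparison of the raw search terms
s1 = {"thesaurus", "semantic distance"}
s2 = {"thesauri", "semantic distance"}
level1 = consistency(s1, s2)

# Level II and above: accept linguistic variants as the same search concept
variants = {"thesauri": "thesaurus"}      # hypothetical normalization table
norm = lambda t: variants.get(t, t)
level2 = consistency({norm(t) for t in s1}, {norm(t) for t in s2})
```

Broadening the comparison from level I to level II raises the measured consistency here from 1/3 to 1.0, mirroring the paper's finding that consistency grows once linguistic variants are equated.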
  18. Johnson, F.C.: ¬A natural language understanding system for reference resolution in information dialogues (1996) 0.03
    0.027563075 = product of:
      0.09647076 = sum of:
        0.081282035 = weight(_text_:interpretation in 6968) [ClassicSimilarity], result of:
          0.081282035 = score(doc=6968,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 6968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=6968)
        0.015188723 = product of:
          0.030377446 = sum of:
            0.030377446 = weight(_text_:22 in 6968) [ClassicSimilarity], result of:
              0.030377446 = score(doc=6968,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.23214069 = fieldWeight in 6968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6968)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Demonstrates how a natural language understanding (NLU) system can be developed to utilize low-level pragmatic information, primarily the linguistic context, to deal effectively with the linguistic devices of anaphora and ellipsis and to allow users to query databases for reference information without the need for menus or fixed phrases and query structures. Describes the components which comprise an NLU system able to handle continuous dialogue. Given that the syntactic and semantic information can produce a suitable interpretation of each utterance, pragmatic information may be used to determine how this contextual information shapes the interpretation of subsequent utterances. Suggests that the approach taken allows the system to provide a cooperative response to assist the user in attaining the information-seeking goal.
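The pragmatic component described, resolving anaphors against the linguistic context of the dialogue, can be sketched with a deliberately simple salience rule: a pronoun is resolved to the most recently mentioned entity in the dialogue history. Both the rule and the sample dialogue are simplifying assumptions for illustration, not the paper's actual method:

```python
# Minimal sketch of anaphor resolution against the linguistic context:
# pronouns are replaced by the most recently mentioned entity. This
# "most recent wins" rule is a simplification of the pragmatic
# component described in the abstract.

PRONOUNS = {"it", "its", "they", "them"}

def resolve(utterance: str, history: list) -> str:
    """Replace pronouns with the last entity recorded in the history."""
    if not history:
        return utterance
    antecedent = history[-1]
    words = [antecedent if w.lower() in PRONOUNS else w
             for w in utterance.split()]
    return " ".join(words)

# Hypothetical two-turn dialogue; the tracked entity is supplied by hand.
print(resolve("Who wrote this thesaurus paper", []))
print(resolve("When was it published", ["the thesaurus paper"]))
# -> "When was the thesaurus paper published"
```

A real system would populate the history from syntactic and semantic analysis of each utterance; here it is supplied by hand to keep the sketch self-contained.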
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  19. Srihari, R.K.: Using speech input for image interpretation, annotation, and retrieval (1997) 0.03
    0.027563075 = product of:
      0.09647076 = sum of:
        0.081282035 = weight(_text_:interpretation in 764) [ClassicSimilarity], result of:
          0.081282035 = score(doc=764,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=764)
        0.015188723 = product of:
          0.030377446 = sum of:
            0.030377446 = weight(_text_:22 in 764) [ClassicSimilarity], result of:
              0.030377446 = score(doc=764,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.23214069 = fieldWeight in 764, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=764)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    22. 9.1997 19:16:05
  20. Däßler, R.; Palm, H.: Virtuelle Informationsräume mit VRML : Informationen recherchieren und präsentieren in 3D (1997) 0.03
    0.027563075 = product of:
      0.09647076 = sum of:
        0.081282035 = weight(_text_:interpretation in 2280) [ClassicSimilarity], result of:
          0.081282035 = score(doc=2280,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 2280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=2280)
        0.015188723 = product of:
          0.030377446 = sum of:
            0.030377446 = weight(_text_:22 in 2280) [ClassicSimilarity], result of:
              0.030377446 = score(doc=2280,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.23214069 = fieldWeight in 2280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2280)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Searching for information is one of the most important activities when working with the Internet. So far this has mainly been done text-based, with the help of search engines or thematic catalogues. A new way of accessing information is spatial visualization, a technique that is now standard in the presentation and interpretation of scientific data. 3D visualization can, however, also be used to retrieve and present textual information. It creates virtual information spaces that can be flown through, as with a flight simulator, in order to search for information. What such 3D user interfaces look like, and how they can be created with VRML, is the subject of this book.
    Date
    17. 7.2002 16:32:22

Types

  • a 1393
  • m 166
  • s 87
  • el 26
  • x 16
  • i 15
  • r 13
  • b 8
  • ? 6
  • p 5
  • d 3
  • n 3
  • h 2
  • au 1
  • l 1
  • u 1
