Search (911 results, page 2 of 46)

  • Filter: year_i:[1980 TO 1990}
  1. Repo, A.J.: ¬The dual approach to the value of information : an appraisal of use and exchange values (1989) 0.03
    0.034321602 = product of:
      0.068643205 = sum of:
        0.02404301 = weight(_text_:to in 5772) [ClassicSimilarity], result of:
          0.02404301 = score(doc=5772,freq=2.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.28121543 = fieldWeight in 5772, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.109375 = fieldNorm(doc=5772)
        0.044600192 = product of:
          0.089200385 = sum of:
            0.089200385 = weight(_text_:22 in 5772) [ClassicSimilarity], result of:
              0.089200385 = score(doc=5772,freq=2.0), product of:
                0.16467917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04702661 = queryNorm
                0.5416616 = fieldWeight in 5772, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5772)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Information processing and management. 22(1986) no.5, S.373-383
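Each score above is a Lucene ClassicSimilarity (TF-IDF) explanation: tf is the square root of the term frequency, idf is 1 + ln(maxDocs / (docFreq + 1)), and each clause contributes queryWeight * fieldWeight, with coord() scaling for the fraction of query clauses matched. A minimal sketch that reproduces the `weight(_text_:to in 5772)` leaf from the tree above (the constants are copied from the explanation itself):

```python
import math

def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one leaf of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm             # tf * idf * fieldNorm
    return query_weight * field_weight

# Constants copied from the explanation of weight(_text_:to in 5772):
score = classic_similarity(freq=2.0, doc_freq=19512, max_docs=44218,
                           query_norm=0.04702661, field_norm=0.109375)
print(score)  # ~0.02404301, matching the explain tree
```

The same function with docFreq=3622 reproduces the 0.089200385 contribution of the `22` clause; multiplying the summed clauses by coord(2/4) = 0.5 yields the final document score shown above.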
  2. Ruge, G.; Schwarz, C.: Natural language access to free-text data bases (1989) 0.03
    0.032741394 = product of:
      0.06548279 = sum of:
        0.017000975 = weight(_text_:to in 3567) [ClassicSimilarity], result of:
          0.017000975 = score(doc=3567,freq=4.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.19884932 = fieldWeight in 3567, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3567)
        0.04848181 = product of:
          0.09696362 = sum of:
            0.09696362 = weight(_text_:language in 3567) [ClassicSimilarity], result of:
              0.09696362 = score(doc=3567,freq=6.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.5255505 = fieldWeight in 3567, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3567)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Problems of indexing and searching free-text data bases are discussed in detail. The possibilities and limitations of Boolean searching are shown. An experimental system, COPSY (Context operator syntax), built in order to avoid common errors connected with Boolean search, is outlined. This system accepts as input any natural language search question formulation and yields as output documents ranked on the basis of an automatically calculated correspondence between the natural language search question and a content-based analysis of the documents. COPSY is part of a text processing project at Siemens AG called TINA (Text-Inhalts-Analyse...). Software from TINA is currently being applied and evaluated by the US Department of Commerce for patent searching and indexing.
  3. Pacey, P.: ¬The classification of literature in the Dewey Decimal Classification : the primacy of language and the taint of colonialism (1989) 0.03
    0.032741394 = product of:
      0.06548279 = sum of:
        0.017000975 = weight(_text_:to in 448) [ClassicSimilarity], result of:
          0.017000975 = score(doc=448,freq=4.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.19884932 = fieldWeight in 448, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0546875 = fieldNorm(doc=448)
        0.04848181 = product of:
          0.09696362 = sum of:
            0.09696362 = weight(_text_:language in 448) [ClassicSimilarity], result of:
              0.09696362 = score(doc=448,freq=6.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.5255505 = fieldWeight in 448, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=448)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The classification of literature by language, while apparently eminently sensible, can fragment national literatures and create groupings in which parts of the literatures of other nations are subordinated to that of the country of the "mother tongue." Classifying literature in this way reflects, and implicitly endorses, literary colonialism. The exception which the Dewey Classification makes of American literature in English undermines the rule, and is not matched by similar treatment for African literature, for example. A more flexible approach to classifying literature is called for, which will recognize place as the ground of community and culture while not forgetting the importance of language as an element in national and cultural identity.
  4. Vledutz-Stokolov, N.: Concept recognition in an automatic text-processing system for the life sciences (1987) 0.03
    0.03252077 = product of:
      0.06504154 = sum of:
        0.012143553 = weight(_text_:to in 2849) [ClassicSimilarity], result of:
          0.012143553 = score(doc=2849,freq=4.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.14203523 = fieldWeight in 2849, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2849)
        0.05289799 = product of:
          0.10579598 = sum of:
            0.10579598 = weight(_text_:language in 2849) [ClassicSimilarity], result of:
              0.10579598 = score(doc=2849,freq=14.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.57342255 = fieldWeight in 2849, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2849)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article describes a natural-language text-processing system designed as an automatic aid to subject indexing at BIOSIS. The intellectual procedure the system should model is deep indexing with a controlled vocabulary of biological concepts - Concept Headings (CHs). On average, ten CHs are assigned to each article by BIOSIS indexers. The automatic procedure consists of two stages: (1) translation of natural-language biological titles into title-semantic representations in a constructed formalized language of Concept Primitives, and (2) translation of the latter representations into the language of CHs. The first stage is performed by matching the titles against the system's Semantic Vocabulary (SV). The SV currently contains approximately 15,000 biological natural-language terms and their translations in the language of Concept Primitives. For the ambiguous terms, the SV contains algorithmic rules of term disambiguation, rules based on semantic analysis of the contexts. The second stage of the automatic procedure is performed by matching the title representations against the CH definitions, formulated as Boolean search strategies in the language of Concept Primitives. Three experiments performed with the system and their results are described. The most typical problems the system encounters, the problems of lexical and situational ambiguities, are discussed. The disambiguation techniques employed are described and demonstrated with many examples.
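The two-stage procedure described in this abstract (title terms mapped to Concept Primitives via the Semantic Vocabulary, then Boolean matching against CH definitions) can be sketched as follows; the vocabulary entries and Concept Heading definitions below are invented placeholders, not actual BIOSIS data:

```python
# Stage 1: translate natural-language title terms into Concept Primitives
# via a Semantic Vocabulary. Stage 2: match the primitives against Concept
# Heading (CH) definitions formulated as Boolean expressions.
# All entries below are hypothetical illustrations.
SEMANTIC_VOCABULARY = {          # term -> set of Concept Primitives
    "insulin":   {"HORMONE", "PROTEIN"},
    "secretion": {"RELEASE-PROCESS"},
    "pancreas":  {"GLAND", "DIGESTIVE-ORGAN"},
}

CONCEPT_HEADINGS = {             # CH -> Boolean definition over primitives
    "Hormone physiology": lambda p: "HORMONE" in p and "RELEASE-PROCESS" in p,
    "Digestive system":   lambda p: "DIGESTIVE-ORGAN" in p or "GLAND" in p,
}

def assign_chs(title):
    """Assign Concept Headings to a title via the two-stage matching."""
    primitives = set()
    for term in title.lower().split():
        primitives |= SEMANTIC_VOCABULARY.get(term, set())
    return [ch for ch, matches in CONCEPT_HEADINGS.items() if matches(primitives)]

print(assign_chs("Insulin secretion by the pancreas"))
# -> ['Hormone physiology', 'Digestive system']
```

The real system additionally applies context-sensitive disambiguation rules before Stage 2, which this sketch omits.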
  5. Teodorescu, I.: Artificial intelligence and information retrieval (1987) 0.03
    0.032137115 = product of:
      0.06427423 = sum of:
        0.024287106 = weight(_text_:to in 542) [ClassicSimilarity], result of:
          0.024287106 = score(doc=542,freq=4.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.28407046 = fieldWeight in 542, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.078125 = fieldNorm(doc=542)
        0.03998712 = product of:
          0.07997424 = sum of:
            0.07997424 = weight(_text_:language in 542) [ClassicSimilarity], result of:
              0.07997424 = score(doc=542,freq=2.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.4334667 = fieldWeight in 542, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.078125 = fieldNorm(doc=542)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Comparing artificial intelligence and information retrieval paradigms for natural language understanding provides a basis for reviewing progress to date. The applicability of artificial intelligence to question-answering systems is outlined. A list of the principal artificial intelligence software for data base front-end systems is appended.
  6. Smith, J.M.: ¬The Standard Generalized Markup Language (SGML) : guidelines for authors (1987) 0.03
    0.032137115 = product of:
      0.06427423 = sum of:
        0.024287106 = weight(_text_:to in 5946) [ClassicSimilarity], result of:
          0.024287106 = score(doc=5946,freq=4.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.28407046 = fieldWeight in 5946, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.078125 = fieldNorm(doc=5946)
        0.03998712 = product of:
          0.07997424 = sum of:
            0.07997424 = weight(_text_:language in 5946) [ClassicSimilarity], result of:
              0.07997424 = score(doc=5946,freq=2.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.4334667 = fieldWeight in 5946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5946)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Guidelines for authors of scholarly publications who wish to prepare documents for a publisher on existing text entry devices, word processors and personal computers, adding markup to the text in accordance with the SGML standard.
  7. Meadow, C.T.; Cerny, B.A.; Borgman, C.L.; Case, D.O.: Online access to knowledge : system design (1989) 0.03
    0.03153736 = product of:
      0.06307472 = sum of:
        0.02914453 = weight(_text_:to in 813) [ClassicSimilarity], result of:
          0.02914453 = score(doc=813,freq=16.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.34088457 = fieldWeight in 813, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=813)
        0.033930197 = product of:
          0.067860395 = sum of:
            0.067860395 = weight(_text_:language in 813) [ClassicSimilarity], result of:
              0.067860395 = score(doc=813,freq=4.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.3678087 = fieldWeight in 813, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=813)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The project Online Access to Knowledge (OAK) has developed a computer intermediary for selected users of the Department of Energy's DOE/RECON and BASIS online information retrieval systems. Its purpose is to enable people who have little or no training or experience in bibliographic searching to conduct their own searches without the assistance of a trained librarian, hence permitting the user to work at a place and time of his or her choosing. The purpose of this article is to report on the design and the rationale for the design. OAK software consists of both a tutorial and an assistance program. The latter does not employ a command language, and hence obviates the need for a searcher to learn the formal language usually associated with an online database search service. It is central to our approach that this system does not supplant the user's ultimate primacy in knowing what he or she is looking for, nor in judging the results.
  8. Kaiser, J.O.: Systematic indexing (1985) 0.03
    0.031487815 = product of:
      0.06297563 = sum of:
        0.02379641 = weight(_text_:to in 571) [ClassicSimilarity], result of:
          0.02379641 = score(doc=571,freq=24.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.2783311 = fieldWeight in 571, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.03125 = fieldNorm(doc=571)
        0.039179217 = product of:
          0.078358434 = sum of:
            0.078358434 = weight(_text_:language in 571) [ClassicSimilarity], result of:
              0.078358434 = score(doc=571,freq=12.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.4247089 = fieldWeight in 571, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.03125 = fieldNorm(doc=571)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A native of Germany and a former teacher of languages and music, Julius Otto Kaiser (1868-1927) came to the Philadelphia Commercial Museum to be its librarian in 1896. Faced with the problem of making "information" accessible, he developed a method of indexing he called systematic indexing. The first draft of his scheme, published in 1896-97, was an important landmark in the history of subject analysis. R. K. Olding credits Kaiser with making the greatest single advance in indexing theory since Charles A. Cutter and John Metcalfe eulogizes him by observing that "in sheer capacity for really scientific and logical thinking, Kaiser's was probably the best mind that has ever applied itself to subject indexing." Kaiser was an admirer of "system." By systematic indexing he meant indicating information not with natural language expressions as, for instance, Cutter had advocated, but with artificial expressions constructed according to formulas. Kaiser grudged natural language its approximateness, its vagaries, and its ambiguities. The formulas he introduced were to provide a "machinery for regularising or standardising language" (paragraph 67). Kaiser recognized three categories or "facets" of index terms: (1) terms of concretes, representing things, real or imaginary (e.g., money, machines); (2) terms of processes, representing either conditions attaching to things or their actions (e.g., trade, manufacture); and (3) terms of localities, representing, for the most part, countries (e.g., France, South Africa). Expressions in Kaiser's index language were called statements. Statements consisted of sequences of terms, the syntax of which was prescribed by formula. These formulas specified sequences of terms by reference to category types. 
Only three citation orders were permitted: (1) a term in the concrete category followed by one in the process category (e.g., Wool-Scouring); (2) a country term followed by a process term (e.g., Brazil-Education); and (3) a concrete term followed by a country term, followed by a process term (e.g., Nitrate-Chile-Trade). Kaiser's system was a precursor of two of the most significant developments in twentieth-century approaches to subject access: the special purpose use of language for indexing, thus the concept of index language, which was to emerge as a generative idea at the time of the second Cranfield experiment (1966), and the use of facets to categorize subject indicators, which was to become the characterizing feature of analytico-synthetic indexing methods such as the Colon classification. In addition to its visionary quality, Kaiser's work is notable for its meticulousness and honesty, as can be seen, for instance, in his observations about the difficulties in facet definition.
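Kaiser's three permitted citation orders amount to category-sequence formulas over his three facets; a minimal sketch, with the category assignments of the example terms taken from the abstract and the rest of the category table purely illustrative:

```python
# Kaiser's statement formulas as category sequences. The example terms and
# their categories follow the abstract; further entries are illustrative.
CATEGORY = {
    "Wool": "concrete", "Nitrate": "concrete", "Money": "concrete",
    "Scouring": "process", "Education": "process", "Trade": "process",
    "Brazil": "country", "Chile": "country", "France": "country",
}

PERMITTED_ORDERS = [
    ("concrete", "process"),             # (1) e.g. Wool-Scouring
    ("country", "process"),              # (2) e.g. Brazil-Education
    ("concrete", "country", "process"),  # (3) e.g. Nitrate-Chile-Trade
]

def is_valid_statement(statement):
    """Check a Kaiser statement like 'Nitrate-Chile-Trade' against the formulas."""
    cats = tuple(CATEGORY.get(term) for term in statement.split("-"))
    return cats in PERMITTED_ORDERS

assert is_valid_statement("Wool-Scouring")
assert is_valid_statement("Nitrate-Chile-Trade")
assert not is_valid_statement("Trade-Chile")  # process before country: not permitted
```

The fixed citation order is exactly what makes the scheme "systematic": the syntax of a statement is prescribed by formula rather than by natural-language word order.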
  9. Johansen, T.: ¬An outline of a non-linguistic approach to subject-relationships (1985) 0.03
    0.030596204 = product of:
      0.06119241 = sum of:
        0.02726221 = weight(_text_:to in 701) [ClassicSimilarity], result of:
          0.02726221 = score(doc=701,freq=14.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.3188683 = fieldWeight in 701, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=701)
        0.033930197 = product of:
          0.067860395 = sum of:
            0.067860395 = weight(_text_:language in 701) [ClassicSimilarity], result of:
              0.067860395 = score(doc=701,freq=4.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.3678087 = fieldWeight in 701, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=701)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Not language itself, but the reality displayed by means of language, should be the object of investigation. One must try to look behind linguistic expressions in attempting to visualize this reality, especially when one is concerned with subjects and relationships which cannot be made objects of direct observation (immaterial subjects and relationships). This leads to the well-known fact that subject-relationships are of two kinds: static or dynamic, where the latter covers what in linguistic terminology is labeled processes, actions and action-processes. As one or two subject-connections are always present in a dynamic subject-connection, it is reasonable to consider this type of connection as the framework inside which the contents of a sentence are suspended. Another characteristic of great importance is the fact that even if a dynamic connection can be expressed in one sentence, one sentence sometimes contains linguistic expressions of subjects that do not belong to the dynamic connection in question. This in its turn leads to the question of mutual relationships between connections, which is only touched upon in this paper.
  10. Tomasselli, G.: Erfahrungen beim Einsatz eines PROLOG-Programms auf Mikrorechnern : zur Erfassung und Prüfung bibliographischer Daten; PROLOG als Mittel zur Beschreibung bibliographischen Wissens (1989) 0.03
    0.030203545 = product of:
      0.06040709 = sum of:
        0.020821858 = weight(_text_:to in 2493) [ClassicSimilarity], result of:
          0.020821858 = score(doc=2493,freq=6.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.24353972 = fieldWeight in 2493, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2493)
        0.039585233 = product of:
          0.079170465 = sum of:
            0.079170465 = weight(_text_:language in 2493) [ClassicSimilarity], result of:
              0.079170465 = score(doc=2493,freq=4.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.42911017 = fieldWeight in 2493, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2493)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Outlines the advantages of knowledge-based systems, developed as a form of artificial intelligence and transformed into partially effective expert systems, including the concept of logical programming, i.e. defining all relevant knowledge to satisfy logical conditions or IF-THEN rules instead of using a traditional, algorithmic programming language. Links the features of PROLOG to these concepts, along with its capacity as a machine language for future 5th generation computers, including microcomputers. Examines how logical programming allows bibliographical data and processes to be described, and the development of the inter-library bibliographic data base DALIS for producing booklists for state libraries.
  11. Riggs, F.W.: Information and social science : the need for onomantics (1989) 0.03
    0.030203545 = product of:
      0.06040709 = sum of:
        0.020821858 = weight(_text_:to in 2842) [ClassicSimilarity], result of:
          0.020821858 = score(doc=2842,freq=6.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.24353972 = fieldWeight in 2842, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2842)
        0.039585233 = product of:
          0.079170465 = sum of:
            0.079170465 = weight(_text_:language in 2842) [ClassicSimilarity], result of:
              0.079170465 = score(doc=2842,freq=4.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.42911017 = fieldWeight in 2842, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2842)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The special language used by authors in writing up their research is differentiated into cryptic and delphic modes, depending on the ways the terms (names) for new concepts are produced. The 1st approach, characteristic of the language of the natural and applied sciences, is based on the coining of absolutely new words (mainly acronyms) or word phrases. The 2nd one, widely applied by social scientists, relies on the assignment of new meanings to familiar terms. The latter confuses readers and hampers their understanding of the authors' ideas, making the precise indexing of social science publications dubious. Proposes a new, 'onomantic' approach that, in contrast to the conventional semantic paradigm, proceeds from the definition of a concept to the identification of a corresponding unequivocal term, the final product being a dictionary of concepts, or 'nomenclator'.
  12. Salton, G.: Automatic processing of foreign language documents (1985) 0.03
    0.02978099 = product of:
      0.05956198 = sum of:
        0.02379641 = weight(_text_:to in 3650) [ClassicSimilarity], result of:
          0.02379641 = score(doc=3650,freq=24.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.2783311 = fieldWeight in 3650, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.03125 = fieldNorm(doc=3650)
        0.03576557 = product of:
          0.07153114 = sum of:
            0.07153114 = weight(_text_:language in 3650) [ClassicSimilarity], result of:
              0.07153114 = score(doc=3650,freq=10.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.38770443 = fieldWeight in 3650, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3650)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The attempt to computerize a process, such as indexing, abstracting, classifying, or retrieving information, begins with an analysis of the process into its intellectual and nonintellectual components. That part of the process which is amenable to computerization is mechanical or algorithmic. What is not is intellectual or creative and requires human intervention. Gerard Salton has been an innovator, experimenter, and promoter in the area of mechanized information systems since the early 1960s. He has been particularly ingenious at analyzing the process of information retrieval into its algorithmic components. He received a doctorate in applied mathematics from Harvard University before moving to the computer science department at Cornell, where he developed a prototype automatic retrieval system called SMART. Working with this system he and his students contributed for over a decade to our theoretical understanding of the retrieval process. On a more practical level, they have contributed design criteria for operating retrieval systems. The following selection presents one of the early descriptions of the SMART system; it is valuable as it shows the direction automatic retrieval methods were to take beyond simple word-matching techniques. These include various word normalization techniques to improve recall, for instance, the separation of words into stems and affixes; the correlation and clustering, using statistical association measures, of related terms; and the identification, using a concept thesaurus, of synonymous, broader, narrower, and sibling terms. They include, as well, techniques, both linguistic and statistical, to deal with the thorny problem of how to automatically extract from texts index terms that consist of more than one word. They include weighting techniques and various document-request matching algorithms. Significant among the latter are those which produce a retrieval output of citations ranked in relevance order.
During the 1970s, Salton and his students went on to further refine these various techniques, particularly the weighting and statistical association measures. Many of their early innovations seem commonplace today. Some of their later techniques are still ahead of their time and await technological developments for implementation. The particular focus of the selection that follows is on the evaluation of a particular component of the SMART system, a multilingual thesaurus. By mapping English language expressions and their German equivalents to a common concept number, the thesaurus permitted the automatic processing of German language documents against English language queries and vice versa. The results of the evaluation, as it turned out, were somewhat inconclusive. However, this SMART experiment suggested in a bold and optimistic way how one might proceed to answer such complex questions as: What is meant by retrieval language compatibility? How is it to be achieved, and how evaluated?
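The multilingual-thesaurus idea described here, mapping English and German expressions to shared concept numbers so that queries and documents in either language become comparable, can be sketched as follows; the concept numbers and vocabulary entries are invented for illustration, not drawn from the SMART thesaurus:

```python
# English and German expressions map to common concept numbers, so matching
# happens in concept space rather than word space. All entries are invented.
THESAURUS = {
    "information": 101, "retrieval": 102, "thesaurus": 103,
    "informationen": 101, "wiederauffindung": 102,
}

def to_concepts(text):
    """Translate a text into the set of concept numbers it mentions."""
    return {THESAURUS[w] for w in text.lower().split() if w in THESAURUS}

def overlap(query, document):
    """Shared concepts between query and document, regardless of language."""
    return to_concepts(query) & to_concepts(document)

# An English query matches a German document through the common concept numbers:
print(overlap("information retrieval", "Wiederauffindung von Informationen"))
```

In the actual SMART experiments the concept overlap would then feed the usual weighting and document-request matching machinery described above.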
  13. Rees-Potter, L.K.: Dynamic thesaural systems : a bibliometric study of terminological and conceptual change in sociology and economics with application to the design of dynamic thesaural systems (1989) 0.03
    0.02973371 = product of:
      0.05946742 = sum of:
        0.027477724 = weight(_text_:to in 5059) [ClassicSimilarity], result of:
          0.027477724 = score(doc=5059,freq=8.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.32138905 = fieldWeight in 5059, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0625 = fieldNorm(doc=5059)
        0.031989697 = product of:
          0.063979395 = sum of:
            0.063979395 = weight(_text_:language in 5059) [ClassicSimilarity], result of:
              0.063979395 = score(doc=5059,freq=2.0), product of:
                0.18449916 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.04702661 = queryNorm
                0.34677336 = fieldWeight in 5059, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5059)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Thesauri have been used in the library and information science field to provide a standard descriptor language for indexers or searchers to use in an information storage and retrieval system. One difficulty has been the maintenance and updating of thesauri, since the terms used to describe concepts change over time and vary between users. This study investigates a mechanism by which thesauri can be updated and maintained using citation analysis, co-citation analysis and citation context analysis.
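The co-citation analysis mentioned in this abstract can be sketched as a simple pair count over reference lists; the paper labels below are placeholders, not data from the study:

```python
from itertools import combinations
from collections import Counter

# Count how often pairs of earlier works are cited together (co-citation).
# The citing papers and their reference lists are placeholder examples.
reference_lists = [
    ["Kuhn1962", "Merton1968", "Price1965"],
    ["Kuhn1962", "Merton1968"],
    ["Kuhn1962", "Price1965"],
]

cocitations = Counter()
for refs in reference_lists:
    for pair in combinations(sorted(set(refs)), 2):
        cocitations[pair] += 1

# Frequently co-cited works signal a conceptual link a dynamic thesaurus
# could use to propose new or revised descriptor relationships:
print(cocitations.most_common(2))
```

Citation context analysis would additionally examine the sentences surrounding each citation to detect how the terminology attached to a concept has shifted.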
  14. Rau, L.F.; Jacobs, P.S.; Zernik, U.: Information extraction and text summarization using linguistic knowledge acquisition (1989) 0.03
    
    Abstract
    Storing and accessing texts in a conceptual format has a number of advantages over traditional document retrieval methods. A conceptual format facilitates natural language access to text information. It can support imprecise and inexact queries, conceptual information summarisation, and, ultimately, document translation. Describes 2 methods which have been implemented in a prototype intelligent information retrieval system called SCISOR (System for Conceptual Information Summarisation, Organization and Retrieval). Describes the text processing, language acquisition, and summarisation components of SCISOR
  15. Mayer, H.: ¬Das internationale Esperanto-Museum : Sammlung für Plansprachen (1989) 0.03
    
    Abstract
    Traces the history of the International Esperanto Museum in the Austrian National Library from its origin in 1927 as a private collection, showing how Esperanto is now classified as a planned language. Discusses its place in language evolution and its relationship to the DDC, as well as its links with algorithmic languages, automatic language translation with Esperanto as a catalyst and the museum's role in research
  16. Feng, S.: ¬A comparative study of indexing languages in single and multidatabase searching (1989) 0.03
    
    Abstract
    An experiment was conducted using 3 databases in library and information science - Library and Information Science Abstracts (LISA), Information Science Abstracts and ERIC - to investigate some of the main factors affecting on-line searching: effectiveness of search vocabularies, combinations of fields searched, and overlaps among databases. Natural language, controlled vocabulary and a mixture of natural language and controlled terms were tested using different fields of bibliographic records. Also discusses a comparative evaluation of single and multidatabase searching, measuring the overlap among databases and its influence upon on-line searching.
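The overlap measurement mentioned above can be sketched in a few lines (hypothetical result-set data; the study's own overlap formula is not given in the abstract, so a common small-set overlap measure from database coverage studies is used here):

```python
def overlap(a, b):
    """Fraction of the smaller result set that also appears in the other,
    a simple pairwise overlap measure for comparing database coverage."""
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Hypothetical record ids retrieved by the same query from two databases
lisa = {"r1", "r2", "r3", "r4"}
eric = {"r3", "r4", "r5"}
print(f"{overlap(lisa, eric):.0%}")  # → 67%
```

High pairwise overlap suggests that searching the second database adds few unique records; low overlap argues for multidatabase searching.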
  17. Asija, S.P.: Natural language interface without artificial intelligence (1989) 0.03
    
    Abstract
    SWIFT-ANSWER (Special Word Indexed Full Text Alpha Numeric Storage With Easy Retrieval) is a natural language interface that allows searchers to communicate with the computer in their own languages. The system operates without the need for artificial intelligence.
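The abstract does not describe SWIFT-ANSWER's internals, but the "word indexed full text" idea in its name can be sketched generically (illustrative documents and names, not the actual system):

```python
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids containing it -
    the basic word-indexed full-text lookup, needing no AI component."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

# Hypothetical full-text records
docs = {
    1: "natural language interface",
    2: "language interface without artificial intelligence",
}
index = build_index(docs)
print(sorted(index["language"]))  # → [1, 2]
```

A query in the searcher's own words then reduces to set operations over the posting sets of the query's words, with no linguistic analysis required.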
  18. Studwell, W.E.: Subject suggestions 4 : some concerns relating to literature and language (1989) 0.03
    
    Abstract
    Four policy proposals are presented which affect LC's subject headings for literature and language: a system of period subdivisions for use under those literatures which lack them; a clear definition of the term "Philology"; subjects for individual radio, TV, and movie scripts; and a clear relationship between literature and folklore.
  19. Crystal, D.: Linguistics and indexing (1984) 0.03
    
    Abstract
    In recent years, linguistics has developed a way of looking at language which may offer some insights to the indexer. Three main stages of inquiry are identified: observational, intuitional and evaluative. It is suggested that evaluative discussion of indexes is dependent on prior research at the observational and intuitional stages
  20. Smith, J.M.: ¬The Standard Generalized Markup Language (SGML) : guidelines for editors and publishers (1987) 0.03
    
    Abstract
    Guidelines for editors and publishers of scholarly texts to which markup has been added in accordance with the SGML standard
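As a minimal sketch of the descriptive markup SGML prescribes (element names here are illustrative, not necessarily those of the guidelines' own document type definition), a scholarly text fragment might be tagged like this:

```sgml
<!-- Descriptive markup: the logical structure is tagged, not the appearance -->
<article>
  <title>The Standard Generalized Markup Language</title>
  <author>Smith, J.M.</author>
  <section>
    <heading>Introduction</heading>
    <para>SGML separates a document's logical structure from its
    formatting, so one source text can drive many renderings.</para>
  </section>
</article>
```

Because only structure is recorded, the same source can be typeset, indexed, or converted without retagging.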
