Search (38 results, page 1 of 2)

  • × language_ss:"e"
  • × type_ss:"a"
  • × year_i:[1970 TO 1980}
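The three facet filters above map to Solr `fq` parameters; the mixed-bracket range `[1970 TO 1980}` includes 1970 and excludes 1980. A minimal sketch of the underlying request, assuming a standard `/select` endpoint (the original `q` value is not shown on this page, so it is left as a placeholder):

```python
# Sketch of the Solr request behind this result page (endpoint path and q are
# assumptions; the fq filters and debug flag are taken from the page itself).
from urllib.parse import urlencode

params = [
    ("q", "..."),                      # the original query string is not shown here
    ("fq", 'language_ss:"e"'),         # language facet
    ("fq", 'type_ss:"a"'),             # document-type facet
    ("fq", "year_i:[1970 TO 1980}"),   # [ = inclusive 1970, } = exclusive 1980
    ("debugQuery", "true"),            # emits the score explanations listed below
]
print("/solr/select?" + urlencode(params))
```

Passing `fq` three times applies all three filters conjunctively, which is why every hit below is an English-language article published 1970-1979.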
  1. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.59
    0.59411585 = product of:
      1.2731054 = sum of:
        0.067005545 = product of:
          0.20101663 = sum of:
            0.20101663 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.20101663 = score(doc=230,freq=2.0), product of:
                0.26825202 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.031640913 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.20101663 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.20101663 = score(doc=230,freq=2.0), product of:
            0.26825202 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.031640913 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.20101663 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.20101663 = score(doc=230,freq=2.0), product of:
            0.26825202 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.031640913 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.20101663 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.20101663 = score(doc=230,freq=2.0), product of:
            0.26825202 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.031640913 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.20101663 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.20101663 = score(doc=230,freq=2.0), product of:
            0.26825202 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.031640913 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.20101663 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.20101663 = score(doc=230,freq=2.0), product of:
            0.26825202 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.031640913 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.20101663 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.20101663 = score(doc=230,freq=2.0), product of:
            0.26825202 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.031640913 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.46666667 = coord(7/15)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
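The explain tree above can be reproduced arithmetically. Under Lucene's ClassicSimilarity, each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm; the rare tokens "3a" and "2f" are fragments of the percent-encoding of the source URL. A minimal sketch plugging in the numbers from result 1 (the function name and structure are mine, not Lucene's):

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    # ClassicSimilarity: one term's score = queryWeight * fieldWeight
    query_weight = idf * query_norm                    # 8.478011 * 0.031640913 ≈ 0.26825202
    field_weight = math.sqrt(freq) * idf * field_norm  # 1.4142135 * 8.478011 * 0.0625 ≈ 0.7493574
    return query_weight * field_weight                 # ≈ 0.20101663

# Numbers taken from the explain tree for doc 230.
t = term_score(freq=2.0, idf=8.478011, query_norm=0.031640913, field_norm=0.0625)

# "3a" sits in a nested clause scaled by coord(1/3); "2f" matched six clauses
# at full weight. The outer coord(7/15) penalizes matching only 7 of 15 clauses.
inner_sum = t * (1 / 3) + 6 * t    # ≈ 1.2731054
score = inner_sum * (7 / 15)       # ≈ 0.59411585, the document score shown above
print(score)
```

This also explains the outsized 0.59 score: the URL fragments are very rare terms (docFreq=24), so their idf dwarfs that of an ordinary word like "evaluation".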
  2. Rijsbergen, C.J. van: Foundations of evaluation (1974) 0.01
    0.008201527 = product of:
      0.12302291 = sum of:
        0.12302291 = weight(_text_:evaluation in 1078) [ClassicSimilarity], result of:
          0.12302291 = score(doc=1078,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.9269066 = fieldWeight in 1078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.15625 = fieldNorm(doc=1078)
      0.06666667 = coord(1/15)
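The idf values printed in these trees follow ClassicSimilarity's formula, idf = 1 + ln(maxDocs / (docFreq + 1)). Plugging in the docFreq and maxDocs shown above reproduces both values appearing in this listing (a sketch of the formula, not Lucene's actual code):

```python
import math

def classic_idf(doc_freq, max_docs):
    # ClassicSimilarity idf: the rarer a term, the higher its weight.
    return 1.0 + math.log(max_docs / (doc_freq + 1))

print(classic_idf(1811, 44218))   # ≈ 4.1947007  ("evaluation", 1811 matching docs)
print(classic_idf(24, 44218))     # ≈ 8.478011   (the rare "3a"/"2f" URL tokens)
```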
    
  3. Garfield, E.: Is citation analysis a legitimate evaluation tool? (1979) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 1086) [ClassicSimilarity], result of:
          0.098418325 = score(doc=1086,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 1086, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=1086)
      0.06666667 = coord(1/15)
    
  4. Keen, E.M.: Prospects for classification suggested by evaluation tests (1976) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 1277) [ClassicSimilarity], result of:
          0.098418325 = score(doc=1277,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 1277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=1277)
      0.06666667 = coord(1/15)
    
  5. Cleverdon, C.W.: Evaluation tests of information retrieval systems (1970) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 2272) [ClassicSimilarity], result of:
          0.098418325 = score(doc=2272,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 2272, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=2272)
      0.06666667 = coord(1/15)
    
  6. Garfield, E.: Citation analysis as a tool in journal evaluation (1972) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 2831) [ClassicSimilarity], result of:
          0.098418325 = score(doc=2831,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 2831, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=2831)
      0.06666667 = coord(1/15)
    
  7. Harter, S.P.: The Cranfield II relevance assessments : a critical evaluation (1971) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 5364) [ClassicSimilarity], result of:
          0.098418325 = score(doc=5364,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 5364, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=5364)
      0.06666667 = coord(1/15)
    
  8. Cooper, W.S.: The paradoxical role of unexamined documents in the evaluation of retrieval effectiveness (1976) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 2186) [ClassicSimilarity], result of:
          0.098418325 = score(doc=2186,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 2186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=2186)
      0.06666667 = coord(1/15)
    
  9. Lancaster, F.W.; Gillespie, C.J.: Design and evaluation of information systems (1970) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 243) [ClassicSimilarity], result of:
          0.098418325 = score(doc=243,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 243, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=243)
      0.06666667 = coord(1/15)
    
  10. Cleverdon, C.W.: Design and evaluation of information systems (1971) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 248) [ClassicSimilarity], result of:
          0.098418325 = score(doc=248,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 248, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=248)
      0.06666667 = coord(1/15)
    
  11. Debons, A.; Montgomery, K.L.: Design and evaluation of information systems (1974) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 260) [ClassicSimilarity], result of:
          0.098418325 = score(doc=260,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 260, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=260)
      0.06666667 = coord(1/15)
    
  12. Swanson, R.W.: Design and evaluation of information systems (1975) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 263) [ClassicSimilarity], result of:
          0.098418325 = score(doc=263,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 263, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=263)
      0.06666667 = coord(1/15)
    
  13. Stern, B.T.: Evaluation and design of bibliographic data bases (1977) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 268) [ClassicSimilarity], result of:
          0.098418325 = score(doc=268,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 268, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=268)
      0.06666667 = coord(1/15)
    
  14. Yang, C.-S.: Design and evaluation of file structures (1978) 0.01
    0.006561222 = product of:
      0.098418325 = sum of:
        0.098418325 = weight(_text_:evaluation in 276) [ClassicSimilarity], result of:
          0.098418325 = score(doc=276,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 276, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.125 = fieldNorm(doc=276)
      0.06666667 = coord(1/15)
    
  15. Bruner, J.: From communication to language (1975) 0.01
    0.006027185 = product of:
      0.09040777 = sum of:
        0.09040777 = weight(_text_:soziale in 1635) [ClassicSimilarity], result of:
          0.09040777 = score(doc=1635,freq=6.0), product of:
            0.19331455 = queryWeight, product of:
              6.1096387 = idf(docFreq=266, maxDocs=44218)
              0.031640913 = queryNorm
            0.4676718 = fieldWeight in 1635, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.1096387 = idf(docFreq=266, maxDocs=44218)
              0.03125 = fieldNorm(doc=1635)
      0.06666667 = coord(1/15)
    
    Abstract
    Bruner was the first researcher of children's language acquisition to appreciate Wittgenstein's problem and to propose an answer to it. Following Wittgenstein's general approach, Bruner argued that the child acquires the conventional use of a linguistic symbol by learning to participate in a form of interaction (form of life, joint attentional scene) that it first understands non-linguistically, so that the adult's language can be anchored in shared experiences whose social significance the child already grasps. One key component of this process is a child who can construe adults as intentional beings, so that it can share attention with them in particular contexts. Another component, however, is the pre-existing, external social world in which the child lives. To acquire language, the child must live in a world that exhibits structured social activities it can understand, just as our hypothetical visitor to Hungary understood buying tickets and travelling by train. For children this often means the recurrence of the same routine, general activity, so that they can recognize how that activity is structured and how the various social roles within it function. If we are interested in language acquisition, the adult must moreover use a new linguistic symbol in a way the child can recognize as relevant to the joint activity (in contrast, that is, to the Hungarian's unmediated address at the railway station). If a child were born into a world in which the same kind of event never recurred, the same object never appeared twice, and adults never used the same expressions in the same context, it would generally be hard to see how that child could acquire a natural language, whatever cognitive capacities it might have.
    A series of studies has shown that, after their first advances in language acquisition, children learn new words best in scenes of joint attention. These are often scenes that recur in their daily experience, such as bathing, feeding, changing diapers, reading aloud, and riding in the car. These activities are in many respects analogous to the ticket-buying scenario at the railway station, in that the child understands its own goals and those of the adult in the given situation, which enables it to infer the relevance of the adult's linguistic behaviour to those goals. Thus Tomasello and Todd found that children who spent longer periods in joint attentional activities with their mothers between twelve and eighteen months of age had larger vocabularies at eighteen months. Regarding adults' language use within these scenes of joint attention, Tomasello and Farrar found both correlational and experimental evidence for the hypothesis that mothers who used language to follow their children's attention (i.e. to talk about an object already at the focus of the child's interest and attention) had children with larger vocabularies than mothers who used language to direct the child's attention to something new.
  16. Sparck Jones, K.: Some thoughts on classification for retrieval (1970) 0.01
    0.005174066 = product of:
      0.038805492 = sum of:
        0.008049765 = product of:
          0.01609953 = sum of:
            0.01609953 = weight(_text_:online in 4327) [ClassicSimilarity], result of:
              0.01609953 = score(doc=4327,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.16765618 = fieldWeight in 4327, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4327)
          0.5 = coord(1/2)
        0.030755727 = weight(_text_:evaluation in 4327) [ClassicSimilarity], result of:
          0.030755727 = score(doc=4327,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.23172665 = fieldWeight in 4327, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4327)
      0.13333334 = coord(2/15)
    
    Abstract
    The suggestion that classifications for retrieval should be constructed automatically raises some serious problems concerning the sorts of classification which are required, and the way in which formal classification theories should be exploited, given that a retrieval classification is required for a purpose. These difficulties have not been sufficiently considered, and the paper therefore attempts an analysis of them, though no solution of immediate application can be suggested. Starting with the illustrative proposition that a polythetic, multiple, unordered classification is required in automatic thesaurus construction, this is considered in the context of classification in general, where eight sorts of classification can be distinguished, each covering a range of class definitions and class-finding algorithms. The problem which follows is that since there is generally no natural or best classification of a set of objects as such, the evaluation of alternative classifications requires either formal criteria of goodness of fit or, if a classification is required for a purpose, a precise statement of that purpose. In any case a substantive theory of classification is needed, which does not exist; and since sufficiently precise specifications of retrieval requirements are also lacking, the only currently available approach to automatic classification experiments for information retrieval is to do enough of them.
    Theme
    Klassifikationssysteme im Online-Retrieval
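The "polythetic" requirement named in the abstract above can be made concrete: in a polythetic class, every pair of members shares many attributes, yet no single attribute is common to all members. A toy illustration with invented documents and attribute sets (the data is hypothetical, chosen only to exhibit the property):

```python
import itertools

# Four hypothetical documents, each described by a set of index terms.
members = {
    "doc1": {"a", "b", "c"},
    "doc2": {"b", "c", "d"},
    "doc3": {"a", "c", "d"},
    "doc4": {"a", "b", "d"},
}

# No single term is shared by every member ...
print(set.intersection(*members.values()))   # empty set

# ... yet every pair of members still shares two of its three terms.
pairs_ok = all(len(a & b) >= 2
               for a, b in itertools.combinations(members.values(), 2))
print(pairs_ok)
```

A monothetic class, by contrast, would demand one defining attribute present in all members, which is exactly what automatic thesaurus construction cannot assume.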
  17. Ryans, C.C.: A study of errors found in non-MARC cataloging in a machine-assisted system (1978) 0.00
    0.003881812 = product of:
      0.058227178 = sum of:
        0.058227178 = product of:
          0.116454355 = sum of:
            0.116454355 = weight(_text_:analyse in 1186) [ClassicSimilarity], result of:
              0.116454355 = score(doc=1186,freq=2.0), product of:
                0.16670908 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.031640913 = queryNorm
                0.6985484 = fieldWeight in 1186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1186)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Analysis of cataloguing errors based on 700 bibliographic records in the OCLC database
  18. Dahlberg, I.: Classification theory, yesterday and today (1976) 0.00
    0.003280611 = product of:
      0.049209163 = sum of:
        0.049209163 = weight(_text_:evaluation in 1618) [ClassicSimilarity], result of:
          0.049209163 = score(doc=1618,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.37076265 = fieldWeight in 1618, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0625 = fieldNorm(doc=1618)
      0.06666667 = coord(1/15)
    
    Abstract
    Until very recently, classification theory was held to be nothing but an expressed or unconscious knowledge framed in intuitively given reasons for the subdivision and arrangement of any universe. Today, after clarification of the elements of classification systems as well as the basis of concept relationships, it is possible to apply a number of principles in the evaluation of existing systems as well as in the construction of new ones, thereby achieving relatively predictable and repeatable results.
  19. Perreault, J.M.: Some problems in the BSO (1979) 0.00
    0.003280611 = product of:
      0.049209163 = sum of:
        0.049209163 = weight(_text_:evaluation in 1865) [ClassicSimilarity], result of:
          0.049209163 = score(doc=1865,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.37076265 = fieldWeight in 1865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0625 = fieldNorm(doc=1865)
      0.06666667 = coord(1/15)
    
    Abstract
    This critical analysis of the BSO draft covers its main structural characteristics from the standpoint of its convenient utilisation, style, terminology, notation, and polygraphic making. The analysis concludes that individual shortcomings inherent in the scheme do not preclude its positive evaluation on the whole as a more refined scheme than the Library of Congress Classification and the Colon Classification.
  20. Stokolova, N.A.: Syntactic tools and semantic power of information languages : Pt.2 of 'Elements of a semantic theory of information retrieval' (1976) 0.00
    0.003280611 = product of:
      0.049209163 = sum of:
        0.049209163 = weight(_text_:evaluation in 1888) [ClassicSimilarity], result of:
          0.049209163 = score(doc=1888,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.37076265 = fieldWeight in 1888, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0625 = fieldNorm(doc=1888)
      0.06666667 = coord(1/15)
    
    Abstract
    Different kinds of syntactic tools of information languages (ILs) in use, considered as meaning-distinguishing tools, are described as simplified forms of some initial IL grammar tools called 'standard phrases', which are n-place relational predicates of a special kind. A quantitative evaluation is attempted of the effects which the idiosyncrasies of the syntactic tools of ILs have on their semantic power.