Search (1 results, page 1 of 1)

  • author_ss:"Narasimhamurthi, N."
  • theme_ss:"Information"
  • year_i:[1990 TO 2000}
  1. Narasimhamurthi, N.: A primer on information theory (1996) 0.01
    Abstract
    Presents, at an elementary level, the central ideas of information theory in terms of naive probability theory. Information theory rests on the 2 pillars of quantifying uncertainty and coding, and deals with how best to code information so that its uncertainty can be reduced most efficiently. Develops the quantification of uncertainty in 2 steps: first, the special case when all possible outcomes have the same probability; and second, the general case when all possibilities need not be equally probable. Examines 2 forms of coding: proactive and reactive. Coding is essentially an artificial classification or indexing of data, and this aspect of information theory is of interest to anyone classifying, storing, and disseminating information.
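
    The two steps of quantifying uncertainty described in the abstract can be sketched in a few lines of Python (an illustrative sketch of the standard Shannon entropy, not code from the primer itself):

    ```python
    import math

    def entropy(probs):
        """Shannon entropy in bits of a discrete probability distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Special case: n equally likely outcomes -> entropy is log2(n) bits
    print(entropy([0.25] * 4))                   # 2.0

    # General case: unequal probabilities yield lower uncertainty
    print(entropy([0.5, 0.25, 0.125, 0.125]))    # 1.75
    ```

    The second value being lower than the first illustrates why skewed distributions can be coded more efficiently, which is the link between quantifying uncertainty and coding that the abstract describes.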