Search (1 result, page 1 of 1)

  • author_ss:"Korthof, G."
  • theme_ss:"Information"
  1. Korthof, G.: Information Content, Compressibility and Meaning. Published 18 June 2000; updated 31 May 2006; postscript 20 Oct 2009. (2000) 0.01
    0.0059357807 = product of:
      0.023743123 = sum of:
        0.023743123 = product of:
          0.047486246 = sum of:
            0.047486246 = weight(_text_:software in 4245) [ClassicSimilarity], result of:
              0.047486246 = score(doc=4245,freq=2.0), product of:
                0.18056466 = queryWeight, product of:
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.045514934 = queryNorm
                0.2629875 = fieldWeight in 4245, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9671519 = idf(docFreq=2274, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4245)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
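
    The nested output above is Lucene's ClassicSimilarity (TF-IDF) explain tree for the match on the term "software". As a minimal Python sketch of the same arithmetic (the idf formula 1 + ln(maxDocs / (docFreq + 1)) is Lucene's ClassicSimilarity definition; every other constant is read directly off the tree):

        import math

        # Constants taken from the explain tree above.
        freq = 2.0                               # termFreq of "software" in doc 4245
        tf = math.sqrt(freq)                     # 1.4142135 = tf(freq=2.0)
        idf = 1 + math.log(44218 / (2274 + 1))   # 3.9671519 = idf(docFreq=2274, maxDocs=44218)
        query_norm = 0.045514934                 # queryNorm
        field_norm = 0.046875                    # fieldNorm(doc=4245)

        query_weight = idf * query_norm          # 0.18056466
        field_weight = tf * idf * field_norm     # 0.2629875
        weight = query_weight * field_weight     # 0.047486246

        # coord(1/2) and coord(1/4): only 1 of 2 and 1 of 4 query clauses matched.
        score = weight * 0.5 * 0.25
        print(f"{score:.10f}")                   # ~0.0059357807
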
    
    Abstract
    In New Scientist, 18 Sept 1999, "Life force", pp. 27-30, Paul Davies writes: "an apparently random sequence such as 110101001010010111... cannot be condensed into a simple set of instructions, so it has a high information content" (p. 29). This notion of 'information content' leads to paradoxes. Consider random number generator software, and let it generate first 100 and then 1000 random numbers. According to the above definition, the second sequence has an information content ten times higher than the first, because its description would be ten times longer. However, both sequences are generated by the same simple set of instructions, so they should have exactly the same 'information content'. That is the paradox. It seems clear that this measure of 'information content' misses the point: it measures the compressibility of a sequence, not its 'information content'. One needs the meaning of a sequence to capture information content.
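
    The paradox is easy to reproduce. Below is a minimal Python sketch (not from the source): a few lines of code stand in for the 'simple set of instructions', and zlib compression stands in for Davies' "condensing into a simple set of instructions". A general-purpose compressor cannot discover the generating program behind pseudo-random output, so the measured 'information content' grows roughly tenfold with the longer sequence, even though the generator never changes.

        import random
        import zlib

        def generator(n, seed=42):
            # The entire 'simple set of instructions': a seeded PRNG.
            rng = random.Random(seed)
            return bytes(rng.randrange(256) for _ in range(n))

        for n in (100, 1000):
            data = generator(n)
            packed = zlib.compress(data, level=9)
            print(f"{n:4d} generated bytes -> {len(packed)} compressed bytes")

        # Typical result: the compressed size scales with sequence length,
        # although both sequences come from the same tiny program above.
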