A First Course in Information Theory, pp 41-59

# Zero-Error Data Compression

Chapter


## Abstract

In a random experiment, a coin is tossed *n* times. Let *X*_{ i } be the outcome of the *i*th toss, with

$$\Pr \left\{ {{X_i} = {\rm{HEAD}}} \right\} = p\quad {\rm{and}}\quad \Pr \left\{ {{X_i} = {\rm{TAIL}}} \right\} = 1 - p,$$

(3.1)

where 0 ≤ *p* ≤ 1. It is assumed that the *X*_{ i } are i.i.d. and that the value of *p* is known. We are asked to describe the outcome of the random experiment without error (i.e., with zero error) using binary symbols. One way to do this is to encode a HEAD by a ‘0’ and a TAIL by a ‘1,’ so that the outcome of the experiment is encoded into a binary codeword of length *n*. When the coin is fair, i.e., *p* = 0.5, this is the best we can do, because every outcome of the experiment has probability 2^{−n}; in other words, all the outcomes are equally likely.

## Keywords

Internal Node, Optimal Code, Code Tree, Code Symbol, Huffman Code
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
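The encoding described in the abstract can be sketched in a few lines. This is an illustrative sketch, not code from the chapter; the names `encode_naive` and `entropy` are my own. It encodes each toss as one bit and computes the binary entropy H(p), the quantity that governs the average number of bits per toss a good code needs, so that for a biased coin (p ≠ 0.5) fewer than *n* bits suffice on average.

```python
import math

def encode_naive(tosses):
    # One bit per toss: HEAD -> '0', TAIL -> '1'; n tosses -> n bits.
    return ''.join('0' if t == 'HEAD' else '1' for t in tosses)

def entropy(p):
    # Binary entropy H(p) in bits; equals 0 at p = 0 or p = 1.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(encode_naive(['HEAD', 'TAIL', 'TAIL', 'HEAD']))  # 0110
print(entropy(0.5))  # 1.0: for a fair coin, one bit per toss is optimal
print(entropy(0.1))  # about 0.469 bits per toss on average
```

For the fair coin, H(0.5) = 1 bit per toss, matching the claim that the one-bit-per-toss code cannot be improved; for a biased coin, H(p) < 1, which is what motivates the variable-length codes studied in this chapter.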


## Copyright information

© Springer Science+Business Media New York 2002