Zero-Error Data Compression

  • Raymond W. Yeung
Chapter
Part of the Information Technology: Transmission, Processing and Storage book series (PSTE)

Abstract

In a random experiment, a coin is tossed n times. Let X_i be the outcome of the ith toss, with
$$\Pr\{X_i = \text{HEAD}\} = p \quad\text{and}\quad \Pr\{X_i = \text{TAIL}\} = 1 - p,$$
(3.1)
where 0 ≤ p ≤ 1. It is assumed that the X_i are i.i.d. and that the value of p is known. We are asked to describe the outcome of the random experiment without error (with zero error) using binary symbols. One way to do this is to encode a HEAD by a ‘0’ and a TAIL by a ‘1.’ The outcome of the random experiment is then encoded into a binary codeword of length n. When the coin is fair, i.e., p = 0.5, this is the best we can do, because the probability of every outcome of the experiment is equal to 2^{-n}; in other words, all the outcomes are equally likely.
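
A minimal sketch of this fixed-length encoding, assuming the outcomes are represented as the strings 'HEAD' and 'TAIL' (hypothetical names chosen only for illustration): each HEAD is mapped to '0' and each TAIL to '1', so the n tosses become an n-bit codeword from which the outcome can be recovered exactly.

```python
import random

def encode(outcomes):
    """Zero-error encoding: map each HEAD to '0' and each TAIL to '1'.

    The codeword always has length n (one binary symbol per toss),
    so the outcome can be recovered without error.
    """
    return ''.join('0' if x == 'HEAD' else '1' for x in outcomes)

def decode(codeword):
    """Invert the encoding, recovering the original sequence of outcomes."""
    return ['HEAD' if bit == '0' else 'TAIL' for bit in codeword]

if __name__ == '__main__':
    p, n = 0.5, 10                      # fair coin, 10 tosses (illustrative values)
    tosses = ['HEAD' if random.random() < p else 'TAIL' for _ in range(n)]
    codeword = encode(tosses)
    assert decode(codeword) == tosses   # zero error: decoding is exact
    print(codeword)                     # n binary symbols, e.g. '0110100011'
    # For a fair coin every one of the 2**n outcomes has probability 2**(-n),
    # so no zero-error code can do better than n bits on average.
```

When p ≠ 0.5 the outcomes are no longer equally likely, which is what leaves room for the shorter (variable-length) codes studied in the rest of the chapter.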

Keywords

Internal Node; Optimal Code; Code Tree; Code Symbol; Huffman Code


Copyright information

© Springer Science+Business Media New York 2002

Authors and Affiliations

  • Raymond W. Yeung
  1. The Chinese University of Hong Kong, Hong Kong
