
Performance of Forward Error-Correction Systems

  • William Turin
Chapter
Part of the Information Technology: Transmission, Processing and Storage book series (PSTE)

Abstract

In one-way systems the information flow is strictly unidirectional: from transmitter to receiver. Such a system can be described in terms of the characterization of its input and output processes. In practical applications, however, only certain characteristics of these processes are usually considered. Some basic performance measures used for comparing such systems are the following (a simulation sketch illustrating several of them appears after the list):

P_s: the symbol-error probability at the decoder output

P*: the symbol-erasure probability (the probability of receiving a symbol with detected errors)

P_c: the probability of receiving a message without errors

P_u: the probability of receiving a message with undetected errors

P_d: the probability of receiving a message with detected errors (obviously, P_c + P_u + P_d = 1)

t_d: the average path delay

R: the average information rate (the average ratio of the number of information symbols to the total number of transmitted symbols)

P(EFS): the average percentage of error-free seconds (the average percentage of one-second intervals that contain no errors)
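
As a rough illustration of how these measures relate, the sketch below estimates P_c, P_u, P_d, P_s, and R by Monte Carlo simulation. It is not taken from the chapter: the single-parity-check (8, 7) detection-only code, the binary symmetric channel, and all parameter values are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the chapter's analysis):
# estimate P_c, P_u, P_d, P_s, and R for a single-parity-check (n, k) = (8, 7)
# error-detecting code over a binary symmetric channel with crossover
# probability p.
import random

def simulate(n_trials=200_000, k=7, p=0.01, seed=1):
    n = k + 1                                  # one overall parity bit
    rng = random.Random(seed)
    correct = undetected = detected = 0
    symbol_errors = delivered_symbols = 0

    for _ in range(n_trials):
        info = [rng.randint(0, 1) for _ in range(k)]
        codeword = info + [sum(info) % 2]      # append even parity
        received = [b ^ (rng.random() < p) for b in codeword]

        if sum(received) % 2 != 0:             # parity check fails
            detected += 1                      # message erased: counts toward P_d
        else:                                  # message accepted and delivered
            out = received[:k]
            errs = sum(a != b for a, b in zip(out, info))
            if errs == 0:
                correct += 1                   # counts toward P_c
            else:
                undetected += 1                # counts toward P_u
            symbol_errors += errs              # errors on the decoder output
            delivered_symbols += k

    return {
        "P_c": correct / n_trials,
        "P_u": undetected / n_trials,
        "P_d": detected / n_trials,
        "P_s": symbol_errors / delivered_symbols if delivered_symbols else 0.0,
        "R": k / n,                            # information rate k/n
    }

if __name__ == "__main__":
    for name, value in simulate().items():
        print(f"{name} = {value:.5f}")
```

Since every received message is counted exactly once as error free, undetected, or detected, the three estimates P_c, P_u, and P_d sum to one, in line with the relation noted in the list above.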

Keywords

Code Word, Cyclic Code, Convolutional Code, Viterbi Algorithm, Channel Error

Copyright information

© Springer Science+Business Media New York 2004

Authors and Affiliations

  • William Turin
  1. AT&T Labs—Research, Florham Park, New Jersey, USA
