Error exponent

In information theory, the error exponent of a channel code or source code over the block length of the code is the rate at which the error probability decays exponentially with the block length of the code. Formally, it is defined as the limiting ratio of the negative logarithm of the error probability to the block length of the code for large block lengths.
For example, if the probability of error $P_{\mathrm{error}}$ of a decoder drops as $e^{-n\alpha}$, where $n$ is the block length, the error exponent is $\alpha$. In this example, $\frac{-\ln P_{\mathrm{error}}}{n}$ approaches $\alpha$ for large $n$. Many information-theoretic theorems are asymptotic in nature: for example, the channel coding theorem states that for any rate less than the channel capacity, the probability of error of the channel code can be made to go to zero as the block length goes to infinity. In practical situations, however, there are limits on the allowable communication delay, so the block length must be finite. It is therefore important to study how quickly the probability of error drops as the block length grows.

Error exponent in channel coding

For time-invariant DMCs

The channel coding theorem states that for any ε > 0 and for any rate less than the channel capacity, there is an encoding and decoding scheme that can be used to ensure that the probability of block error is less than ε for sufficiently long message blocks X. Also, for any rate greater than the channel capacity, the probability of block error at the receiver goes to one as the block length goes to infinity.

Assume the following channel coding setup: the channel can transmit any of $M = 2^{nR}$ messages by sending the corresponding codeword (which has length $n$).
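As a concrete illustration of the limit $\frac{-\ln P_{\mathrm{error}}}{n} \to \alpha$, the sketch below (a hypothetical example, not from the article) computes the exact error probability of majority-vote decoding of an $n$-bit repetition code over a binary symmetric channel with crossover probability $p = 0.1$. For this simple scheme the limiting exponent is known to be $D(1/2 \| p) = -\ln(2\sqrt{p(1-p)})$, and the empirical ratio approaches it from above.

```python
from math import exp, log, lgamma, sqrt

p = 0.1  # BSC crossover probability (hypothetical value chosen for illustration)

def log_binom_pmf(n, k):
    # log of C(n, k) * p^k * (1-p)^(n-k), kept in the log domain
    # because the raw probabilities underflow for large n
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def log_p_error(n):
    # majority-vote decoding of an n-bit repetition code (n odd) fails
    # exactly when more than n/2 bits are flipped; log-sum-exp over the tail
    logs = [log_binom_pmf(n, k) for k in range((n + 1) // 2, n + 1)]
    m = max(logs)
    return m + log(sum(exp(v - m) for v in logs))

# limiting exponent for this scheme: D(1/2 || p) = -ln(2*sqrt(p*(1-p)))
alpha = -log(2 * sqrt(p * (1 - p)))

for n in (11, 101, 1001):
    print(n, -log_p_error(n) / n)   # decreases toward alpha as n grows
print("limit:", alpha)
```

The gap between the finite-$n$ ratio and $\alpha$ shrinks like $O(\frac{\log n}{n})$, which is why the convergence in the printout is visible but slow.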
Each component of each codeword is drawn i.i.d. according to some probability distribution with probability mass function $Q$. At the decoding end, maximum likelihood decoding is performed.

Let $X_i^n$ be the $i$th random codeword in the codebook, where $i$ runs from $1$ to $M$. Suppose the first message is selected, so codeword $X_1^n$ is transmitted. Given that $y_1^n$ is received, the probability that the codeword is incorrectly detected as $X_2^n$ is:
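The random-coding setup just described can be sketched in code. The instance below is a hypothetical small one (the block length, rate, and channel values are chosen for illustration and do not come from the article): each codeword component is drawn i.i.d. Bernoulli(1/2), the channel is a binary symmetric channel, and maximum-likelihood decoding reduces to minimum-Hamming-distance decoding since $p < 1/2$.

```python
import random

random.seed(0)
n, R, p = 12, 0.5, 0.05      # block length, rate, BSC crossover (all hypothetical)
M = 2 ** int(n * R)          # number of messages, M = 2^{nR}

# draw each codeword component i.i.d. Bernoulli(1/2), per the random-coding argument
codebook = [[random.randint(0, 1) for _ in range(n)] for _ in range(M)]

def transmit(x):
    # pass codeword x through the BSC: flip each bit independently with prob. p
    return [b ^ (random.random() < p) for b in x]

def ml_decode(y):
    # for the BSC with p < 1/2, maximum-likelihood decoding is
    # minimum-Hamming-distance decoding over the whole codebook
    return min(range(M), key=lambda i: sum(a != b for a, b in zip(codebook[i], y)))

# Monte Carlo estimate of the block error probability, always sending message 1
# (index 0): an error occurs when some other codeword is decoded instead
trials = 2000
errors = sum(ml_decode(transmit(codebook[0])) != 0 for _ in range(trials))
print("estimated P_error:", errors / trials)
```

Repeating this experiment with larger $n$ at a fixed rate below capacity shows the estimated block error probability decaying exponentially, which is the behavior the error exponent quantifies; the brute-force decoder here is only practical for toy sizes, since it scans all $M$ codewords.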