[{"@context":"http:\/\/schema.org\/","@type":"BlogPosting","@id":"https:\/\/wiki.edu.vn\/en\/wiki6\/noisy-channel-coding-theorem-wikipedia\/#BlogPosting","mainEntityOfPage":"https:\/\/wiki.edu.vn\/en\/wiki6\/noisy-channel-coding-theorem-wikipedia\/","headline":"Noisy-channel coding theorem – Wikipedia","name":"Noisy-channel coding theorem – Wikipedia","description":"before-content-x4 Limit on data transfer rate “Shannon’s theorem” redirects here. Shannon’s name is also associated with the sampling theorem. after-content-x4","datePublished":"2014-01-23","dateModified":"2014-01-23","author":{"@type":"Person","@id":"https:\/\/wiki.edu.vn\/en\/wiki6\/author\/lordneo\/#Person","name":"lordneo","url":"https:\/\/wiki.edu.vn\/en\/wiki6\/author\/lordneo\/","image":{"@type":"ImageObject","@id":"https:\/\/secure.gravatar.com\/avatar\/44a4cee54c4c053e967fe3e7d054edd4?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/44a4cee54c4c053e967fe3e7d054edd4?s=96&d=mm&r=g","height":96,"width":96}},"publisher":{"@type":"Organization","name":"Enzyklop\u00e4die","logo":{"@type":"ImageObject","@id":"https:\/\/wiki.edu.vn\/wiki4\/wp-content\/uploads\/2023\/08\/download.jpg","url":"https:\/\/wiki.edu.vn\/wiki4\/wp-content\/uploads\/2023\/08\/download.jpg","width":600,"height":60}},"image":{"@type":"ImageObject","@id":"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/fffc6476f6ba31efec56b837c6fa92d39d097495","url":"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/fffc6476f6ba31efec56b837c6fa92d39d097495","height":"","width":""},"url":"https:\/\/wiki.edu.vn\/en\/wiki6\/noisy-channel-coding-theorem-wikipedia\/","wordCount":12199,"articleBody":" (adsbygoogle = window.adsbygoogle || []).push({});before-content-x4Limit on data transfer rate“Shannon’s theorem” redirects here. Shannon’s name is also associated with the sampling theorem. (adsbygoogle = window.adsbygoogle || []).push({});after-content-x4In information theory, the noisy-channel coding theorem (sometimes Shannon’s theorem or Shannon’s limit), establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel. This result was presented by Claude Shannon in 1948 and was based in part on earlier work and ideas of Harry Nyquist and Ralph Hartley.The Shannon limit or Shannon capacity of a communication channel refers to the maximum rate of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level. It was first described by Shannon (1948), and shortly after published in a book by Shannon and Warren Weaver entitled The Mathematical Theory of Communication (1949). This founded the modern discipline of information theory. (adsbygoogle = window.adsbygoogle || []).push({});after-content-x4Table of ContentsOverview[edit]Mathematical statement[edit]Outline of proof[edit]Achievability for discrete memoryless channels[edit]Weak converse for discrete memoryless channels[edit]Strong converse for discrete memoryless channels[edit]Channel coding theorem for non-stationary memoryless channels[edit]Outline of the proof[edit]See also[edit]References[edit]External links[edit]Overview[edit]Stated by Claude Shannon in 1948, the theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. 
Shannon’s theorem has wide-ranging applications in both communications and data storage, and it is of foundational importance to the modern field of information theory. Shannon only gave an outline of the proof; the first rigorous proof for the discrete case is due to Amiel Feinstein[1] in 1954.

The Shannon theorem states that, for a noisy channel with channel capacity C and information transmitted at a rate R, if R < C there exist codes that allow the probability of error at the receiver to be made arbitrarily small. Conversely, if R > C, an arbitrarily small probability of error is not achievable: every code has a probability of error above some positive minimal level, and this level increases as the rate increases.

Mathematical statement

The basic mathematical model for a communication system is the following:

    W (Message) → [Encoder f_n] → X^n (Encoded sequence) → [Channel p(y|x)] → Y^n (Received sequence) → [Decoder g_n] → Ŵ (Estimated message)

A message W is transmitted through a noisy channel by using encoding and decoding functions. An encoder maps W into a pre-defined sequence of channel symbols of length n. In its most basic model, the channel distorts each of these symbols independently of the others. The output of the channel (the received sequence) is fed into a decoder which maps the sequence into an estimate of the message. In this setting, the probability of error is defined as

    P_e = \Pr\{\hat{W} \neq W\}.

Theorem (Shannon, 1948):

1. For every discrete memoryless channel, the channel capacity, defined in terms of the mutual information I(X;Y) as

    C = \sup_{p_X} I(X;Y)  [3]

has the following property: for any ε > 0 and rate R < C, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm such that the maximal probability of block error is ≤ ε.
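To make the capacity formula concrete, the following short Python sketch computes I(X;Y) for a binary symmetric channel over a grid of input distributions and compares the maximum with the closed-form capacity 1 − H₂(p). The binary symmetric channel, the crossover probability p = 0.11, and all function names here are illustrative assumptions, not part of the original article.

    import numpy as np

    def h2(q):
        # Binary entropy in bits; inputs are clipped away from 0 and 1.
        q = np.clip(q, 1e-12, 1 - 1e-12)
        return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

    def mutual_information_bsc(a, p):
        # I(X;Y) = H(Y) - H(Y|X) for a binary symmetric channel with
        # crossover probability p and input distribution P(X = 1) = a.
        p_y1 = a * (1 - p) + (1 - a) * p   # P(Y = 1)
        return h2(p_y1) - h2(p)            # H(Y|X) = h2(p) for every input

    p = 0.11                                        # assumed crossover probability
    grid = np.linspace(0.0, 1.0, 1001)              # candidate input distributions
    numeric_capacity = max(mutual_information_bsc(a, p) for a in grid)
    closed_form_capacity = 1 - h2(p)                # known BSC capacity
    print(numeric_capacity, closed_form_capacity)   # both are about 0.5 bit per channel use

For this symmetric channel the supremum over input distributions is attained at the uniform input a = 0.5, which is why the coarse grid search already matches the closed-form value.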
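The encoder/channel/decoder model and the definition of P_e can likewise be illustrated by a small Monte Carlo sketch. The repetition code and majority-vote decoder below are assumed for illustration only: a length-n repetition code has rate 1/n, so the sketch demonstrates the model and how P_e is estimated, not the capacity-approaching codes whose existence the theorem asserts.

    import random

    def encode(w, n):
        # Encoder f_n: map the message bit w to a length-n repetition codeword.
        return [w] * n

    def channel(x, p):
        # Binary symmetric channel p(y|x): each symbol is flipped independently
        # with probability p.
        return [bit ^ 1 if random.random() < p else bit for bit in x]

    def decode(y):
        # Decoder g_n: majority vote over the received sequence.
        return 1 if 2 * sum(y) > len(y) else 0

    def estimate_error_probability(n, p, trials=100000):
        # Monte Carlo estimate of P_e = Pr{ W_hat != W } for a random message bit.
        errors = 0
        for _ in range(trials):
            w = random.randint(0, 1)           # message W
            y = channel(encode(w, n), p)       # received sequence Y^n
            if decode(y) != w:                 # estimated message W_hat
                errors += 1
        return errors / trials

    for n in (1, 3, 7, 15):
        print(n, estimate_error_probability(n, p=0.11))
    # P_e shrinks as n grows, but only because the rate 1/n of this toy code
    # shrinks as well; the theorem guarantees vanishing error at fixed rates R < C.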