Information Theory: Channel Coding, Channel Capacity.

Everything you’ve come to expect.

Mindless, terror-filled panic rushes through me as I open the slides for my exam and can barely understand anything. About a week ago, I was pretty good at most things related to the subject. Yes, Information Theory and Coding Techniques. It was supposed to be my subject, and here I was, sitting in probably one of the most picturesque cafés I’ve come across, trembling with the fear of failing.
So, I decided to go through all of it again, here. 
What is channel coding? That’s the big question, isn’t it, though we don’t really have time to go philosophical and think about what life really means. Channel coding takes the compressed code and adds a bunch of 1s and 0s in a systematic pattern to make the code more reliable. The interesting thing about channel coding is that it isn’t trying to protect your code from attackers. The intention here is completely different. While adding this redundancy, channel coding techniques make sure that even if certain bits get damaged or changed during transmission over a channel, you are able to recover the complete message without communicating back with the transmitter.
Thus, this process needs to be deterministic. It can also be paraphrased as “reliable communication over an unreliable, noisy channel”.
You have a message of length k, and so when transmitting you add n-k bits to make a whole codeword of length n.
Definition: code rate = k/n. For example, the (7, 4) code we build below has rate 4/7 ≈ 0.57.
Here we have already decided to transmit data over a particular channel. You would like to know how different kinds of channels would end up affecting your message. This can be done by measuring channel capacity.
Channel capacity essentially tells you the maximum amount of information that can reliably move over a particular channel.

Channel | Capacity
Noiseless channel | \log_2(m)
Lossless channel | \log_2(m)
Useless channel | 0
Deterministic channel | \log_2(n)
Binary symmetric channel | 1 + p\log_2(p) + (1-p)\log_2(1-p)

(Here m is the number of input symbols, n the number of output symbols, and p the crossover probability of the binary symmetric channel.)
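To see how that last formula behaves, here is a minimal Python sketch (the helper name bsc_capacity is my own, not from the slides):

import math

def bsc_capacity(p):
    # Binary symmetric channel: C = 1 + p*log2(p) + (1-p)*log2(1-p), bits/use
    if p in (0.0, 1.0):
        return 1.0  # take 0*log2(0) = 0 by convention
    return 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

for p in (0.0, 0.1, 0.5):
    print(p, round(bsc_capacity(p), 4))
# p = 0.0 -> 1.0    (behaves like the noiseless channel)
# p = 0.1 -> 0.531
# p = 0.5 -> 0.0    (behaves like the useless channel)

Notice how the two extremes of the crossover probability recover the first and third rows of the table.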

We had defined information content and entropy last time as I(x_i) = -\log_b P(x_i) and H(X) = -\sum_{i=1}^{m} P(x_i)\log_2 P(x_i).
So, let’s suppose we have a discrete memoryless channel with source entropy H(X) and channel capacity C. A symbol is emitted every T_s seconds and the channel is used every T_c seconds; then, according to the channel coding theorem,
\frac{H(X)}{T_s}\le\frac{C}{T_c}
must hold for reliable transmission to be possible, i.e. for a good code to exist.
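To make the inequality concrete, a toy check in Python (a minimal sketch; the numbers are made up for illustration, not from the slides):

# Toy check of the channel coding theorem condition
H = 0.8       # source entropy, bits per symbol (made-up value)
Ts = 1e-3     # a symbol every 1 ms -> source demands H/Ts = 800 bits/s
C = 0.5       # channel capacity, bits per channel use (made-up value)
Tc = 0.5e-3   # a channel use every 0.5 ms -> channel offers C/Tc = 1000 bits/s

print(H / Ts <= C / Tc)   # True -> a good code exists for this pair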

  • There is also the information capacity theorem: it establishes Shannon’s channel capacity for such a communication link, a bound on the maximum amount of error-free information per unit time that can be transmitted with a specified bandwidth in the presence of noise,

I = B\log_2\left(1+\frac{S}{N}\right), where B is the channel bandwidth in Hz and S/N is the signal-to-noise ratio.
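As a quick worked example (a sketch; the 3 kHz / 30 dB numbers are the classic telephone-line textbook example, not from these slides):

import math

def information_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley: I = B * log2(1 + S/N), in bits per second
    return bandwidth_hz * math.log2(1 + snr_linear)

# ~3 kHz bandwidth, 30 dB SNR (S/N = 10**(30/10) = 1000)
print(information_capacity(3000, 1000))  # ~29,900 bits/s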

Too much about channels; let’s move to the codes that we are all here for. A block code is a code where all codewords have the same length. The message is split into blocks of length k and, as seen before, we make each one a codeword of length n by adding n-k elements of redundancy.
So, let’s say we created a coding scheme and take any two codewords in it, a and b. If a + b (bitwise XOR) is also in the coding scheme, for every such pair, then the code is linear.
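A brute-force way to convince yourself a small code is linear (a sketch of my own; codewords as bit tuples, + meaning bitwise XOR):

from itertools import product

def is_linear(code):
    # Linear iff the XOR of every pair of codewords is itself a codeword
    # (a word XORed with itself gives all-zeros, so 0 must be in the code too)
    code = {tuple(c) for c in code}
    return all(
        tuple(x ^ y for x, y in zip(a, b)) in code
        for a, b in product(code, repeat=2)
    )

# The length-3 even-parity code is linear:
print(is_linear([(0,0,0), (0,1,1), (1,0,1), (1,1,0)]))  # True
# Drop the all-zeros word and closure fails:
print(is_linear([(0,1,1), (1,0,1), (1,1,0)]))           # False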
This is the point where I must warn you that it might get a little confusing, because I was really confused for hours. Though I might have to admit that I wasn’t looking closely enough.
We currently know what the channel capacity is, and we know we want to use an (n, k) block code. How do we go about it now? There are two kinds of coding: systematic and non-systematic. In a systematic code, we can visually see where the message bits are and where the parity bits are. Non-systematic is… well, not that systematic. I like to look at it this way: systematic is just sticking a sticky note at the end of the message for verification, a suffix or prefix, whereas non-systematic is amalgamation. Non-systematic codes completely change the message in a structured way.
Let’s go for a (7, 4) systematic one. How did I get those numbers? With m = 3 parity bits, a Hamming code is (2^m-1, 2^m-1-m, 3) = (7, 4, 3).
Your parity bits go at the positions that are powers of 2, i.e. positions 1, 2, 4, and the rest are data bits: positions 3, 5, 6, 7.
So for generating the parity bits you simply XOR the data bits, like so (each bit pattern shows which of the data positions 3, 5, 6, 7 are involved):
m1 = d3 + d5 + d6 = 1 1 1 0
m2 = d3 + d5 + d7 = 1 1 0 1
m3 = d3 + d6 + d7 = 1 0 1 1
so P = [m1 m2 m3], which looks something like \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}
and your codeword will be C = D \cdot [I_4 | P]:
C = D \cdot \begin{bmatrix} 1&0&0&0&|&1&1&1 \\ 0&1&0&0&|&1&1&0 \\ 0&0&1&0&|&1&0&1 \\ 0&0&0&1&|&0&1&1 \end{bmatrix}
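Putting that generator matrix to work, a small encoder sketch in Python (using the exact G above; all arithmetic is mod 2):

import numpy as np

# G = [I_4 | P], exactly as written above
G = np.array([
    [1, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 1, 0],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 0, 1, 1],
])

def encode(data_bits):
    # Systematic (7, 4) encoding: codeword = D . G (mod 2)
    return np.array(data_bits) @ G % 2

print(encode([1, 0, 1, 1]))  # -> [1 0 1 1 0 0 1]

Because the code is systematic, the message [1, 0, 1, 1] survives untouched at the front, with the three parity bits tacked on at the end.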

[I_k | P] is called the generator matrix, G,
and H is the parity check matrix, which is basically [P^T | I_{n-k}] (here, [P^T | I_3]). Don’t confuse this H with entropy; it’s just the conventional symbol for the parity check matrix.
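To close the loop, here is the parity check matrix for the same code and a syndrome check (a sketch; a zero syndrome means the received word is a valid codeword):

import numpy as np

P = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
H = np.hstack([P.T, np.eye(3, dtype=int)])   # H = [P^T | I_3]

def syndrome(received):
    # s = r . H^T (mod 2); s == 0 <=> r is a valid codeword
    return np.array(received) @ H.T % 2

codeword = [1, 0, 1, 1, 0, 0, 1]   # output of the encoder above
print(syndrome(codeword))           # [0 0 0] -> no error detected
corrupted = list(codeword)
corrupted[2] ^= 1                   # flip one bit in transit
print(syndrome(corrupted))          # [1 0 1] -> nonzero, error detected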

Next up, BCH (Bose-Chaudhuri-Hocquenghem) codes: https://megskindofnotes.wordpress.com/2019/09/24/bch-code-bose-chaudhuri-hocquenghem-codes/
