Iterative Algorithms for Lossy Source Coding PDF Download

Iterative Algorithms for Lossy Source Coding

Author: Venkat Bala Chandar
Publisher:
ISBN:
Category :
Languages : en
Pages : 68

Book Description
This thesis explores the problems of lossy source coding and information embedding. For lossy source coding, we analyze low-density parity-check (LDPC) codes and low-density generator matrix (LDGM) codes for quantization under a Hamming distortion. We prove that LDPC codes can achieve the rate-distortion function. We also show that the variable node degree of any LDGM code must become unbounded for these codes to come arbitrarily close to the rate-distortion bound. For information embedding, we introduce the double-erasure information embedding channel model. We develop capacity-achieving codes for the double-erasure channel model and show that these codes can be efficiently encoded and decoded using belief propagation techniques. We also discuss a generalization of the double-erasure model that shows it is closely related to other models considered in the literature.
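
As a point of reference for the Hamming-distortion setting studied in the thesis, the rate-distortion function of a Bernoulli(1/2) source under Hamming distortion is R(D) = 1 - h(D) for 0 <= D <= 1/2, where h is the binary entropy function. The short sketch below is an illustrative script, not code from the thesis; it simply evaluates this benchmark, which the LDPC and LDGM constructions are measured against.

```python
import numpy as np

def h2(p):
    """Binary entropy function in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def rate_distortion_binary(D):
    """R(D) = 1 - h(D) for a Bernoulli(1/2) source under Hamming distortion."""
    return 1.0 - h2(D) if D < 0.5 else 0.0

for D in (0.05, 0.11, 0.25, 0.45):
    print(f"D = {D:.2f}  ->  R(D) = {rate_distortion_binary(D):.3f} bits/symbol")
```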

New Iterative Inference Algorithms for Source Coding Based on Markov Random Fields

Author: Jose Manuel Fernandez
Publisher:
ISBN:
Category : Markov random fields
Languages : en
Pages : 180

Book Description
The global deployment of wireless communication systems poses significant challenges for system designers, who must accommodate an ever-increasing number of users while simultaneously meeting demands for increased levels of security and privacy. Many of these problems involve aspects of lossy source coding that are yet to be well understood. For instance, the nature and combined effectiveness of sparse graphs and message-passing algorithms in source coding continue to be the subject of debate and active research. This is in stark contrast to the channel coding case, where specific capacity-approaching codes (e.g., Turbo Codes and Low-Density Parity Check Codes) and classical message-passing schemes (e.g., Belief Propagation) are clearly understood, widely accepted, and increasingly in use. Furthermore, the emergence of cavity methods drawn from statistical physics (e.g., Survey Propagation) gave rise to the widespread assumption that the source coding problem could not be solved by simple Belief Propagation-based iterations over Markov Random Fields. This notion is challenged here by the introduction of two novel message-passing algorithms. These two simple schemes, namely Truthiness Propagation and Modified Truthiness Propagation, are developed from modified Bethe free energy approximations (equivalent to log-partition function approximations) and are shown to be closely related to Belief Propagation, thus situating them on firm theoretical ground. The new algorithms exhibit rate-distortion performance near the Shannon limit, even for modest codeword lengths, when combined with both regular and irregular Low-Density Generator Matrix codes. This feature offers a distinct advantage not seen with other message-passing schemes. Furthermore, their complexity is manageable, since the decimation steps prevalent in other recently proposed techniques are not required. Finally, these modified instantiations of Belief Propagation are applied to a number of applications relevant to the codeword quantization problem (i.e., the general decoding problem) via simple examples in dirty paper coding, data hiding, secrecy coding, and wireless sensor networks.
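
For readers who want a concrete picture of the message-passing machinery this work builds on, the following minimal sketch runs standard sum-product belief propagation on a three-node chain Markov random field with binary variables. The potentials are hypothetical toy values, and the code is generic BP, not the Truthiness Propagation algorithms introduced in the thesis.

```python
import numpy as np

# Sum-product belief propagation on a 3-node chain MRF (x1 - x2 - x3).
# Toy unary potentials and one shared symmetric edge potential (hypothetical values).
unary = [np.array([0.6, 0.4]),
         np.array([0.5, 0.5]),
         np.array([0.3, 0.7])]
pairwise = np.array([[0.9, 0.1],
                     [0.1, 0.9]])

# Forward messages m_{1->2}, m_{2->3}
m12 = pairwise.T @ unary[0]
m23 = pairwise.T @ (unary[1] * m12)
# Backward messages m_{3->2}, m_{2->1}
m32 = pairwise @ unary[2]
m21 = pairwise @ (unary[1] * m32)

# Beliefs are products of local potentials and incoming messages, normalized.
beliefs = [unary[0] * m21, unary[1] * m12 * m32, unary[2] * m23]
for i, b in enumerate(beliefs, start=1):
    print(f"P(x{i}) = {b / b.sum()}")
```

On a tree-structured graph such as this chain, BP computes the exact marginals; on loopy graphs, such as those induced by LDGM codes, it becomes an approximate inference step whose fixed points correspond to stationary points of the Bethe free energy.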

Distributed Source Coding

Author: Shuang Wang
Publisher: John Wiley & Sons
ISBN: 1118705971
Category : Science
Languages : en
Pages : 384

Book Description
Distributed source coding (DSC) is one of the key enablers of efficient cooperative communication. The potential applications range from wireless sensor networks, ad-hoc networks, and surveillance networks to robust low-complexity video coding, stereo/multiview video coding, HDTV, hyperspectral and multispectral imaging, and biometrics. The book is divided into three sections: theory, algorithms, and applications. Part one covers the background of information theory with an emphasis on DSC; part two discusses designs of algorithmic solutions for DSC problems, covering the three most important DSC problems: Slepian-Wolf, Wyner-Ziv, and multiterminal (MT) source coding; and part three is dedicated to a variety of potential DSC applications. Key features: clear explanation of distributed source coding theory and algorithms, including both lossless and lossy designs; rich applications of distributed source coding, covering multimedia communication and data security; and self-contained content for beginners, from basic information theory to practical code implementation. The book provides the fundamental knowledge engineers and computer scientists need to access the topic of distributed source coding. It is also suitable for senior undergraduate and first-year graduate students in electrical engineering, computer engineering, signal processing, image/video processing, and information theory and communications.
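
To make the lossless side of the theory concrete, the sketch below computes the corner points of the Slepian-Wolf rate region for a small joint distribution p(x, y); the joint pmf is a hypothetical illustration, not an example from the book.

```python
import numpy as np

def entropy(p):
    """Entropy in bits of a pmf given as a numpy array."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical joint pmf p(x, y) of two correlated binary sources.
p_xy = np.array([[0.40, 0.10],
                 [0.05, 0.45]])

H_xy = entropy(p_xy)                    # H(X, Y)
H_x = entropy(p_xy.sum(axis=1))         # H(X)
H_y = entropy(p_xy.sum(axis=0))         # H(Y)

print(f"Rx >= H(X|Y) = {H_xy - H_y:.3f} bits/symbol")
print(f"Ry >= H(Y|X) = {H_xy - H_x:.3f} bits/symbol")
print(f"Rx + Ry >= H(X,Y) = {H_xy:.3f} bits/symbol")
```

By the Slepian-Wolf theorem, any rate pair satisfying these three constraints suffices for lossless distributed compression of the two sources.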

Interactive Source Coding for Function Computation in Networks

Author: Nan Ma
Publisher:
ISBN:
Category :
Languages : en
Pages : 320

Book Description
Abstract: Today, coding blocklength, rate, signal-to-noise ratio, frequency, and network size are well-recognized and well-studied resources for communication and computation in information theory. A relatively less recognized and understood resource is interaction, that is, the number of rounds of two-way information exchange. This thesis undertakes a comprehensive mathematical analysis of the benefits of interaction for distributed computation in source networks, using the information-theoretic framework of distributed block source coding. The ultimate limits of computation efficiency for two-terminal and certain types of collocated networks are characterized in terms of the statistical dependencies between the information sources and the structure of the desired functions. A blueprint is developed for designing a new family of distributed source codes for general networks, in which interaction protocols can harness the structure of the functions, the network topology, and the statistical dependencies to maximize computation efficiency. A novel viewpoint is introduced in which the minimum sum-rate is viewed as a functional of the joint source distribution and the distortion levels. Certain convex-geometric properties of this functional are established and used to develop a new type of blocklength-free single-letter characterization of the ultimate limit of interactive computation, involving potentially an infinite number of infinitesimal-rate messages. The traditional method for single-letter characterization is shown to be inadequate. The new characterization is used to derive closed-form analytic expressions for the infinite-message minimum sum-rates in specific examples, and an efficient iterative algorithm for numerically evaluating any finite-message minimum sum-rate in the general case. It is also used to construct the first examples demonstrating that, for lossy source reproduction, two messages can strictly improve the one-message Wyner-Ziv rate-distortion function, settling an open question from a 1985 paper. For computing symmetric functions of binary sources in collocated networks, a new lower bound on the minimum sum-rate is derived and shown to be order-wise tight, while the cut-set bound is shown to be order-wise loose. Striking examples are constructed to highlight the benefits of interaction: in a two-terminal network, a single backward message can lead to an arbitrarily large gain in the sum-rate, and in a star network, interaction can change the sum-rate by an order of magnitude.
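
As a small illustration of the kind of converse argument involved, the sketch below evaluates the simple cut-set-style lower bound H(f(X, Y) | Y) on the total rate that must flow toward a terminal that observes Y and has to recover f(X, Y) losslessly. The joint pmf and the XOR function are hypothetical choices, not the thesis's examples; the bound is only a baseline against which the sharper interactive characterizations of the thesis can be compared.

```python
import numpy as np
from collections import defaultdict

# Hypothetical joint pmf p(x, y) and function f to be computed at terminal B.
p_xy = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
f = lambda x, y: x ^ y  # XOR

def conditional_entropy_of_f(p_xy, f):
    """H(f(X,Y) | Y) in bits, computed as H(f, Y) - H(Y)."""
    p_fy, p_y = defaultdict(float), defaultdict(float)
    for (x, y), p in p_xy.items():
        p_fy[(f(x, y), y)] += p
        p_y[y] += p
    H = lambda d: -sum(p * np.log2(p) for p in d.values() if p > 0)
    return H(p_fy) - H(p_y)

print(f"Cut-set-style rate lower bound: {conditional_entropy_of_f(p_xy, f):.3f} bits/symbol")
```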

A First Course in Information Theory

Author: Raymond W. Yeung
Publisher: Springer Science & Business Media
ISBN: 1441986081
Category : Technology & Engineering
Languages : en
Pages : 426

Book Description
This book provides an up-to-date introduction to information theory. In addition to the classical topics discussed, it provides the first comprehensive treatment of the theory of I-Measure, network coding theory, Shannon-type and non-Shannon-type information inequalities, and a relation between entropy and group theory. ITIP, a software package for proving information inequalities, is also included. With a large number of examples, illustrations, and original problems, this book is excellent as a textbook or reference book for a senior- or graduate-level course on the subject, as well as a reference for researchers in related fields.
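
As a toy counterpart to what ITIP does symbolically, the sketch below checks the elementary Shannon-type inequality I(X;Y|Z) >= 0 numerically on a randomly drawn joint distribution; it is only an illustration, not part of the ITIP package described in the book.

```python
import numpy as np

def entropy(p):
    """Entropy in bits of a joint pmf given as a numpy array of any shape."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))
p /= p.sum()                      # random joint pmf p(x, y, z)

# I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z)
mi = (entropy(p.sum(axis=1)) + entropy(p.sum(axis=0))
      - entropy(p) - entropy(p.sum(axis=(0, 1))))
print(f"I(X;Y|Z) = {mi:.4f}  (nonnegative for every joint distribution)")
```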

Rate Distortion Theory for Causal Video Coding

Author: Lin Zheng
Publisher:
ISBN:
Category :
Languages : en
Pages : 156

Book Description
Due to the sheer volume of data involved, video coding is an important application of lossy source coding and has received wide industrial interest and support, as evidenced by the development and success of a series of video coding standards. All MPEG-series and H-series video coding standards proposed so far are based on a paradigm called predictive video coding, in which video source frames X_i, i = 1, 2, ..., N, are encoded frame by frame, and the encoder and decoder for each frame X_i enlist help only from all previously encoded frames S_j, j = 1, 2, ..., i-1. In this thesis, we look beyond all existing and proposed video coding standards and introduce a new coding paradigm called causal video coding, in which the encoder for each frame X_i can use all previous original frames X_j, j = 1, 2, ..., i-1, and all previously encoded frames S_j, while the corresponding decoder can use only all previously encoded frames. We study, compare, and design causal video coding from an information-theoretic point of view. Let R*_c(D_1, ..., D_N) (respectively, R*_p(D_1, ..., D_N)) denote the minimum total rate required to achieve a given distortion level D_1, ..., D_N > 0 in causal video coding (respectively, predictive video coding). A novel computation approach is proposed to analytically characterize, numerically compute, and compare the minimum total rate R*_c(D_1, ..., D_N) of causal video coding required to achieve a given distortion (quality) level D_1, ..., D_N > 0. Specifically, we first show that for jointly stationary and ergodic sources X_1, ..., X_N, R*_c(D_1, ..., D_N) is equal to the infimum of the n-th order total rate-distortion function R_{c,n}(D_1, ..., D_N) over all n, where R_{c,n}(D_1, ..., D_N) itself is given by the minimum of an information quantity over a set of auxiliary random variables. We then present an iterative algorithm for computing R_{c,n}(D_1, ..., D_N) and demonstrate the convergence of the algorithm to the global minimum. The global convergence of the algorithm further enables us not only to establish a single-letter characterization of R*_c(D_1, ..., D_N) in a novel way when the N sources form an independent and identically distributed (IID) vector source, but also to demonstrate a somewhat surprising result (dubbed the more-and-less coding theorem): under some conditions on the source frames and distortion, the more frames that need to be encoded and transmitted, the less data actually has to be sent after encoding. With the help of the algorithm, it is also shown by example that R*_c(D_1, ..., D_N) is in general much smaller than the total rate offered by the traditional greedy coding method, in which each frame is encoded in a locally optimal manner based on all information available to its encoder. As a by-product, an extended Markov lemma is established for correlated ergodic sources. From an information-theoretic point of view, it is interesting to compare causal video coding with predictive video coding, on which all existing video coding standards proposed so far are based. In this thesis, fixing N = 3, we first derive a single-letter characterization of R*_p(D_1, D_2, D_3) for an IID vector source (X_1, X_2, X_3) in which X_1 and X_2 are independent, and then demonstrate the existence of such X_1, X_2, X_3 for which R*_p(D_1, D_2, D_3) > R*_c(D_1, D_2, D_3) under some conditions on the source frames and distortion. This result makes causal video coding an attractive framework for future video coding systems and standards.
The design of causal video coding is also considered in the thesis from an information-theoretic perspective by modeling each frame as a stationary information source. We first put forth a concept called causal scalar quantization and then propose an algorithm for designing optimal fixed-rate causal scalar quantizers for causal video coding that minimize the total distortion over all sources. Simulation results show that, in comparison with fixed-rate predictive scalar quantization, fixed-rate causal scalar quantization offers up to a 16% quality improvement (distortion reduction).
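
The iterative computation described above is in the spirit of the classical Blahut-Arimoto algorithm for a single source. As background, a minimal sketch of that classical single-source iteration is given below; it is generic textbook material, not the thesis's multi-frame algorithm, and the slope parameter beta and the test distribution are illustrative choices.

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=200):
    """Classical single-source Blahut-Arimoto: returns one (D, R) point on the
    rate-distortion curve for slope parameter beta > 0.
    p_x[i] is the source pmf, dist[i, j] is the distortion d(x_i, xhat_j)."""
    q = np.full(dist.shape[1], 1.0 / dist.shape[1])    # reproduction pmf q(xhat)
    for _ in range(n_iter):
        w = q * np.exp(-beta * dist)                   # unnormalized p(xhat | x)
        cond = w / w.sum(axis=1, keepdims=True)        # normalize each row
        q = p_x @ cond                                 # re-optimize q(xhat)
    D = float(np.sum(p_x[:, None] * cond * dist))
    R = float(np.sum(p_x[:, None] * cond * np.log2(cond / q[None, :])))
    return D, R

# Bernoulli(1/2) source with Hamming distortion; R should match 1 - h(D).
p_x = np.array([0.5, 0.5])
dist = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
print(blahut_arimoto(p_x, dist, beta=3.0))
```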

Low Complexity Iterative Algorithms in Channel Coding and Compressed Sensing

Author: Ludovic Danjean
Publisher:
ISBN:
Category :
Languages : en
Pages : 153

Book Description
Iterative algorithms are now widely used in all areas of signal processing and digital communications. In modern communication systems, iterative algorithms are notably used for decoding low-density parity-check (LDPC) codes, a popular class of error-correction codes known for exceptional error-rate performance under iterative decoding. In the more recent field of compressed sensing, iterative algorithms are used as reconstruction methods to recover a sparse signal from a linear set of measurements. This work primarily deals with the development of low-complexity iterative algorithms for these two fields: the design of low-complexity decoding algorithms for LDPC codes, and the development and analysis of a low-complexity reconstruction algorithm for compressed sensing. In the first part of this dissertation, we focus on decoding algorithms for LDPC codes. It is now well known that, in spite of their exceptional performance, LDPC codes suffer from an error-floor phenomenon. This phenomenon originates from failures of traditional iterative decoders, such as belief propagation (BP), on certain low-noise configurations. Recently, a novel class of decoders, called finite alphabet iterative decoders (FAIDs), was proposed with the capability of surpassing BP in the error-floor region at much lower complexity. We show that numerous FAIDs can be designed, and among them only a few have the ability to surpass traditional decoders in the error-floor region. In this work, we focus on the selection of good FAIDs for column-weight-three codes over the binary symmetric channel. Traditional methods for decoder selection use asymptotic techniques such as density evolution, but the resulting decoders do not guarantee good performance for finite-length codes, especially in the error-floor region. Instead, we propose a methodology to identify FAIDs with good error-rate performance in the error floor. This methodology relies on the knowledge of potentially harmful topologies that could be present in a code, and the selection method uses the concept of noisy trapping sets. Numerical results show that FAIDs selected with our methodology outperform BP in the error floor on a wide range of codes. Moreover, first results on column-weight-four codes demonstrate the potential of such decoders on codes that are more commonly used in practice, for example in storage systems. In the second part of this dissertation, we address iterative reconstruction algorithms for compressed sensing. This field has attracted a lot of attention since Donoho's seminal work, due to the promise of sampling a sparse signal with fewer samples than the Nyquist theorem would suggest. Iterative algorithms have been proposed for compressed sensing in order to tackle the complexity of the optimal reconstruction methods, which notably rely on linear programming. In this work, we modify and analyze a low-complexity reconstruction algorithm that we refer to as the interval-passing algorithm (IPA), which uses sparse matrices as measurement matrices. Similar to what has been done for decoding algorithms in coding theory, we analyze the failures of the IPA and link them to the stopping sets of the binary representation of the sparse measurement matrices used. The performance of the IPA makes it a good trade-off between complex l1-minimization reconstruction and very simple verification decoding. The measurement process also has lower complexity, since we use sparse measurement matrices. Comparisons with another type of message-passing algorithm, called approximate message passing, show that the IPA can achieve superior performance with lower complexity. We also demonstrate that the IPA has practical applications, especially in spectroscopy.
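
For readers unfamiliar with hard-decision iterative decoding, the sketch below implements the classical Gallager-style bit-flipping decoder for a linear code over the binary symmetric channel. It is a generic low-complexity baseline, not a FAID and not the IPA, and the (7,4) Hamming parity-check matrix in the demo is only an illustrative stand-in for a real LDPC matrix.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit-flipping decoding over the BSC.
    H: binary parity-check matrix (m x n), y: received hard-decision bits."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x, True                           # all parity checks satisfied
        unsat = syndrome.dot(H)                      # per-bit count of failed checks
        x = np.where(unsat == unsat.max(), x ^ 1, x) # flip the most suspect bits
    return x, False

# Demo: (7,4) Hamming code, all-zero codeword with a single channel error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.zeros(7, dtype=int)
y[3] = 1                                             # channel error
print(bit_flip_decode(H, y))
```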

Iterative Algorithms for Achieving Near-ML Decoding Performance in Concatenated Coding Systems

Author: Dan Zhang
Publisher:
ISBN:
Category :
Languages : en
Pages : 235

Book Description


Special Issue on Codes and Graphs and Iterative Algorithms

Author: Brendan J. Frey
Publisher:
ISBN:
Category : Ciphers
Languages : en
Pages : 361

Book Description