
Deep Generative Model and Lossless Compression:


Basic Information:

Shannon's source coding theorem:

The source coding theorem states that, in the limit as the length of a stream of independent and identically distributed (i.i.d.) data approaches infinity, it is impossible to compress the data such that the code rate (average number of bits per symbol) is less than the Shannon entropy of the source without losing information. Conversely, the code rate can be made arbitrarily close to the Shannon entropy with a negligible probability of loss.

Source coding is a mapping from (a sequence of) symbols of an information source to a sequence of alphabet symbols (usually bits) such that the source symbols can be exactly recovered from the encoded bits (lossless source coding) or recovered within some distortion (lossy source coding). This is the concept behind data compression.

In information theory, the source coding theorem informally states that N i.i.d. random variables, each with entropy H(X), can be compressed into slightly more than N H(X) bits with negligible risk of information loss as N → ∞; but conversely, if they are compressed into fewer than N H(X) bits, it is virtually certain that information will be lost.
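As a quick illustration of the entropy bound, here is a minimal Python sketch; the source distribution is a toy example, not taken from the post:

import numpy as np

def shannon_entropy(p):
    """Entropy in bits per symbol of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0 * log2(0) = 0
    return -np.sum(p * np.log2(p))

# Toy 4-symbol source: no lossless code can average fewer bits per symbol
# than H(X) as the sequence length N grows.
p = [0.5, 0.25, 0.125, 0.125]
print(f"H(X) = {shannon_entropy(p):.3f} bits/symbol")   # -> 1.750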

Generative Model:

To begin, we must understand what a generative model is. To train a generative model, we first collect a vast amount of data in some domain (for example, millions of images, sentences, or sounds) and then train a model to generate data that looks like it.

Deep Generative Model and Lossless Compression:

The basis of AI data compression, according to Shannon's source coding theorem, is to build an explicit generative model that describes the likelihood of the data, and then to design an entropy coder matched to that explicit generative model.

Currently, explicit generative models are classified into three types:

·  Autoregressive models
·  (Variational) auto-encoder models
·  Flow models

All of the above can be paired with specialized lossless compression methods; the recipe they share is sketched below.
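All three families follow the same recipe: an explicit model q(x) assigns each datum x a probability, and an entropy coder (e.g., arithmetic coding) can then store x in roughly -log2 q(x) bits. A minimal Python sketch, with a made-up toy model q:

import numpy as np

q = {"a": 0.7, "b": 0.2, "c": 0.1}   # hypothetical model over three symbols
data = "aababac"

# Ideal code length under the model: the sum of -log2 q(x) over the data.
# A better model assigns the data higher probability, hence fewer bits.
ideal_bits = sum(-np.log2(q[s]) for s in data)
print(f"{ideal_bits:.2f} bits for {len(data)} symbols")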

Autoregressive models:

Autoregressive models train a network that models the conditional distribution of each individual pixel given the previous pixels (those to the left and above). This is similar to feeding the image's pixels into a char-rnn, but instead of a 1D sequence of characters, the RNNs run horizontally and vertically over the image.

Because the probability distribution of each symbol in an autoregressive model is determined by the previously processed symbols, the current symbol can be compressed or decompressed using entropy coding, as the sketch below shows.
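Here is a minimal sketch of that idea, with a hand-made conditional table standing in for a trained network; in a real system the conditionals would come from a PixelRNN/PixelCNN-style model and be fed symbol by symbol into an arithmetic coder, so we only sum the ideal code lengths:

import numpy as np

# Toy conditional model p(current | previous) for a binary source.
cond = {0: {0: 0.9, 1: 0.1},
        1: {0: 0.3, 1: 0.7}}

def ideal_length(seq, p_first=0.5):
    bits = -np.log2(p_first)               # first symbol uses a fixed prior
    for prev, cur in zip(seq, seq[1:]):
        bits += -np.log2(cond[prev][cur])  # -log2 p(x_t | x_{t-1})
    return bits

seq = [0, 0, 0, 1, 1, 1, 0, 0]
print(f"{ideal_length(seq):.2f} bits for {len(seq)} symbols")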

 

Auto-encoder models:

An auto-encoder model consists of an encoder and a decoder. The compression procedure stores the latent (hidden) variables together with the residual between the decoder's reconstruction and the actual data. Using the stored latent variables and residual, the decoder can reconstruct the original data exactly.

VAEs allow us to formalize this problem in the framework of probabilistic graphical models, where the goal is to maximize a lower bound on the log-likelihood of the data.
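A minimal PyTorch sketch of that lower bound (the ELBO), assuming a single-layer encoder and decoder and a Bernoulli likelihood; the sizes and names here are illustrative, not taken from the post:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)   # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def elbo(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        logits = self.dec(z)
        # E_q[log p(x|z)]: Bernoulli log-likelihood of the data
        rec = -F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
        # KL(q(z|x) || p(z)) against a standard normal prior, in closed form
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec - kl                          # maximize this lower bound

x = torch.rand(8, 784).round()                   # fake binary data
print(TinyVAE().elbo(x))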


Flow models:

In a flow model, a neural network is used to construct a bijection between the input data and the latent variables. The compression procedure converts the input data into latents, which are then compressed under the prior distribution; decoding reverses the process. The diagram below summarizes the three types of explicit generative models and their compression algorithms.

A flow model is made up of several flow layers, each of which defines an invertible transformation from input to output. By stacking the flow layers, a complex transformation between the input data and the latent variables can be realized (see the sketch after the figure).


[Figure: the three types of explicit generative models and their compression algorithms]
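Below is a minimal PyTorch sketch of one invertible flow layer and the change-of-variables formula log p(x) = log p(z) + log|det dz/dx| that gives exact likelihoods (and hence entropy coding of z under the prior). The elementwise affine layer is a toy stand-in for the coupling layers used in practice (e.g., RealNVP); all names here are ours:

import torch

class AffineFlow(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.log_scale = torch.nn.Parameter(torch.zeros(dim))
        self.shift = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, x):        # x -> z, plus log|det Jacobian| per example
        z = x * self.log_scale.exp() + self.shift
        return z, self.log_scale.sum().expand(x.shape[0])

    def inverse(self, z):        # exact inverse: the decompression direction
        return (z - self.shift) * (-self.log_scale).exp()

flow = AffineFlow(4)
x = torch.randn(8, 4)
z, logdet = flow(x)
prior = torch.distributions.Normal(0.0, 1.0)
log_px = prior.log_prob(z).sum(-1) + logdet      # exact log-likelihood of x
assert torch.allclose(flow.inverse(z), x, atol=1e-5)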

 

Final thoughts:

All of these approaches have advantages and disadvantages. For example, Variational Autoencoders let us perform both learning and efficient Bayesian inference in sophisticated probabilistic graphical models with latent variables; their generated samples, however, tend to be a little blurry. GANs currently produce the sharpest images, but they are harder to optimize due to their unstable training dynamics. Autoregressive models, for their part, are inefficient during sampling and do not easily provide simple low-dimensional codes for images. All of these models remain active areas of research.

