For information theory, I've always thought of entropy as follows:
"If you had a really smart compression algorithm, how many bits would it take to accurately represent this file?"
i.e., highly repetitive inputs compress well because they carry little entropy per bit. Modern compression algorithms are good enough on most data to serve as a reasonable approximation of the true entropy.
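A quick way to see this intuition in action is to compare an order-0 Shannon entropy estimate (based only on byte frequencies) against what an off-the-shelf compressor actually achieves. This is a minimal sketch, not a rigorous entropy measurement; the sample inputs are arbitrary choices for illustration:

```python
import math
import os
import zlib
from collections import Counter

def shannon_bits(data: bytes) -> float:
    """Total bits implied by the empirical byte-frequency distribution
    (an order-0 estimate: it ignores ordering and repetition structure)."""
    counts = Counter(data)
    n = len(data)
    per_byte = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return per_byte * n

repetitive = b"abab" * 1000      # highly repetitive: little entropy per bit
random_data = os.urandom(4000)   # incompressible: near-maximal entropy

for name, data in [("repetitive", repetitive), ("random", random_data)]:
    compressed_bits = len(zlib.compress(data, 9)) * 8
    print(f"{name}: shannon≈{shannon_bits(data):.0f} bits, "
          f"zlib={compressed_bits} bits, raw={len(data) * 8} bits")
```

The repetitive input compresses far below even its byte-frequency entropy, because the compressor exploits structure (repeated substrings) that the order-0 estimate can't see; the random input barely compresses at all. That gap is why compressed size is a decent practical proxy for entropy.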