I had an idea for file compression. Basically, you treat the file as one big number, then apply math operations to that number until you get a compact form.
For instance: 340282366920938463463374607431768211456
Becomes: 2^128
For this example it looks great, but of course I hand-selected the number, and most numbers don't have a nice short list of factors. And of course it all breaks down if the number you are trying to compress is prime.
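Here's a rough sketch of what I mean, in Python. The helper names are just mine for illustration, and trial division by a handful of small primes is the crudest thing that could possibly work:

# A rough sketch of the idea (names and structure are illustrative only):
# read a file as one big integer, pull out small prime-power factors, and see
# whether what's left is short enough for the whole thing to be a win.

def file_to_int(path):
    """Interpret the raw bytes of a file as a single big integer."""
    with open(path, "rb") as f:
        return int.from_bytes(f.read(), "big")

def factor_small_primes(n, primes=(2, 3, 5, 7, 11, 13, 17, 19, 23)):
    """Divide out a fixed set of small primes; return ({prime: exponent}, leftover)."""
    exponents = {}
    for p in primes:
        while n % p == 0:
            exponents[p] = exponents.get(p, 0) + 1
            n //= p
    return exponents, n

if __name__ == "__main__":
    n = 340282366920938463463374607431768211456  # the hand-picked example above
    exps, leftover = factor_small_primes(n)
    print(" * ".join(f"{p}^{e}" for p, e in exps.items()) or "1")
    print("leftover cofactor:", leftover)
    # For most real files the leftover cofactor is nearly as big as n itself,
    # which is the "no nice short list of factors" problem above.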
So this is all well and good, but my friend pointed out this proof to me, which summarizes nicely why, for files of any given length, most of them can't be compressed at all and a lossless compressor will make many of them larger. The counting argument is simple: there are 2^n possible files of n bits, but only 2^n - 1 possible outputs shorter than n bits, so no scheme can shrink them all. The compression algorithms we use for most computer files exploit the regularity of the data: text and music are highly redundant in terms of information, but if you have something generating random bytes, most of that data will not be compressible.
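You can see this for yourself with nothing but Python's standard zlib module. It's a small, hand-wavy experiment, but it makes the point:

import os
import zlib

# Highly redundant text compresses well; random bytes typically come back
# slightly larger than they went in, because of the compressor's own overhead.
text = b"the quick brown fox jumps over the lazy dog " * 100
noise = os.urandom(len(text))

print("text:  ", len(text), "->", len(zlib.compress(text)))
print("random:", len(noise), "->", len(zlib.compress(noise)))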
On further searching I found this, which is a generalization of my basic idea. At least the idea was good enough to be disproved, and I learned a lot. :)