I am looking for some kind of sparse bin compression/encoding technique for histogram data.
Here is the problem:
Say for example I have 1024 input values which have to be categorized into 32 bins. Each input value has a range of 0 to 255, so each bin covers 8 consecutive values: if the input value is in the range [0, 7], bin 1 is incremented by 1; if it is in [8, 15], bin 2 is incremented; and so on, up to bin 32, which is incremented if the input value is in [248, 255]. At the end, if I add up all the bin counts, the total is 1024 (as expected). But the worst case count for a single bin is also 1024, since all the input values might fall in the same range.
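The binning step above can be sketched as follows (a minimal illustration; the function name is mine, and the width-8 bins follow from 32 equal bins over [0, 255]):

```python
def histogram(values, num_bins=32, value_range=256):
    """Count 8-bit input values into num_bins equal-width bins."""
    bin_width = value_range // num_bins  # 256 / 32 = 8 values per bin
    bins = [0] * num_bins
    for v in values:
        bins[v // bin_width] += 1  # e.g. 0..7 -> bin 0, 248..255 -> bin 31
    return bins

# One value per possible input: every bin ends up with exactly 8 counts,
# and the counts always sum to the number of inputs.
counts = histogram(list(range(256)))
```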
If I am implementing this in hardware, I need 10 bits for each bin and 32 bins ==> 320 bits. This only grows if, instead of 2^10 (i.e. 1024) input values, I have 2^20 input values (then each counter needs 20 bits, for 640 bits total).
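Because the 32 counts are non-negative and sum to the number of inputs, a counting (stars and bars) argument puts a floor under any lossless encoding: there are only C(1024+31, 31) possible histograms, so just under 200 bits suffice in principle, versus 320 for independent counters. A rough sketch to quantify that gap (Python, arithmetic only):

```python
import math

NUM_BINS = 32
TOTAL = 1024  # the bin counts always sum to the number of inputs

# Stars and bars: number of ways to write TOTAL as 32 non-negative counts.
num_histograms = math.comb(TOTAL + NUM_BINS - 1, NUM_BINS - 1)

# Any lossless code needs at least log2 of that many distinct codewords.
min_bits = math.log2(num_histograms)  # roughly 198 bits, vs 320 naive
```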
So here is my question. Considering that most of the bins are likely to be sparse (and the counts of all bins sum to 1024), is there any way of representing the 32 bins in a single larger field (or a couple of fields), with some overhead for encoding/compression?
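One simple instance of the kind of scheme being asked about (purely illustrative; the 5-bit index / 10-bit count layout and the function names are assumptions, not a recommendation): store only the nonzero bins as (index, count) pairs, prefixed by a count of how many pairs follow.

```python
def encode_sparse(bins, index_bits=5, count_bits=10):
    """Keep only nonzero bins as (index, count) pairs; return pairs and size."""
    pairs = [(i, c) for i, c in enumerate(bins) if c != 0]
    # 6 bits for the pair count (0..32), then 15 bits per stored pair.
    size_bits = 6 + len(pairs) * (index_bits + count_bits)
    return pairs, size_bits

def decode_sparse(pairs, num_bins=32):
    """Rebuild the dense 32-bin histogram from the stored pairs."""
    bins = [0] * num_bins
    for i, c in pairs:
        bins[i] = c
    return bins

# A skewed histogram: all 1024 inputs landed in just two bins.
bins = [0] * 32
bins[3], bins[17] = 1000, 24
pairs, size = encode_sparse(bins)
```

The usual sparse-representation tradeoff applies: with two occupied bins this costs 6 + 2*15 = 36 bits instead of 320, but if all 32 bins are nonzero it costs 6 + 32*15 = 486 bits, i.e. more than the dense layout.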
Please let me know if something is not clear.