Any chance that the data would contain 'often used' words?
For example: if it was storing text that seemed to have a lot of day-of-week, month, and year info, you could parse the data so each of those words is stored as a single byte. If you used byte values above 127 as 'compressed' words, you could compress up to 128 words.
Tuesday, November 1, 2005 = 25 bytes
[128], [129] 1, [130] = 9 bytes
The above example assumes that all of the received text has ASCII values below 128, so the upper half of the byte range (128-255) is free to stand for compressed words.
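Rough sketch of the encoding side in Python (the WORD_TABLE contents here are made up just for the date example; a real table would hold your 128 most frequent words):

import re

# Example word table; entry 0 maps to byte value 128, entry 1 to 129, and so on.
WORD_TABLE = ["Tuesday", "November", "2005"]

def compress(text: str) -> bytes:
    out = bytearray()
    # Split into word and non-word chunks so spaces and punctuation pass through untouched.
    for chunk in re.findall(r"\w+|\W+", text):
        if chunk in WORD_TABLE:
            out.append(128 + WORD_TABLE.index(chunk))  # one byte replaces the whole word
        else:
            out.extend(chunk.encode("ascii"))          # assumes 7-bit ASCII input
    return bytes(out)

print(len(compress("Tuesday, November 1, 2005")))  # 9 instead of 25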
Pseudocode:
if ascii of character < 128 then character is normal
if ascii of character > 127 then
    table_position = character - 127   ' example: 128 - 127 = position 1
    look up table_position to find the uncompressed word
endif
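And the matching decode side in Python, assuming the same example WORD_TABLE as above (the pseudocode's character - 127 gives a 1-based table position, which is the same as indexing a 0-based list with byte - 128):

def decompress(data: bytes) -> str:
    out = []
    for b in data:
        if b < 128:
            out.append(chr(b))               # normal ASCII character, pass through
        else:
            out.append(WORD_TABLE[b - 128])  # byte 128 -> first table entry
    return "".join(out)

print(decompress(compress("Tuesday, November 1, 2005")))  # round-trips to the original text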
Does this make sense?