Wear levelling your data area?

In an effort to put off looking too carefully at anything involving amps, volts and ohms (and to defer any decision about user input), I read up on the Arduino's memory specs and turned to ideas for storing program/bank changes and potentially SysEx.

The Mega 2560 has 256K of flash for your program code (minus the "bootloader" code) and 8K of SRAM for things like variables. Finally, there is 4K of EEPROM, which can easily be read from AND written to by your program code. This EEPROM is the perfect place for stored-but-changeable data, and it is rated at 100,000 write cycles per cell.

That sounds like a lot, but in the interest of reducing wear and tear, and of settling on a format for storing the data, I ran some numbers.

Using some VERY rough calculations based on a certain number of songs in a "set list", a certain number of characters in a name, and a certain number of program changes and SysEx messages per song, I came up with a guesstimate (assuming my maths was right) of maybe 320 to 330 bytes for a "set list". On the basis that a pub/event band might typically play two "sets" in an evening, that comes to around 12 sets storable within 4K.
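To make that guesstimate concrete, here is one way the arithmetic could go. Every figure below (songs per set, name length, change bytes, header size) is an assumption I've picked to land in the 320-330 byte range the post describes, not a measured format:

```cpp
#include <cassert>
#include <cstdint>

// Back-of-envelope sizing. All of these figures are assumptions.
const uint16_t SONGS_PER_SET = 20;  // songs in one "set"
const uint16_t NAME_BYTES    = 12;  // characters per song name
const uint16_t CHANGE_BYTES  = 4;   // program/bank-change + SysEx bytes per song
const uint16_t SET_HEADER    = 8;   // set name, song count, delete flag...
const uint16_t EEPROM_BYTES  = 4096;

// Bytes one "set list" occupies under the assumptions above.
uint16_t bytesPerSet() {
    return SET_HEADER + SONGS_PER_SET * (NAME_BYTES + CHANGE_BYTES);
}

// How many whole set lists fit in the 4K EEPROM.
uint16_t setsThatFit() {
    return EEPROM_BYTES / bytesPerSet();
}
```

With those numbers a set list comes to 328 bytes, and 12 of them fit in 4K, which matches the rough figures above.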

My idea for a data format assumes the very first "write" to the EEPROM sets every byte to zero. Then, when writing a set list (let's say at the start of memory), it skips over the first byte. To "delete" a set list, the idea is to write some non-zero value into that byte while leaving everything else intact, so the whole record can be "skipped over".
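A minimal sketch of that scheme, run against a simulated 4K EEPROM array rather than the real chip. The record layout (one flag byte, one length byte, then the payload, with length 0 marking the end of used memory) is my assumption about how the "skip over" would be made to work, not a finished format:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Simulated 4K EEPROM, zero-filled as after the "first write" pass.
// Assumed record layout:
//   byte 0: delete flag (0 = live, non-zero = deleted)
//   byte 1: payload length n (0 = end of used memory)
//   bytes 2..n+1: payload
uint8_t eeprom[4096] = {0};

// Append a record at the first free position; returns its offset or -1.
int appendRecord(const uint8_t* payload, uint8_t len) {
    int pos = 0;
    while (pos + 2 + len <= (int)sizeof(eeprom)) {
        if (eeprom[pos + 1] == 0) {      // length 0 => free space from here
            eeprom[pos] = 0;             // mark live
            eeprom[pos + 1] = len;
            memcpy(&eeprom[pos + 2], payload, len);
            return pos;
        }
        pos += 2 + eeprom[pos + 1];      // skip over an existing record
    }
    return -1;                           // out of room
}

// "Delete": write a non-zero value into the flag byte, data left intact.
void deleteRecord(int pos) {
    eeprom[pos] = 0xFF;
}

// Find the n-th live record, skipping tombstoned ones; -1 if none.
int findLive(int n) {
    int pos = 0;
    while (pos + 2 < (int)sizeof(eeprom) && eeprom[pos + 1] != 0) {
        if (eeprom[pos] == 0 && n-- == 0) return pos;
        pos += 2 + eeprom[pos + 1];
    }
    return -1;
}
```

On the real board the array reads and writes would become `EEPROM.read()` / `EEPROM.update()` calls, but the scan-and-skip logic stays the same.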

Performance would rely on there only ever being 4K to scan through, and in general this idea works until you've written through the whole memory (minus some remnant at the end). But what then?

Well, this is where the plan still needs to be… finessed… Maybe "defrag" when there isn't enough space at "the end", shuffling all non-deleted data to the start? Maybe make the deleted-flag byte a sort of age counter, allowing the "gaps" left by deleted data to be reused? Then you'd have to be careful where the counter wraps, and if a replacement is "smaller" you'd need a means of marking where the "next" block of data starts, rather than it directly following.
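The "defrag" option is simple enough to sketch. This assumes the same flag/length/payload record layout as above (my assumption, not a settled format), again over a simulated EEPROM array:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Simulated 4K EEPROM. Assumed record layout: [flag][length][payload],
// flag 0 = live, non-zero = deleted tombstone, length 0 = end of data.
uint8_t eeprom[4096] = {0};

// "Defrag": copy every live record down over the gaps left by deleted
// ones, then zero the tail so it reads as free space again.
void compact() {
    int read = 0, write = 0;
    while (read + 2 < (int)sizeof(eeprom) && eeprom[read + 1] != 0) {
        int size = 2 + eeprom[read + 1];
        if (eeprom[read] == 0) {                  // live record: keep it
            if (write != read)
                memmove(&eeprom[write], &eeprom[read], size);
            write += size;
        }
        read += size;                             // deleted: skip over it
    }
    memset(&eeprom[write], 0, sizeof(eeprom) - write);
}
```

Worth noting against the wear-levelling goal: compaction concentrates rewrites at the low addresses, so the start of the EEPROM would wear faster than the end. That is part of the appeal of the age-counter idea, which reuses gaps in place instead.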

Of course there is the thought of maybe fixing a hard limit on the size of each set list's data… say 512 bytes… and carving the 4K space up into chunks of that size, keeping a "deleted" flag for each…
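The fixed-slot idea is the easiest of the lot to sketch: 4096 / 512 gives exactly 8 slots, each with its flag byte at a fixed, known offset. The specific flag values here are my assumption:

```cpp
#include <cassert>
#include <cstdint>

// Fixed-slot alternative: carve the 4K EEPROM into 512-byte chunks and
// keep a one-byte "in use" flag at the start of each chunk.
const int SLOT_SIZE = 512;
const int NUM_SLOTS = 4096 / SLOT_SIZE;      // 8 slots
const uint8_t SLOT_FREE = 0, SLOT_USED = 1;  // assumed flag values

uint8_t eeprom[4096] = {0};                  // all slots start free

int slotOffset(int slot) { return slot * SLOT_SIZE; }

// First free (or deleted) slot, or -1 when every chunk is occupied.
int findFreeSlot() {
    for (int s = 0; s < NUM_SLOTS; s++)
        if (eeprom[slotOffset(s)] == SLOT_FREE) return s;
    return -1;
}

// "Delete" is just clearing the flag; the slot is immediately reusable.
void deleteSlot(int slot) { eeprom[slotOffset(slot)] = SLOT_FREE; }
```

The trade-off is obvious: a ~330-byte set list wastes most of a 512-byte slot, and you only get 8 sets instead of ~12, but there's no scanning of variable-length records, no defrag, and deleted slots are reused in place.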

This is complete and total overkill really, as an even spread of writes might mean somewhere between 800,000 and 1,200,000 set list "saves" to memory before anything wears out.
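Where that range comes from, assuming the 100,000-cycles-per-cell rating and writes spread evenly: each full pass over the memory writes every slot once, so the EEPROM survives about 100,000 passes, and each pass holds either 8 set lists (fixed 512-byte slots) or ~12 (variable ~330-byte records):

```cpp
#include <cassert>
#include <cstdint>

// Rough endurance estimate, assuming perfectly even wear and the
// datasheet's 100,000 write cycles per cell.
const uint32_t CYCLES_PER_CELL = 100000UL;

// Total set-list saves before wear-out, given how many sets fit per
// full pass over the EEPROM.
uint32_t totalSaves(uint32_t setsPerFullPass) {
    return setsPerFullPass * CYCLES_PER_CELL;
}
```

So 8 slots gives 800,000 saves and ~12 gives ~1,200,000, which is where the range above comes from. Real wear would be less even than this, but even a tenth of that figure is far more saves than any gigging band will ever make.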

B.
