It's a nice little library in itself, providing a good little battle effect that's tried and trusted in more professional RPG and strategy development circles. The issue Divine Traveller seemed to note is that as part of its processing, it creates a new icon, coloured according to your requirement. This, he believed, was a computationally expensive process, and, not being cached, it contributed to making F_Damage computationally expensive in kind. While I can't say for sure, it appears he isn't alone in this view.
Taking their experience in these matters at face value, I set about lessening the computational strain of this otherwise pleasant effect, which I would encourage others to consider as a possible means of visual feedback to players. At Divine Traveller and Lummox JR's recommendation, some caching of this created icon was in order. At this point I would like you to spend a few minutes thinking about this issue. How would you cache this custom-coloured icon, so it can be re-used in subsequent calls when appropriate?
My starting point was to assess the user, and their likely requirements, from a common-sense standpoint. The users of F_Damage are likely to be RPG / Action type developers. F_Damage does provide the option to customise the colour of the damage icons shown, with any colour you like.
However, these developers are probably only going to display a few different colours, to represent a limited set of concepts. Too many different colours, and their users will get confused (magenta numbers ... what does that signify again??). Now I don't know what the colours will be, but I know there will probably only be a few different ones.
The other main observation is that F_Damage will probably be called with fair frequency. It won't be insane like hundreds of calls per second, but damage in those sorts of games is a core mechanic.
This translates into some potentially useful properties for our caching. We will have a fairly limited range of values to cache which will be hit fairly frequently.
Our aims for the cache are thus:
- Optimal, or near-optimal, performance, which is brought about by a high cache hit rate; cache misses cost the time it takes to build the icon again.
- No changes to the F_Damage interface, and minimal intrusion on other people's environments; it's a library already in use, and fiddling with the interface or intruding on people's environments makes them less likely to move over to the new version.
Now onto the solution.
The cache itself is fairly simple. I suspect a number of you will have chosen something like... an associative list? That makes sense: mapping a colour string to the appropriately coloured icon. As it happens, this scales very well to large numbers of cached entries. I didn't do this.
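For the sake of illustration, here's a minimal sketch of that associative-list idea, written in Python rather than DM, where a dict plays the role of the associative list. The `build_icon` function is a hypothetical stand-in for F_Damage's expensive icon-colouring step:

```python
# Sketch of the associative-list cache idea, in Python for illustration only.
# build_icon() is a hypothetical stand-in for the expensive icon build.
def build_icon(colour):
    return "icon:" + colour  # placeholder for the costly colouring work

icon_cache = {}  # maps a colour string -> the coloured icon

def get_icon(colour):
    if colour not in icon_cache:
        icon_cache[colour] = build_icon(colour)  # cache miss: build and store
    return icon_cache[colour]                    # cache hit: reuse the stored icon
```

The lookup is quick even with many entries, which is exactly the property described above.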
Some of the more observant of you will have noticed this list needs pruning. If you just keep adding entries, you have an effective memory leak in the case where outliers exist. Outliers would be the odd call with a quirky colour, possibly randomly generated, that isn't used often.
Perhaps, to solve this, you kept a record of when each entry was last used, maybe in a datum alongside the entry itself, and spawn()ed a persistent loop that sleep()s for a long while and checks the cache periodically for unused entries? I didn't do this either.
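That timestamp-and-sweeper idea might look something like the following (again a Python sketch, not my actual DM; the sweep is shown as a plain function that, in DM, would live inside the spawn()ed sleep() loop):

```python
# Sketch of the last-used-timestamp approach, in Python for illustration only.
import time

cache = {}  # colour -> [icon, last_used_timestamp]

def get(colour):
    entry = cache.get(colour)
    if entry is None:
        return None          # cache miss
    entry[1] = time.time()   # record the hit time
    return entry[0]

def put(colour, icon):
    cache[colour] = [icon, time.time()]

def prune(max_age):
    # In DM this body would sit in a spawn()ed loop that sleep()s between sweeps.
    now = time.time()
    for colour in list(cache):
        if now - cache[colour][1] > max_age:
            del cache[colour]  # entry unused for too long: evict it
```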
Here's why:
- The associative list just wasn't needed. It's not a bad idea, but it's not essential either. We know the number of entries we'll probably need to store is small, so we don't have to rely on the associative list's "rapid lookup" property to be quick. It's preferable to use it, but if some other design concern is more pressing, we don't need to.
- The persistent-loop cleanup is well suited to large caches that need infrequent pruning, but it incurs a penalty by leaving it up to the soft scheduler to decide when it runs, alongside everything else the soft scheduler has to do in a game. We have a small cache, and frequent (but not too frequent) pruning would keep it small.
And here's the solution I chose:
Cache
    var/list/entries = new()

    proc
        get(var/key)
            var/result = null
            for(var/CacheEntry/C in entries)
                if (C.key == key)
                    result = C.hit()
                else
                    C.miss()
            return result

        put(var/key, var/value)
            var/CacheEntry/C = new(key, value, 10)
            entries.Add(C)

CacheEntry
    var
        key
        value
        timer

    New(var/key, var/value, var/timer = 10)
        src.key = key
        src.value = value
        src.timer = timer

    proc
        hit()
            timer = 10
            return value

        miss()
            timer--
            if (timer == 0)
                del(src)
The pruning happens by virtue of F_Damage being called frequently: cache misses decrease an entry's timer until it hits 0, at which point the entry deletes itself and the cached icon is cleaned up.
The "rapid lookup" property of an associative list wasn't used, to allow an entry to clean itself up without having to know of the existence of the parent Cache object. The performance difference on a small cache is negligible, and this gives good encapsulation of the CacheEntry: Cache simply takes care of insertion and lookup.
The value of 10 for the timer is illustrative, but happens to work quite well for F_Damage. It acts as an informal limit on the cache size: you can't really store more items than that number, as entries will expire. If an entry isn't hit within 10 calls, it isn't used frequently enough, and it is evicted. In discussion with Divine Traveller, we decided ~5 entries are likely to be in common use, and they would probably survive this timer process okay. If eviction proved too aggressive, you'd simply increase this number.
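To make the eviction behaviour concrete, here's the same miss-counter scheme transliterated into Python (hypothetical names; the DM code above is the real implementation). A frequently hit entry keeps having its timer reset, while an outlier colour is ticked down by every call that misses it:

```python
# Python transliteration of the miss-counter cache, for illustration only.
TTL = 10  # misses an entry survives before eviction; 10 is the value used above

entries = []  # list of [key, value, timer], mirroring Cache/CacheEntry

def get(key):
    result = None
    for entry in entries:
        if entry[0] == key:
            entry[2] = TTL       # hit: reset the timer
            result = entry[1]
        else:
            entry[2] -= 1        # miss: tick the timer down
    entries[:] = [e for e in entries if e[2] > 0]  # expired entries are evicted
    return result

def put(key, value):
    entries.append([key, value, TTL])
```

After ten consecutive calls that miss it, an outlier entry expires, while the commonly used entry survives indefinitely.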
Some will note "performance issues" with this design, such as the need to inspect every entry on every call, both to find a match and to maintain the timer mechanism. I'd fully agree, but note that they don't matter: the expected use scenario mitigates them, and by design the issue cannot grow out of hand. The goals dictated that the alternative (a persistent loop) was not desirable.
It is goals that should drive your design at all times. Not just a matter of "what am I doing", but "why am I doing it" and "what is important". A design that doesn't obey a small and clear set of goals at all times is prone to failure.
People usually don't like restricting their goals to a small number. We all want our code to be flexible as hell, super efficient in both performance and memory use, and elegant. However, in chasing all of these, people suffer "frameworkism", whereby their code fails to meet its original objective: by design it was meant to meet not just a few goals but all goals, and in consequence the implementation meets none of them.
The ability to discern key goals before starting a design is the programmer's greatest aid to clarity and effective software, in my opinion. Just thought I'd share why.