Tag memory consumption and SetSize #602
Replies: 4 comments
-
.....and now I see there's a
-
It does appear that setting
-
Hi @tomwarner13, sorry for the delay.
Great, happy to hear that!
Yes and no, but nitpicking here: the "special entries" for each tag are added not when saving an entry that has tags attached, but when reading that entry later. So, a simple example: you save an entry with 10 tags, but then never access it again? No special entries saved in the cache. Also, this is true for L1 (the memory level), but if you also use an L2 (the distributed level) then things are different there: the special entries will only be created in the L2 if you do a
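To make the lazy-creation point concrete, here is a tiny hypothetical Python model (not FusionCache's actual code; all names are made up): writing a tagged entry stores the tag list on the entry itself, and the per-tag special entries only materialize when that entry is later read.

```python
# Hypothetical model of lazy tag bookkeeping; not real FusionCache internals.
class TinyTaggedCache:
    def __init__(self):
        self.entries = {}      # key -> (value, tags)
        self.tag_entries = {}  # special per-tag entries, created lazily

    def set(self, key, value, tags=()):
        # Writing a tagged entry stores the tag list on the entry itself...
        self.entries[key] = (value, tuple(tags))
        # ...but creates NO per-tag special entries yet.

    def get(self, key):
        value, tags = self.entries[key]
        # Only now, on read, is a special entry ensured for each tag.
        for tag in tags:
            self.tag_entries.setdefault(f"__tag__:{tag}", None)
        return value

cache = TinyTaggedCache()
cache.set("product:1", {"name": "widget"}, tags=["products", "region-eu"])
assert len(cache.tag_entries) == 0   # saved with tags but never read: no special entries
cache.get("product:1")
assert len(cache.tag_entries) == 2   # read once: one special entry per tag
```

So an entry that is written with many tags but never read contributes nothing to the tag-entry population (in L1, per the caveat above).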
Yes, it's probably an oversimplification, but that's the case. And as you already discovered in the next comment, the entry options for those special entries are specified either:
But watch out: the special entries for the tags are strictly needed for Tagging to work. If you remove them or let them disappear before the right moment, you may have zombie entries (theoretically removed by a
For a better understanding of how Tagging works in FusionCache and why I took certain design decisions, I would suggest a read here or here.
Kind of, yes (plus the additional bits of info mentioned above).
Ahaha yes, I believe you, but 10-12 tags per entry is generally sustainable. In general the advice with Tagging is "do not overtag", that's it. (Also: it's very similar to the main advice you'll find for observability/OTEL and metrics, so it's kind of an industry standard.)
No, in theory it should work every time, no matter what; let me explain. Here's how it works:
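The mechanism, as I understand it from the FusionCache docs, can be sketched with a hypothetical Python model (made-up names, not the real implementation): a RemoveByTag call just records a per-tag "removed at" timestamp in a special entry, and every read compares the entry's save time against the timestamps of its tags, expiring it lazily.

```python
import time

# Hypothetical sketch of timestamp-based tag invalidation; not FusionCache's code.
class TagInvalidatingCache:
    def __init__(self):
        self.entries = {}        # key -> (value, tags, saved_at)
        self.tag_removed_at = {} # tag -> timestamp of the last remove_by_tag

    def set(self, key, value, tags=()):
        self.entries[key] = (value, tuple(tags), time.monotonic())

    def remove_by_tag(self, tag):
        # O(1): no scan over all entries, just one special "tombstone" timestamp.
        self.tag_removed_at[tag] = time.monotonic()

    def get(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        value, tags, saved_at = item
        for tag in tags:
            removed_at = self.tag_removed_at.get(tag)
            if removed_at is not None and saved_at <= removed_at:
                # Entry predates the tag removal: treat it as expired.
                # Losing this timestamp too early is what creates "zombies".
                del self.entries[key]
                return None
        return value

cache = TagInvalidatingCache()
cache.set("a", 1, tags=["t1"])
cache.set("b", 2, tags=["t2"])
cache.remove_by_tag("t1")
assert cache.get("a") is None  # invalidated lazily, on read
assert cache.get("b") == 2     # an untouched tag is still served
```

This is why removal by tag is cheap regardless of how many entries carry the tag: the cost is paid incrementally on subsequent reads.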
Makes sense? Hope this helps, let me know.
-
Also, I'd like to add a concrete example, just to be more clear. Let's say you specify, via
Then you save an entry with a cache key
Then you do a
Now, if you try to get the entry
But if you instead wait for
Hope this helps.
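The timing in this example can be modeled with a small, self-contained Python sketch (hypothetical names and durations, not the real API): if the special per-tag timestamp entry expires before the tagged entry itself does, an entry that RemoveByTag had hidden becomes visible again.

```python
import time

# Hypothetical model of the failure mode described above; not FusionCache's code.
entries = {}          # key -> (value, tags, saved_at)
tag_tombstones = {}   # tag -> (removed_at, tombstone_expires_at)

def set_entry(key, value, tags):
    entries[key] = (value, tuple(tags), time.monotonic())

def remove_by_tag(tag, tombstone_ttl):
    now = time.monotonic()
    tag_tombstones[tag] = (now, now + tombstone_ttl)

def get(key):
    now = time.monotonic()
    value, tags, saved_at = entries[key]
    for tag in tags:
        ts = tag_tombstones.get(tag)
        if ts is not None:
            removed_at, expires_at = ts
            if now < expires_at and saved_at <= removed_at:
                return None   # correctly filtered out by the tag timestamp
    return value              # tombstone gone: the "zombie" resurfaces

set_entry("k", "v", ["t"])
remove_by_tag("t", tombstone_ttl=0.05)   # assumed short TTL, for illustration
assert get("k") is None                  # before tombstone expiry: hidden
time.sleep(0.06)
assert get("k") == "v"                   # after tombstone expiry: zombie is back
```

This is the reason the special entries must outlive the entries they are meant to invalidate.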
-
We're migrating a huge application to FusionCache; it previously used a hand-rolled L2 cache system that got put together in haste sometime in the mid-2010s. Overall it's been a good experience (I especially appreciate the comprehensive docs!), but there is something we're running into that I want to understand.
To preserve some legacy compatibility, we are currently using a LOT of tags: potentially 10-12 tags for every "bucket" of a few dozen entries, to grotesquely oversimplify. We noticed that the application using FusionCache on production servers will consistently chew through all its available memory and start thrashing to disk if not restarted regularly. We attempted to set a SizeLimit on the MemoryCache object that FusionCache is using, and set a default size of 1 on every FusionCache entry, but this had no effect.
I've been investigating more and I think the issue is our tagging implementation. We do ultimately plan to refactor out most of the tagging logic and consolidate down to far fewer tags, but that is a longer-term effort.
Meanwhile, what I have been seeing is that every time we set a tag, FusionCache adds a new entry to the MemoryCache with a Size of 0 (even when the default size for other entries is >0), a long absolute expiration (like 2 weeks in the future), and a priority of NeverRemove. If I understand what I'm seeing correctly, this means the memory cache keeps filling up with tag entries as the application runs, and even when we try to compact items out under memory pressure, none of these tag entries will ever be evicted, so eventually we have to restart the server and start over.
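The interaction I think we're seeing can be sketched with a hypothetical Python model of a size-limited cache (made-up names; this is not MemoryCache's actual implementation): entries declared with size 0 never count against the limit, and entries marked never-removable survive every compaction pass, so they accumulate without bound while the declared total stays "within" the limit.

```python
# Hypothetical model of size-limited compaction; not the real MemoryCache code.
NEVER_REMOVE = "NeverRemove"
NORMAL = "Normal"

class SizedCache:
    def __init__(self, size_limit):
        self.size_limit = size_limit
        self.items = {}  # key -> (declared_size, priority)

    def set(self, key, declared_size, priority=NORMAL):
        self.items[key] = (declared_size, priority)
        if self.declared_total() > self.size_limit:
            self.compact()

    def declared_total(self):
        return sum(size for size, _ in self.items.values())

    def compact(self):
        # Evict only entries whose priority allows it.
        for key, (size, priority) in list(self.items.items()):
            if priority != NEVER_REMOVE and self.declared_total() > self.size_limit:
                del self.items[key]

cache = SizedCache(size_limit=3)
for i in range(100):
    # Tag-like entries: declared size 0, never removable.
    cache.set(f"__tag__:{i}", declared_size=0, priority=NEVER_REMOVE)
for i in range(10):
    cache.set(f"data:{i}", declared_size=1)

# The declared size stays within the limit...
assert cache.declared_total() <= 3
# ...but all 100 tag-like entries are still resident and unevictable.
assert sum(1 for _, p in cache.items.values() if p == NEVER_REMOVE) == 100
```

If this model matches reality, SizeLimit bounds only the declared sizes, not the actual memory held by the zero-size, never-removable entries.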
So I have a few questions around this: