mirror of
https://github.com/evennia/evennia.git
synced 2026-03-30 20:47:17 +02:00
Added a new process-pool runner based on AMPoule (integrated into Evennia).
This allows e.g. utils.utils.run_async to offload long-running functions to a completely separate subprocess, offering real parallelism. The implementation is still experimental; notably, not all objects can be transferred safely across the wire. There is also no concept of updating caches yet, so an object added on the subprocess side will not yet be known in the main process (since the caches cannot yet tell that the underlying database has changed).
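The commit's runner is built on AMPoule and Twisted; as a rough illustration of the offloading idea only (not Evennia's actual run_async API), the same pattern can be sketched with the standard library's process pool. The pickling requirement below mirrors the commit's caveat that not all objects can be transferred safely across the wire:

```python
# Illustrative sketch only: Evennia's actual runner uses AMPoule/Twisted.
# This shows the general pattern of offloading a long-running function to
# a separate subprocess for real parallelism, using only the stdlib.
from concurrent.futures import ProcessPoolExecutor

def long_running(n):
    """Simulate a CPU-heavy task that should not block the main process."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        # Arguments and return values must be picklable - the analogue of
        # "not all objects can be transferred safely across the wire".
        future = pool.submit(long_running, 10_000)
        # The main process is free to do other work here; .result() blocks
        # only when the answer is actually needed.
        print(future.result())
```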
parent dcc7f29a91
commit f5a889e40c
22 changed files with 2322 additions and 60 deletions
24	src/utils/idmapper/EVENNIA.txt	Normal file
@@ -0,0 +1,24 @@
IDMAPPER
--------

https://github.com/dcramer/django-idmapper

IDmapper (actually Django-idmapper) implements a custom Django model
that is cached between database writes/reads (SharedMemoryModel). It
not only lowers memory consumption but, most importantly, allows for
semi-persistence of properties on database model instances (something
not guaranteed for normal Django models).

Evennia makes a few modifications to the original IDmapper routines
(we try to limit our modifications in order to make it easy to update
from upstream down the line).

- We change the caching from a WeakValueDictionary to a normal
  dictionary. This is done because we use the models as semi-persistent
  storage while the server is running. In some situations the models
  would go out of scope, and the WeakValueDictionary then allowed them
  to be garbage collected. With this change they are guaranteed to
  remain (which is good for persistence but potentially bad for memory
  consumption).
- We add some caching/reset hooks called from the server side.
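The WeakValueDictionary-versus-dict difference described above can be demonstrated with a small self-contained sketch (the `Model` class here is just an illustrative stand-in, not Evennia's):

```python
# Demonstrates why a WeakValueDictionary loses cached instances once the
# last outside reference goes out of scope, while a plain dict keeps them.
import gc
import weakref

class Model:
    """Illustrative stand-in for a cached model instance."""
    def __init__(self, pk):
        self.pk = pk

weak_cache = weakref.WeakValueDictionary()
strong_cache = {}

a = Model(1)
weak_cache[1] = a       # only a weak reference is kept
b = Model(2)
strong_cache[2] = b     # a strong reference is kept

del a, b                # the last outside references go out of scope
gc.collect()

# The weak cache silently dropped its instance; the plain dict kept its
# instance alive (good for persistence, costs memory).
print(1 in weak_cache)    # False
print(2 in strong_cache)  # True
```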
@@ -128,6 +128,11 @@ class SharedMemoryModel(Model):
        cls.__instance_cache__ = {}  # WeakValueDictionary()
    flush_instance_cache = classmethod(flush_instance_cache)

    def save(cls, *args, **kwargs):
        "overload spot for saving"
        super(SharedMemoryModel, cls).save(*args, **kwargs)


# Use a signal so we make sure to catch cascades.
def flush_cache(**kwargs):
    for model in SharedMemoryModel.__subclasses__():
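The hunk above is truncated, but the flush pattern it introduces — a per-class instance cache that one function resets for every subclass via `__subclasses__()` — can be sketched in isolation. All names below are illustrative, not Evennia's actual classes:

```python
# Minimal sketch of a shared per-class instance cache with a global flush,
# mirroring the flush_cache() pattern in the diff above.
class SharedMemoryBase:
    __instance_cache__ = {}

    @classmethod
    def cache_instance(cls, pk, instance):
        """Store an instance in this class's cache, keyed by primary key."""
        cls.__instance_cache__[pk] = instance

    @classmethod
    def flush_instance_cache(cls):
        """Reset this class's cache to an empty dict."""
        cls.__instance_cache__ = {}

def flush_all_caches(**kwargs):
    """Reset the cache on every subclass (catching cascades, as the
    signal-connected flush_cache() in the diff intends)."""
    for model in SharedMemoryBase.__subclasses__():
        model.flush_instance_cache()

class Character(SharedMemoryBase):
    __instance_cache__ = {}

Character.cache_instance(1, object())
flush_all_caches()
print(len(Character.__instance_cache__))  # 0
```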