Added new process-pool runner based on AMPoule (integrated into Evennia).

This allows e.g. utils.utils.run_async to offload long-running functions
to a completely separate subprocess, offering real parallelism.

The implementation is still experimental; notably, not all objects can
be transferred safely across the wire. There is also no concept of
updating caches yet - so an object added on the subprocess side will
not be visible in the main process (since the caches cannot yet tell
that the underlying database has changed).
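The "not all objects can be transferred safely across the wire" caveat comes down to pickling: a process pool like AMPoule's must serialize arguments and return values between processes. A minimal sketch (the `check_picklable` helper is illustrative, not part of Evennia) of probing whether an object would survive the trip:

```python
import pickle

def check_picklable(obj):
    """Return True if obj survives a round-trip through pickle."""
    try:
        pickle.loads(pickle.dumps(obj))
        return True
    except Exception:
        return False

# Plain data structures transfer fine across the process boundary.
print(check_picklable({"key": [1, 2, 3]}))

# Lambdas (like open connections or db-session-bound objects) do not.
print(check_picklable(lambda x: x))
```

Anything that fails such a round-trip cannot be handed to the subprocess side as-is.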
This commit is contained in:
Griatch 2012-09-02 10:10:22 +02:00
parent dcc7f29a91
commit f5a889e40c
22 changed files with 2322 additions and 60 deletions


@@ -0,0 +1,24 @@
IDMAPPER
--------
https://github.com/dcramer/django-idmapper
IDmapper (actually django-idmapper) implements a custom Django model
that is cached between database writes/reads (SharedMemoryModel). It
not only lowers memory consumption but, most importantly, allows for
semi-persistence of properties on database model instances (something
not guaranteed for normal Django models).
Evennia makes a few modifications to the original IDmapper routines
(we try to keep our modifications minimal in order to make it easy to
pull in updates from upstream down the line).
- We change the cache from a WeakValueDictionary to a normal
  dictionary. This is done because we use the models as semi-
  persistent storage while the server is running. In some situations
  the models would go out of scope and the WeakValueDictionary would
  then allow them to be garbage collected. With this change they are
  guaranteed to remain (which is good for persistence but potentially
  bad for memory consumption).
- We add some caching/reset hooks called from the server side.


@@ -128,6 +128,11 @@ class SharedMemoryModel(Model):
        cls.__instance_cache__ = {} #WeakValueDictionary()
    flush_instance_cache = classmethod(flush_instance_cache)

    def save(cls, *args, **kwargs):
        "overload spot for saving"
        super(SharedMemoryModel, cls).save(*args, **kwargs)

# Use a signal so we make sure to catch cascades.
def flush_cache(**kwargs):
    for model in SharedMemoryModel.__subclasses__():
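The hunk above wires cache flushing through a signal handler that walks all SharedMemoryModel subclasses. The pattern can be sketched without Django (class names and cache contents here are illustrative only):

```python
class SharedMemoryModel:
    """Minimal stand-in: each subclass keeps a per-class instance cache."""
    __instance_cache__ = {}

    @classmethod
    def flush_instance_cache(cls):
        # Rebind on the subclass, emptying its cache.
        cls.__instance_cache__ = {}

def flush_cache(**kwargs):
    """Flush every subclass cache; e.g. hooked to a post-save signal
    so cascading saves are caught as well."""
    for model in SharedMemoryModel.__subclasses__():
        model.flush_instance_cache()

class Character(SharedMemoryModel):
    __instance_cache__ = {1: "some cached instance"}

flush_cache()
print(Character.__instance_cache__)  # {}
```

Using a signal rather than overriding save() directly is what catches cascaded saves triggered by related-object updates.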