Before this commit, there were two different locks for updating the restricted
data cache: a future lock, which is faster but only works in the same thread, and
a redis lock, which is slower but also works across many threads.

The future lock was buggy, because a second call of update_restricted_data reused
the same future, so on the second run the future was already done and the update
was skipped. I see no way to delete the future: the last client would have to
delete it, but there is no way to find out which client is the last one.
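A minimal sketch of the broken pattern (all names here are illustrative, not the
actual code):

```python
import asyncio
from typing import Optional

_update_future: Optional[asyncio.Future] = None  # shared per process

async def _do_the_update() -> None:
    await asyncio.sleep(0.1)  # stands in for the real cache update

async def update_restricted_data() -> None:
    global _update_future
    if _update_future is not None:
        # Wait for the update that is (supposedly) still running.
        await _update_future
        return
    _update_future = asyncio.get_event_loop().create_future()
    await _do_the_update()
    _update_future.set_result(None)
    # Bug: the future is never removed, so every later call finds an
    # already-done future, awaits it (which returns immediately) and
    # skips the update. Deleting it here is not safe either: another
    # caller may be about to await it, and there is no way to tell
    # which waiter is the last one.
```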
* Activate the restricted_data_cache on the in-memory cache
* Use the ElementCache in REST API GET requests
* GET requests on the REST API return 404 when the user has no permission
* Added async functions for has_perm and in_some_groups (sketched after this list)
* Changed Cachable.get_restricted_data to be an async function
* Rewrote the required_user_system
* Changed the default implementation of access_permission.check_permission to
  check a given permission or to check whether anonymous access is enabled
  (sketched after this list)
* Delete only keys with the given prefix
* Make the redis_provider atomic with transactions and lua scripts (sketched
  after this list)
* Improve the lock
* Generate the change_id in redis to make sure it is unique (sketched after
  this list)
* Use milliseconds for the start time
* Add a max_change_id argument to get_{full|restricted}_data
* geis does not work with channels2 and never will (it has to be Python for now)
* Use pytest to run the tests
* Rewrote the cache system
* Use the username instead of the pk for the admin user in tests
* Change get_restricted_data and get_projector_data to always use a list
* Add typings to all get_restricted_data and get_projector_data methods
* Replace CollectionElementList with a real list
* Fixed arguments of inform_deleted_data
* Moved CollectionElementCache to cache.py and refactored it
* Run tests with the cache enabled, using fakeredis (fixture sketched after
  this list)
* Add caching support to users/group
* Add a has_perm function that works with the cache (see the sketch after this list)
* Removed our session backend so other session backends (without the database) can be used
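As referenced in the list above, a minimal sketch of the async permission
helpers. Everything here is illustrative: the FAKE_CACHE dict stands in for the
real element cache, and the key layout is an assumption, not the project's
actual schema.

```python
import asyncio
from typing import Any, Dict, List

FAKE_CACHE: Dict[str, Dict[str, Any]] = {
    "users/user:1": {"id": 1, "groups_id": [2]},
    "users/group:2": {"id": 2, "permissions": ["agenda.can_see"]},
}

async def get_element(collection: str, element_id: int) -> Dict[str, Any]:
    # Stand-in for an async cache lookup (redis or in-memory).
    return FAKE_CACHE[f"{collection}:{element_id}"]

async def has_perm(user_id: int, permission: str) -> bool:
    # A user has a permission if one of their cached groups grants it.
    user = await get_element("users/user", user_id)
    for group_id in user["groups_id"]:
        group = await get_element("users/group", group_id)
        if permission in group["permissions"]:
            return True
    return False

async def in_some_groups(user_id: int, group_ids: List[int]) -> bool:
    # True if the user is a member of at least one of the given groups.
    user = await get_element("users/user", user_id)
    return bool(set(user["groups_id"]) & set(group_ids))

loop = asyncio.get_event_loop()
assert loop.run_until_complete(has_perm(1, "agenda.can_see"))
assert loop.run_until_complete(in_some_groups(1, [2, 3]))
```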
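Building on that sketch, the new default of access_permission.check_permission
could look roughly like this (anonymous_is_enabled is a hypothetical stand-in
for the real config lookup):

```python
async def anonymous_is_enabled() -> bool:
    return False  # stand-in for reading the config value

async def check_permission(user_id: int, permission: str) -> bool:
    # Default: the anonymous user (id 0) may only pass when anonymous
    # access is enabled; everyone else needs the given permission.
    if user_id == 0:
        return await anonymous_is_enabled()
    return await has_perm(user_id, permission)
```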
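A lua script runs as one atomic unit inside redis, so readers never see a
half-applied update. A hedged sketch with redis-py's register_script; the key
names, field layout and script body are made up, not the project's actual
schema:

```python
import redis

r = redis.Redis()

# Bump the change_id and write several elements in one atomic step.
UPDATE_SCRIPT = """
local change_id = redis.call('INCR', KEYS[1])
for i = 1, #ARGV, 2 do
    redis.call('HSET', KEYS[2], ARGV[i], ARGV[i + 1])
end
return change_id
"""

update = r.register_script(UPDATE_SCRIPT)
new_change_id = update(
    keys=["change_id", "full_data"],
    args=["users/user:1", '{"id": 1, "username": "admin"}'],
)
```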
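The change_id generation itself only needs a single atomic INCR, which is what
makes the ids unique even across several server processes (the key name is
illustrative):

```python
import redis

r = redis.Redis()

def next_change_id() -> int:
    # INCR is atomic in redis, so concurrent writers always get
    # distinct, strictly increasing change_ids.
    return r.incr("change_id_counter")
```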
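And for running the tests without a real redis server, a pytest fixture backed
by fakeredis might look like this:

```python
import fakeredis
import pytest

@pytest.fixture
def redis_connection():
    # fakeredis is an in-process replacement for a redis server, so the
    # cache code paths can run in the test suite unchanged.
    return fakeredis.FakeStrictRedis()

def test_change_id_counter(redis_connection):
    assert redis_connection.incr("change_id_counter") == 1
    assert redis_connection.incr("change_id_counter") == 2
```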