Commit Graph

28 Commits

Author SHA1 Message Date
Emanuel Schütze
7665634d42
Merge pull request #5375 from FinnStutzenstein/autoupdatePerformance
Autoupdate performance
2020-05-29 17:31:32 +02:00
FinnStutzenstein
0eee839736
Small improvements and a first attempt to make the poll progress responsive
to massive autoupdates. The "optimization" didn't help, so this has to
be continued in another PR.
2020-05-29 15:46:19 +02:00
FinnStutzenstein
600b9c148b
Inserting changed and deleted elements into redis in batches (fixes #5386) 2020-05-28 14:00:57 +02:00
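A minimal sketch of the batching described in the commit above, not the actual OpenSlides code: changed and deleted elements are written through a Redis pipeline in fixed-size chunks instead of one huge command. The key names and batch size are assumptions for illustration.

```python
# Hypothetical sketch: write changed/deleted elements to Redis in batches.
# Key names ("full_data", "deleted_elements") and BATCH_SIZE are assumptions.
import redis

BATCH_SIZE = 1000

def write_elements_in_batches(client: redis.Redis, changed: dict, deleted: list) -> None:
    items = list(changed.items())
    for start in range(0, len(items), BATCH_SIZE):
        batch = dict(items[start:start + BATCH_SIZE])
        with client.pipeline() as pipe:
            pipe.hset("full_data", mapping=batch)  # one round trip per batch
            pipe.execute()
    for start in range(0, len(deleted), BATCH_SIZE):
        with client.pipeline() as pipe:
            pipe.sadd("deleted_elements", *deleted[start:start + BATCH_SIZE])
            pipe.execute()
```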
FinnStutzenstein
bf88cea200
Rewrite projector code to be cache-friendly
This speeds up requests per second by a factor of 100
2020-05-22 15:23:54 +02:00
FinnStutzenstein
23842fd496
Synchronize autoupdate code in the client
If autoupdates arrive too fast, the first one may not be fully processed yet. Especially when maxChangeId is not yet updated, the second autoupdate will trigger a refresh, because from the client's perspective it "lies in the future". This can be prevented by synchronizing the autoupdate-handling code with a mutex.
2020-05-22 15:23:53 +02:00
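The real fix lives in the Angular/TypeScript client; the following is only a small asyncio sketch of the same mutex pattern, with hypothetical field names (from_change_id, to_change_id): a lock serializes autoupdate handling so maxChangeId is fully updated before the next message is processed.

```python
# Sketch of the mutex pattern described in the commit above (the actual code
# is TypeScript in the OpenSlides client; field names here are assumptions).
import asyncio

class AutoupdateHandler:
    def __init__(self) -> None:
        self.max_change_id = 0
        self._lock = asyncio.Lock()  # serializes autoupdate handling

    async def handle(self, autoupdate: dict) -> None:
        async with self._lock:
            if autoupdate["from_change_id"] > self.max_change_id + 1:
                # Gap detected: this update "lies in the future", refresh instead.
                await self.request_refresh()
                return
            self.apply_changes(autoupdate)
            self.max_change_id = autoupdate["to_change_id"]

    def apply_changes(self, autoupdate: dict) -> None:
        """Merge changed/deleted elements into the local store (omitted)."""

    async def request_refresh(self) -> None:
        """Re-request all data from the server (omitted)."""
```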
FinnStutzenstein
b78372f8a3
Load configs before models 2020-04-27 09:41:23 +02:00
FinnStutzenstein
bb2f958eb5 Redis: Wait for replication on writes
Since channels_redis does not support dedicated read-redis instances, the
autoupdate message may be received before the data was replicated. All workers
read the autoupdate message from the write host, so there is a race between
getting this message and a finished replication. For large payloads, the
replication is slower in most cases (even more so in a distributed setup, where
the master and replica are on different nodes). The easy way is to wait for
replication. But there is one difficulty: the number of replicas has to be
known. There is a new settings variable "AMOUNT_REPLICAS" which defaults to 1.
It needs to be set correctly! If it is too high, every autoupdate will be
delayed by 1 second because of a timeout waiting for non-existent replicas. If
it is too low, some autoupdates may be wrong (and not detectable by the client!)
because of reading from non-synchronised replicas.

The other possibility is to fork channels_redis and add the feature of a
read-only redis. This would help, because on a single redis instance all commands
are ordered: first, the data is synced, then the autoupdate message. Attention:
this means that if redis replicas are scaled up, one must make sure to read from
the same instance. I think this is not possible in the way Docker's overlay
networks work. The only way would be to open one connection and reuse the
connection from channels_redis in OpenSlides. This would mean a heavy
integration of channels_redis (meaning including its source code in our repo).

As a first fix, this one is easy and should work.
2020-04-01 13:09:48 +02:00
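A minimal sketch of the "wait for replication" approach using the Redis WAIT command (exposed by redis-py as Redis.wait). AMOUNT_REPLICAS is the settings variable named in the commit above; the key name and payload are illustrative assumptions.

```python
# Sketch: after writing autoupdate data, block until AMOUNT_REPLICAS replicas
# have acknowledged the write, or give up after 1 second. Key name is assumed.
import redis

AMOUNT_REPLICAS = 1            # must match the real number of replicas
REPLICATION_TIMEOUT_MS = 1000  # the 1 second timeout mentioned above

def write_and_wait(client: redis.Redis, change_id: int, payload: bytes) -> None:
    client.set(f"autoupdate:{change_id}", payload)
    acked = client.wait(AMOUNT_REPLICAS, REPLICATION_TIMEOUT_MS)
    if acked < AMOUNT_REPLICAS:
        # Timed out: some replicas may still serve stale data for this change.
        raise RuntimeError(f"only {acked} of {AMOUNT_REPLICAS} replicas acknowledged the write")
```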
FinnStutzenstein
cc4ca61964 Adding a second optional redis for read-only accesses 2019-12-03 12:30:31 +01:00
FinnStutzenstein
f7cdfb7c02 Locking service that locks the history build process (fixes #4039) 2019-12-03 12:14:49 +01:00
FinnStutzenstein
5baae14156 Skip autoupdates on foreign personal notes 2019-09-02 13:57:12 +02:00
FinnStutzenstein
2aa0275dca Logging prefix and handling redis connection errors 2019-09-02 08:09:28 +02:00
FinnStutzenstein
d4dc13706f Ensures change id across multiple workers 2019-08-19 09:42:51 +02:00
FinnStutzenstein
1d718dcb74 Fixed two little issues with relations and reverse mapping
- Reverse setup for normal autoupdates (no initial loading)
- Reverse "set null" is now reflected in the mapping

Also fixed a bug with redis
2019-08-15 12:51:59 +02:00
FinnStutzenstein
daabbaff28 Added missing ResetCache-handling 2019-08-12 15:01:57 +02:00
FinnStutzenstein
5aef823807 Major cache rewrite:
- Removed the restricted data cache (it wasn't used since OS 3.0)
- Unify the functions for restricted and full data: just one function, which
  accepts an optional user_id. If it is None, full data is returned; with a
  user id given, the restricted data.
- More atomic access to redis, especially the check for data existence in
  redis with an auto-ensure-cache.
- Speedup through hashing of scripts and redis' script cache.
- Save the schema version into the redis cache and rebuild if the version
  changed.

Client changes:
- Simplified the ConstantsService
- Fixed a bug when receiving an autoupdate with all_data=True from the
  server
2019-08-08 08:35:02 +02:00
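A sketch of the schema-version idea from the commit above, with illustrative key names: a version marker is stored next to the cached data, and the cache is rebuilt whenever the deployed code expects a different version.

```python
# Sketch: rebuild the Redis cache if the stored schema version does not match
# the one compiled into the code. Key names and version string are assumptions.
import redis

SCHEMA_VERSION = "1"  # bump whenever the cached data layout changes

def ensure_cache(client: redis.Redis, build_full_data) -> None:
    if client.get("schema_version") == SCHEMA_VERSION.encode():
        return  # cache matches the running code
    client.flushdb()  # drop the stale cache
    client.hset("full_data", mapping=build_full_data())
    client.set("schema_version", SCHEMA_VERSION)
```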
Oskar Hahn
206eb9bcba decode only the needed data when calculating the required users 2019-03-29 22:38:12 +01:00
Oskar Hahn
b329115007 use f-string syntax for strings 2019-01-18 17:37:36 +01:00
Oskar Hahn
eddbd86d3a Run black 2019-01-08 21:51:52 +01:00
Oskar Hahn
67d933a206 fix double elements 2018-11-18 07:57:44 +01:00
Oskar Hahn
eead4efe6a Remove CollectionElement
* Use user_id: int instead of Optional[CollectionElement] in utils
* Rewrote autoupdate system without CollectionElement
2018-11-04 01:06:01 +01:00
Oskar Hahn
93dfd9ef67
Merge pull request #3973 from ostcar/test_with_redis
add possibility to run tests with redis
2018-11-03 20:54:55 +01:00
Oskar Hahn
cd34d30866 Remove utils.collections.Collection class and other cleanups
* Activate restricted_data_cache on the inmemory cache
* Use ElementCache in REST API get requests
* Get requests on the REST API return 404 when the user has no permission
* Added async functions for has_perm and in_some_groups
* Changed Cachable.get_restricted_data to be an async function
* Rewrote required_user_system
* Changed the default implementation of access_permission.check_permission to
  check a given permission or check if anonymous is enabled
2018-11-03 20:48:19 +01:00
Oskar Hahn
d11c7bbad7 add possibility to run tests with redis 2018-11-03 16:59:21 +01:00
Oskar Hahn
c405b4b323 Use Protocol instead of ABC in cache_provider 2018-10-28 10:37:16 +01:00
Oskar Hahn
bc442210fb Improve redis cache
* delete only keys with prefix
* Make redis_provider atomic with transactions and lua scripts
* improve the lock
* generate the change_id in redis to make sure it is unique
* use milliseconds as starttime
* add a max_change_id argument to get_{full|restricted}_data
2018-10-15 23:37:26 +02:00
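A sketch of two of the points above, with illustrative key names: the change_id is generated inside Redis (INCR is atomic, so concurrent workers never collide), and the element write happens in the same Lua script, so a change_id is never visible without its data.

```python
# Sketch: atomic Lua script that increments the change_id counter and writes
# the elements under that id in a single step. Key names are assumptions.
import redis

ADD_ELEMENTS_LUA = """
local change_id = redis.call('INCR', KEYS[1])
for i = 1, #ARGV, 2 do
    redis.call('HSET', KEYS[2], ARGV[i], ARGV[i + 1])
    redis.call('ZADD', KEYS[3], change_id, ARGV[i])
end
return change_id
"""

client = redis.Redis()
add_elements = client.register_script(ADD_ELEMENTS_LUA)

def write_elements(elements: dict) -> int:
    """Write elements atomically and return the change_id they were written under."""
    flat = [x for pair in elements.items() for x in pair]
    return add_elements(
        keys=["change_id_counter", "full_data", "change_id_index"], args=flat
    )
```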
Oskar Hahn
9af6bf1606 ensures test on startup 2018-09-23 16:57:49 +02:00
Oskar Hahn
aac9dcabf5 drop python 3.5 2018-08-23 17:51:30 +02:00
Oskar Hahn
10b3bb6497 Update to channels 2
* geis does not work with channels2 and never will (it has to be python now)
* pytest
* rewrote cache system
* use username instead of pk for admin user in tests
2018-08-22 06:30:11 +02:00