A conceptual issue in `get_data_since` leads to incomplete autoupdates. The behaviour has been in the code for a long time, but I only noticed the bug during testing with many concurrent autoupdates and the autoupdate delay. This issue has likely caused incomplete autoupdates (which the user may experience as "lost autoupdates") in previous production instances. Instead of querying a range (from_change_id to to_change_id), one can now only get data from a change id up to the max change id in the element cache. The max change id is now returned by `get_data_since`.
I also added a `get_all_data` variant that can return the max_change_id valid at that point in time.
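A rough sketch of how callers use the changed interface; the return order and the exact signatures are illustrative, based on the description above, not the literal code:

```python
# Hypothetical usage of the changed element cache interface; names and
# return order are illustrative only.

async def load_for_client(element_cache, user_id: int, change_id: int) -> None:
    # get_data_since no longer takes a to_change_id: it returns everything
    # from change_id up to the current max change id, together with that
    # max change id.
    changed, deleted, max_change_id = await element_cache.get_data_since(
        user_id, change_id
    )

    # get_all_data can now also return the max change id that was valid at
    # the moment the data was read, so the client can resume autoupdates
    # from a consistent point.
    all_data, max_change_id = await element_cache.get_all_data(user_id)
```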
As a usability-"fix" (more like a fix the result of a bug, not the bug
itself) a refresh button for a poll was added, that issues an autoupdate
for the poll and all options.
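On the server side this roughly amounts to re-sending the poll and its options through the autoupdate system; a minimal sketch, assuming `inform_changed_data` from `utils/autoupdate` and an `options` reverse relation on the poll model:

```python
from openslides.utils.autoupdate import inform_changed_data


def refresh_poll(poll) -> None:
    # Re-send the poll and all of its options as a normal autoupdate so the
    # client can recover from a previously incomplete ("lost") autoupdate.
    # ("options" as the reverse relation name is an assumption.)
    inform_changed_data([poll] + list(poll.options.all()))
```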
- Extracted autoupdate code from consumers
- collect autoupdates until the AUTOUPDATE_DELAY is reached (counted from the first autoupdate)
- Added the AUTOUPDATE_DELAY parameter to settings.py (see the sketch after this list)
- moved some autoupdate code to utils/autoupdate
- moved core/websocket to utils/websocket_client_messages
- add the autoupdate to the response (there are some todos left)
- do not send autoupdates on error (4xx, 5xx)
- the client blindly injects the autoupdate contained in the response
- removed the unused autoupdate on/off feature
- the client now sends the maxChangeId (instead of maxChangeId+1) on connection
- the server accepts this.
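The new parameter in settings.py looks roughly like this (the value and unit shown here are placeholders, not the actual default):

```python
# settings.py

# Collect autoupdates and send them out in one batch once this delay
# (counted from the first collected autoupdate) has passed.
AUTOUPDATE_DELAY = 1  # placeholder value
```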
If autoupdates arrive too quickly, the first one may not be fully processed before the next one comes in. In particular, when the maxChangeId has not been updated yet, the second autoupdate will trigger a full refresh, because from the client's perspective it "lies in the future". This can be prevented by synchronizing the autoupdate-handling code with a mutex.
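The actual autoupdate handling lives in the TypeScript client; the mutex idea is sketched here in Python only to illustrate the pattern, with placeholder helpers and field names:

```python
import asyncio

autoupdate_lock = asyncio.Lock()
max_change_id = 0


async def inject_into_datastore(autoupdate: dict) -> None:
    """Placeholder: write the changed/deleted elements into the local store."""


async def request_full_refresh() -> None:
    """Placeholder: re-request all data from the server."""


async def handle_autoupdate(autoupdate: dict) -> None:
    """Process one autoupdate; the lock guarantees the previous one finished."""
    global max_change_id
    async with autoupdate_lock:
        if autoupdate["from_change_id"] > max_change_id + 1:
            # Gap detected: this autoupdate "lies in the future" for us.
            await request_full_refresh()
        else:
            await inject_into_datastore(autoupdate)
            max_change_id = autoupdate["to_change_id"]
```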
Since channels_redis does not support dedicated read-only Redis instances, the
autoupdate message may be received before the data has been replicated. All workers
read the autoupdate message from the write host, so there is a race between
receiving this message and the replication finishing. For large payloads,
replication is slower in most cases (even more so in a distributed setup, where
the master and replica are on different nodes). The easy way out is to wait for
replication. But there is one difficulty: the number of replicas has to be
known. There is a new settings variable "AMOUNT_REPLICAS" which defaults to 1.
It needs to be set correctly! If it is too high, every autoupdate will be
delayed by 1 second because of a timeout waiting for non-existent replicas. If
it is too low, some autoupdates may be wrong (and not detectable by the client!)
because of reading from non-synchronised replicas.
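Waiting for replication maps onto the Redis WAIT command (block until n replicas have acknowledged all previous writes, or until the timeout in milliseconds expires). A sketch of the idea, assuming an aioredis-style client and the 1 second timeout mentioned above:

```python
import logging

from django.conf import settings  # provides AMOUNT_REPLICAS (defaults to 1)

logger = logging.getLogger(__name__)


async def wait_for_replication(redis) -> None:
    """Block until AMOUNT_REPLICAS replicas acknowledged all previous writes."""
    # Redis WAIT: first argument is the number of replicas, second the
    # timeout in milliseconds. It returns how many replicas acknowledged.
    acked = await redis.execute("WAIT", settings.AMOUNT_REPLICAS, 1000)
    if acked < settings.AMOUNT_REPLICAS:
        # Timed out: AMOUNT_REPLICAS is probably set too high, so every
        # autoupdate now pays the full 1 second delay.
        logger.warning(
            "Only %d of %d replicas acknowledged the writes",
            acked,
            settings.AMOUNT_REPLICAS,
        )
```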
The other possibility is to fork channels_redis and add support for a read-only
Redis. This would help, because on a single Redis instance all commands are
ordered: first the data is synced, then the autoupdate message is delivered. Attention:
this means that if Redis replicas are scaled up, one must make sure to read from the
same instance. I think this is not possible with the way Docker's overlay
networks work. The only way would be to open one connection and reuse the
connection from channels_redis in OpenSlides. This would mean a heavy
integration of channels_redis (i.e. including its source code in our repo).
As a first fix, the wait-for-replication approach is the easy one and should work.
- Added settings.py docs
- Fixed left-overs from #4920
- Reworked all server messages to a new argument format, so that the
client can translate server messages