Integrate live streaming into the jitsi/rtc components.
Live streaming works without jitsi, but uses the same components for a
seamless integration.
A streaming URL can be set in the settings page.
Users EITHER consume the live stream OR are present in a jitsi live
conference.
To consume both the live stream and the jitsi conference, users may use
a dedicated jitsi tab in their session.
The jitsi users can be restricted to only those with the permission to
manage speakers or those present on the "current list of speakers",
automatically simulating a virtual plenum.
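A sketch of such a restriction check; the permission name and the data access are assumptions:

```python
def may_join_jitsi(user, list_of_speakers) -> bool:
    # Users who may manage speakers, or who are on the current list of
    # speakers, may join the jitsi conference; everyone else only
    # consumes the live stream.
    if user.has_perm("agenda.can_manage_list_of_speakers"):
        return True
    return any(
        speaker.user_id == user.id for speaker in list_of_speakers.speakers
    )
```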
A conceptual issue in `get_data_since` leads to incomplete autoupdates.
The behaviour had been in the code for a long time, but I only noticed
the bug during testing with many autoupdates (high concurrency) and the
autoupdate delay. I'm sure this issue has caused incomplete autoupdates
(which the user may experience as "lost autoupdates") in previous
production instances. Instead of querying a range (from_change_id to
to_change_id), one can now only get data from a change id up to the max
change id in the element cache. The max change id is now returned by
`get_data_since`.
I also added a `get_all_data` with the capability of returning the
max_change_id at that point in time.
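A sketch of the resulting shapes; the exact signatures and types in the element cache may differ:

```python
from typing import Any, Dict, List, Tuple

Element = Dict[str, Any]

async def get_data_since(
    user_id: int, change_id: int
) -> Tuple[int, Dict[str, List[Element]], List[str]]:
    """Returns (max_change_id, changed_elements, deleted_element_ids).
    Data is always returned up to the max change id in the cache; an
    arbitrary to_change_id can no longer be requested."""
    ...

async def get_all_data(user_id: int) -> Tuple[int, Dict[str, List[Element]]]:
    """Returns (max_change_id, all_elements) as one consistent snapshot."""
    ...
```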
As a usability "fix" (more a fix of the result of a bug than of the bug
itself), a refresh button for a poll was added that issues an autoupdate
for the poll and all its options.
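A minimal sketch of such a refresh on the server, assuming the existing `inform_changed_data` autoupdate helper and a `refresh_poll` name invented here:

```python
from openslides.utils.autoupdate import inform_changed_data

def refresh_poll(poll) -> None:
    # Re-send the poll and all of its options through the autoupdate
    # system, so clients that missed an autoupdate become consistent again.
    inform_changed_data(poll)
    for option in poll.options.all():
        inform_changed_data(option)
```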
- Extracted autoupdate code from consumers
- collect autoupdates until AUTOUPDATE_DELAY is reached (measured since the first collected autoupdate); see the sketch after this list
- Added the AUTOUPDATE_DELAY parameter to the settings.py
- moved some autoupdate code to utils/autoupdate
- moved core/websocket to utils/websocket_client_messages
- add the autoupdate to the response (there are some todos left)
- do not send autoupdates on error (4xx, 5xx)
- the client blindly injects the autoupdate from the response
- removed the unused autoupdate on/off feature
- the client now sends the maxChangeId (instead of maxChangeId+1) on connection
- the server accepts this
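A condensed sketch of the collect-and-delay idea; the real code in utils/autoupdate handles more cases, and `send_autoupdate` is a hypothetical send helper:

```python
import asyncio
from typing import Any, Dict, List, Optional

AUTOUPDATE_DELAY = 0.1  # seconds, configured in settings.py

class AutoupdateBundle:
    """Collects autoupdate elements and flushes them as one message once
    AUTOUPDATE_DELAY has passed since the first collected autoupdate."""

    def __init__(self) -> None:
        self.elements: List[Dict[str, Any]] = []
        self.flush_task: Optional[asyncio.Task] = None

    def add(self, elements: List[Dict[str, Any]]) -> None:
        self.elements.extend(elements)
        if self.flush_task is None:
            # The first autoupdate of a bundle starts the delay timer.
            self.flush_task = asyncio.ensure_future(self._delayed_flush())

    async def _delayed_flush(self) -> None:
        await asyncio.sleep(AUTOUPDATE_DELAY)
        elements, self.elements = self.elements, []
        self.flush_task = None
        await send_autoupdate(elements)  # hypothetical
```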
If autoupdates arrive too fast, the first one may not be fully processed before the second one starts. Especially when the maxChangeId is not yet updated, the second autoupdate will trigger a full refresh, because from the client's point of view it "lies in the future". This can be prevented by synchronizing the autoupdate-handling code with a mutex.
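The affected code lives in the TypeScript client, but the idea maps one-to-one; a minimal sketch of the same synchronization in Python, where `apply_autoupdate` is hypothetical:

```python
import asyncio

autoupdate_lock = asyncio.Lock()

async def handle_autoupdate(message: dict) -> None:
    # Serialize autoupdate handling: a second autoupdate waits until the
    # first one has fully updated maxChangeId, so it is no longer seen as
    # lying "in the future" and does not trigger a refresh.
    async with autoupdate_lock:
        await apply_autoupdate(message)  # hypothetical
```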
- minimal jitsi client in the bottom right of the screen
- can be shown and hidden like a messenger
- allows the user to mute, unmute, start and stop a call
- automatically connects to a conference
- shows a list of users connected to the room
- jitsi iframe is currently hidden
- "open in jitsi meet" link
- only one connection will be opened when using multiple tabs
- JITSI_DOMAIN and JITSI_ROOM_NAME must be present in the settings.py
- config variables in settings
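For reference, the two new settings could look like this (the values are examples):

```python
# settings.py
JITSI_DOMAIN = "meet.example.com"
JITSI_ROOM_NAME = "openslides"
```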
Implements vote weight in the client:
- the user detail page has a new vote weight property
- changed deserialization to parse floats
- changed "yes"-voting to send "Y" and "0" instead of "1" and "0"
- added vote weight to the user list, filtering and sorting
- added vote weight to the single voting result
- votesvalid and votescast respect the individual vote weight (see the sketch below)
- fixed the parse-poll pipe and null values in the PDF
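A sketch of the weighted aggregation; the name `vote_weights` is an assumption:

```python
from decimal import Decimal
from typing import Iterable

def votes_cast(vote_weights: Iterable[Decimal]) -> Decimal:
    # With vote weights, votescast (and analogously votesvalid) is the
    # sum of the individual weights instead of a plain head count.
    return sum(vote_weights, Decimal(0))
```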
Since channels_redis does not support dedicated read-redis instances, the
autoupdate message may be received before the data was replicated. All workers
read the autoupdate message from the write host, so there is a race between
getting this message and a finished replication. For large payloads, the
replication is slower in most cases (even more so in a distributed setup, where
the master and replica are on different nodes). The easy way out is to wait for
replication. But there is one difficulty: the number of replicas has to be
known. There is a new settings variable "AMOUNT_REPLICAS" which defaults to 1.
It needs to be set correctly! If it is too high, every autoupdate will be
delayed by 1 second because of a timeout waiting for non-existent replicas. If
it is too low, some autoupdates may be wrong (and not detectable by the
client!) because of reading from non-synchronised replicas.
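Waiting for replication maps to the redis WAIT command. A sketch, assuming a low-level `execute` that sends raw redis commands; the 1000 ms timeout matches the one-second delay mentioned above:

```python
AMOUNT_REPLICAS = 1  # settings.py; must match the real number of replicas

async def wait_for_replication(redis) -> None:
    # WAIT blocks until AMOUNT_REPLICAS replicas have acknowledged all
    # previous writes, or the timeout (in milliseconds) expires. If the
    # setting is higher than the real replica count, every call runs
    # into the timeout.
    acked = await redis.execute("WAIT", AMOUNT_REPLICAS, 1000)
    if acked < AMOUNT_REPLICAS:
        # Timed out: the data may not be on all replicas yet.
        pass
```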
The other possibility is to fork channels_redis and add the feature of a
read-only redis. This would help, because on a single redis instance all
commands are ordered: first the data is synced, then the autoupdate message.
Attention: this means that if redis replicas are scaled up, one must make sure
to always read from the same instance. I think this is not possible with the
way Docker's overlay networks work. The only way would be to open one
connection and reuse the connection from channels_redis in OpenSlides. This
would mean a heavy integration of channels_redis (meaning including its source
code in our repo). As a first fix, waiting for replication is easy and should
work.