Latency Optimisation under Variable Load with Active Measurements
- 1 General
- 2 Implementation Outline
- 2.1 Adaptive shaper
- 2.2 Qdiscs
- 3 Northbound API
- 3.1 OVSDB schema
- 3.1.1 IP_Interface
- 3.1.2 Interface_QoS
- 3.1.3 Interface_Queue
- 3.1.4 Linux_Queue
- 3.1.5 Interface_Classifier
- 3.1.6 AdaptiveQoS
- 3.2 Adaptive shaper configuration parameters
- 4 Southbound API
- 5 Requirements
General
The scope of this design is to outline a high-level approach for extended OpenSync QoS support, including some planned implementation aspects.
Initially, the feature targets xDSL devices and FWA scenarios, emphasizing the use of the Cake SQM shaper with the dynamic shaper option, while allowing the alternative use of HTB+FQ_CoDel. Although this first stage is intentionally limited to specific platforms and scenarios for business and practical reasons, the new and extended OpenSync QoS APIs and implementation aim to be as generic as possible to enable future development of additional QoS features and broader platform support.
The extent of generalization may be limited due to the platform-specific nature of QoS. The scope also includes reviewing existing partial OpenSync QoS support and plans for extending it to enable new QoS-related capabilities.
Implementation Outline
FWA connections (e.g., 4G, 5G, Starlink) often have highly variable bandwidth. Traditional traffic shapers like HTB or cake require static upload/download bandwidth settings, which leads to a trade-off: setting the value too low underutilizes available bandwidth; setting it too high risks bufferbloat and increased latency when link speed drops.
To address this, the idea is to dynamically adjust the shaper bandwidth based on current traffic load and measured latency.
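The control loop can be sketched roughly as follows. This is a simplified illustration of the idea only, not the cake-autorate algorithm; all thresholds, factors, and names are hypothetical:

```python
# Illustrative sketch of latency-feedback shaping (NOT the actual
# cake-autorate algorithm): adjust the shaper rate between a floor and a
# ceiling based on the measured latency increase and the current load.

BASE_KBPS = 20000      # hypothetical baseline shaper rate
MIN_KBPS = 5000        # never shape below this floor
MAX_KBPS = 50000       # never shape above this ceiling
DELAY_THRESH_MS = 30   # latency increase that signals bufferbloat

def next_rate_kbps(cur_rate, owd_delta_ms, load_pct):
    """Return the next shaper rate in kbit/s.

    owd_delta_ms: one-way delay increase over the idle baseline,
                  measured via active pings to reflectors.
    load_pct:     current utilisation of the shaped rate (0-100).
    """
    if owd_delta_ms > DELAY_THRESH_MS:
        # Latency spike: back off sharply to drain the queue.
        cur_rate = int(cur_rate * 0.9)
    elif load_pct > 75:
        # Link is busy and latency is fine: probe upward gently.
        cur_rate = int(cur_rate * 1.05)
    # Otherwise (idle or lightly loaded): hold the current rate.
    return max(MIN_KBPS, min(MAX_KBPS, cur_rate))
```

In a real controller this function would run periodically, and the returned rate would be applied to the cake qdisc (e.g. via tc).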
Adaptive shaper
OpenWrt’s cake-autorate project provides a working solution for dynamic shaping, and we have integrated it into OpenSync in a simulated FWA environment using variable uplink conditions (simulated via netem). The implementation showed promising behavior in adjusting bandwidth based on latency feedback.
The aim is to introduce a flexible OpenSync API to support adaptive shaping, beginning with cake-based dynamic shaping. While the design strives to be generic, the implementation is currently specific to cake, given it is the most mature and widely adopted option for Smart Queue Management (SQM). This approach may not be applicable on platforms that do not support Linux traffic control or lack kernel support for cake (e.g., Broadcom, older kernels).
Qdiscs
Qdiscs are one of the key mechanisms used for traffic shaping in Linux-based platforms.
The existing OpenSync OVSDB API, via IP_Interface/Interface_QoS/Interface_Queue, allows configuring, to some extent, a predefined HTB+FQ_CoDel qdisc-based rate-limiting implementation. In this setup, an HTB qdisc is attached to the root of a network interface, and an individual HTB class is created for each Interface_Queue definition, with the class rate set from the Interface_Queue::bandwidth value. Each of these classes then has an FQ_CoDel qdisc attached. A default HTB class is also created, with a 3.5 Gbps rate limit, to handle all other traffic that is not explicitly rate-limited. HTB class IDs are assigned by OpenSync starting with 1:1 for the Interface_Queue entry with the lowest priority value, incrementing the minor number for each subsequent class (1:2, 1:3, etc.). The QoS manager populates the Openflow_Tag table using the format <tag>class (the tag string with "class" appended) and associates it with the assigned queue ID (HTB class ID).
On Linux qdisc-capable platforms:
- <bandwidth> SET, <bandwidth_ceil> NOT SET: HTB class rate := <bandwidth> (rate-limit only, no borrowing)
- <bandwidth> SET, <bandwidth_ceil> SET: HTB class rate := <bandwidth>, ceil := <bandwidth_ceil> (rate-limit plus borrowing)

All other, possibly more complex qdisc configurations (or configurations using other qdisc types) can be configured via the newly introduced Linux_Queue OVSDB table.
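A minimal sketch of how the hierarchy described above could be rendered into tc commands. The helper function, the interface name, and the 1:9999 default-class handle are assumptions for illustration; this is not OpenSync code:

```python
# Sketch: render the described HTB+FQ_CoDel hierarchy into tc commands.
# Field names mirror Interface_Queue; the "default 9999" handle for the
# catch-all class is a hypothetical choice, not the OpenSync one.

def tc_commands(ifname, queues, default_rate_kbps=3500000):
    """queues: list of dicts with 'priority', 'bandwidth' (kbit/s) and
    optionally 'bandwidth_ceil' (kbit/s), as in Interface_Queue."""
    cmds = [f"tc qdisc add dev {ifname} root handle 1: htb default 9999"]
    # Default class catches unclassified traffic (3.5 Gbps in the text).
    cmds.append(f"tc class add dev {ifname} parent 1: classid 1:9999 "
                f"htb rate {default_rate_kbps}kbit")
    # Class IDs start at 1:1 for the lowest priority value, incrementing.
    for minor, q in enumerate(sorted(queues, key=lambda q: q['priority']), 1):
        rate = q['bandwidth']
        ceil = q.get('bandwidth_ceil', rate)  # no ceil => no borrowing
        cmds.append(f"tc class add dev {ifname} parent 1: classid 1:{minor} "
                    f"htb rate {rate}kbit ceil {ceil}kbit")
        # Each rate-limited class gets an FQ_CoDel qdisc attached.
        cmds.append(f"tc qdisc add dev {ifname} parent 1:{minor} fq_codel")
    return cmds
```

Note how omitting bandwidth_ceil falls back to ceil = rate, which matches HTB's default and disables borrowing.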
The current OpenSync implementation for traffic control is platform-dependent (for Broadcom platforms, please contact the relevant party).
Northbound API
Controller interaction involves multiple OVSDB tables, including the existing IP_Interface, Interface_QoS (both existing and extended), Interface_Queue (existing and extended), the new Linux_Queue table, the new optional AdaptiveQoS table, Interface_Classifier (existing and extended), as well as Openflow_Tag, Netfilter, and others.
Depending on the platform, the controller may need to configure Netfilter rows with CLASSIFY rules that classify traffic into class IDs using the Openflow tag "<tag>class", or Netfilter rules that classify based on SKB marks, using Openflow tags.
OVSDB schema
To support the QoS schema, API, and implementation extensions, the scope of this document also includes a detailed overview of the existing OpenSync QoS-related schema definitions, implementation aspects, and controller-to-OpenSync interactions.
IP_Interface
Name | Type | Description |
|---|---|---|
qos | reference [optional] | 0 or 1 reference to an Interface_QoS row with the QoS configuration for this interface. |
ingress_classifier | array of references | 0 or more references to Interface_Classifier rows; each defines an ingress classification rule for this interface. |
egress_classifier | array of references | 0 or more references to Interface_Classifier rows; each defines an egress classification rule for this interface. |
Interface_QoS
Name | Type | Description |
|---|---|---|
queues | array of references | 0 or more references to Interface_Queue rows. At most 1 of queues or lnx_queues may be set. |
lnx_queues [NEW FIELD] | array of references | 0 or more references to Linux_Queue rows. At most 1 of queues or lnx_queues may be set. |
other_config | key/value | |
adaptive_qos | key/value | Optional Adaptive QoS (per UL/DL) configuration. Note: any adaptive QoS configuration is always applied on top of a base QoS configuration. Either none or exactly 2 interfaces (one UL and one DL) can be configured with adaptive_qos; the precondition for Adaptive QoS is a base QoS defined for the interface. See the Adaptive Shaper chapter for more details. In the first stage, the "active" latency measurement method is assumed (and supported), so there is no need to define the latency_estimation key. |
status | enum | |
Interface_Queue
Name | Type | Description |
|---|---|---|
priority | integer | For the Linux qdisc implementation: queues are ordered by priority value when HTB class IDs are assigned (lowest priority value first). For the BCM Archer implementation: the Archer queues have implicit priorities based on qid; BCM qid=0 has the lowest priority, BCM qid=30 the highest. OpenSync sorts Interface_Queue rows by priority accordingly. |
bandwidth | integer | Maximum rate this class and all its children are guaranteed. Mandatory. In kbit/s. |
bandwidth_ceil [NEW FIELD] | integer [optional] | Maximum rate at which a class can send, if its parent has bandwidth to spare. Defaults to the configured rate (bandwidth), which implies no borrowing. Optional. In kbit/s. |
tag | string | Openflow tag name. OpenSync assigns queue IDs (mapped either to Linux qdisc class IDs or Archer queue IDs) in incremental order starting with 0, in a way that queues with the same tag get assigned the same queue ID. OpenSync also generates an SKB mark value, by its own logic or using the underlying platform's APIs (BCM). The QoS manager populates the Openflow_Tag table with the <tag>class entries and the assigned queue IDs. |
other_config | key:value | TODO: document the "shared" option. |
mark | integer [optional] [STATUS field, not config] | Status field. SKB mark generated by OpenSync (or the underlying platform). May be used by the controller to later configure appropriate Netfilter rules for packet classification. Used on platforms where SKB marks are used for traffic classification (e.g. BCM). However, note: the current controller logic is to simply use the Openflow tag for this. See the tag field. |
Linux_Queue
Introduce a new OVSDB table Linux_Queue to configure traffic control (shaping and scheduling) in the Linux kernel. This applies to platforms that support Linux traffic control and allow integration with hardware acceleration engines or platform-specific features. The table supports configuration of Linux queue disciplines, including both native qdiscs and platform-specific custom qdiscs (e.g., NSS HTB, NSS FQ_CoDel), enabling hierarchical qdisc setups.
Note: Filters (tc-filter) that may be needed by some configurations combined with qdisc definitions (for instance, when IFB interfaces that do not support Netfilter are involved) are already supported by OpenSync (Interface_Classifier), although not in a fully generic way; the existing API/implementation could be fixed/extended.
Name | Type | Description |
|---|---|---|
type | enum | |
name | string | Qdisc name (i.e. qdisc type). Alternative option: make this an enum. However, keeping this field a string provides more direct flexibility for the future, as it can map directly to any Linux qdisc supported on a platform. |
parent_id | string | Parent qdisc or class ID, or the special value "root". |
id | string | If not specified (or if zero, which equals unspecified), qdisc IDs could be automatically assigned by the operating system. However, in most cases it makes sense to explicitly assign an ID, so that hierarchical definitions are possible and/or the controller can attach filters to a qdisc/class with the specified ID. It would probably make sense for the OpenSync implementation to enforce a nonzero qdisc ID, so that well-defined configurations are enforced. |
params | string [optional] | qdisc-specific parameters |
status | enum [optional] [STATUS field, not config] | |
Linux_Queue rows can be configured in any order; the OpenSync implementation resolves/infers the appropriate order of tc-qdisc/tc-class commands based on the id and parent_id fields.
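The ordering step can be sketched as a simple parent-before-child traversal. This is a hypothetical helper, not OpenSync code; plain dicts stand in for Linux_Queue rows:

```python
# Sketch: order Linux_Queue-like rows so that every parent qdisc/class is
# configured before its children, regardless of row insertion order.

def order_rows(rows):
    """rows: list of dicts with 'id' and 'parent_id' ('root' for the top).

    Returns the rows in an order safe to translate into tc commands.
    """
    by_parent = {}
    for r in rows:
        by_parent.setdefault(r['parent_id'], []).append(r)
    ordered, stack = [], ['root']
    while stack:
        parent = stack.pop()
        for r in by_parent.get(parent, []):
            ordered.append(r)       # parent emitted before its children
            stack.append(r['id'])   # descend into this node's children
    return ordered
```

For example, rows defining an fq_codel qdisc under HTB class 1:1, the HTB root 1:, and class 1:1 can be given in any order and still come out root-first.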
Interface_Classifier
Name | Type | Description |
|---|---|---|
priority | integer | Filtering rule priority. |
match | string | Match definition. In tc-filter format. |
action | string | Action definition. In tc-filter format. |
parent_id [NEW FIELD] | string | Parent qdisc or class ID (or the special value "root") for this filter to attach to. Note: not currently used. |
token | string |
|
status | enum |
|
AdaptiveQoS
NEW OVSDB table.
Allows configuring additional/custom global (i.e. not per-interface, as those are defined via the Interface_QoS->adaptive_qos field) adaptive shaping configuration parameters, such as a custom reflector list and other fine-tuning parameters.
Name | Type | Description |
|---|---|---|
reflectors_list | string | Custom reflectors list. IPv4 or IPv6 addresses separated with whitespace. |
rand_reflectors | boolean | Enable or disable randomization of reflectors at startup |
num_pingers | integer | Number of pingers to maintain |
ping_interval | integer | Interval time for ping, milliseconds |
active_thresh_kbps | integer | Threshold in Kbit/s below which DL/UL is considered idle |
latency_measure_type | enum |
|
other_config | key/value | Reserved for possible future use. |
The AdaptiveQoS OVSDB table is optional and does not by itself enable Adaptive QoS; that is enabled via the Interface_QoS->adaptive_qos key/value map, configured for a chosen UL and a chosen DL interface. The AdaptiveQoS table is only used to optionally configure custom global (not per-UL/DL-interface) Adaptive QoS parameters, for example when a CSP wants to use its own custom ping reflector list.
Adaptive shaper configuration parameters
Adaptive QoS is optional and can be configured on top of an existing base QoS with cake, configured via IP_Interface/Interface_QoS/Linux_Queue for a UL and a DL interface.
Adaptive QoS per-UL and per-DL interface parameters are configured via the Interface_QoS->adaptive_qos field, which is a key/value map. See the Interface_QoS OVSDB table section for the description of the defined configuration parameters. Adaptive QoS is thus enabled when Interface_QoS->adaptive_qos has configuration defined for a UL and for a DL interface.
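The enabling rule can be sketched as a simple validation check. This is hypothetical illustration code; in particular, the 'direction' key marking an interface as UL or DL is an assumption, not a documented adaptive_qos key:

```python
# Sketch of the enabling rule: Adaptive QoS is active only when exactly two
# interfaces carry adaptive_qos configuration, one UL and one DL.
# The 'direction' key is a hypothetical stand-in for the real defined keys.

def adaptive_qos_enabled(interface_qos_rows):
    """interface_qos_rows: dicts with an optional 'adaptive_qos' key/value map."""
    configured = [r['adaptive_qos'] for r in interface_qos_rows
                  if r.get('adaptive_qos')]
    if len(configured) != 2:
        return False  # either none or exactly 2 interfaces must be configured
    directions = {c.get('direction') for c in configured}
    return directions == {'UL', 'DL'}  # exactly one UL and one DL
```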
Optionally, the AdaptiveQoS table then allows configuring custom global (not per interface) Adaptive QoS configuration parameters.
Southbound API
- Existing, extended: osn_qos_* (osn_qos.h): https://github.com/plume-design/opensync/blob/osync_7.0.0/src/lib/osn/inc/osn_qos.h
- New: osn_qdisc_* (osn_qdisc.h): https://github.com/plume-design/opensync/blob/osync_7.0.0/src/lib/osn/inc/osn_qdisc.h
- New: osn_adaptive_qos_* (osn_adaptive_qos.h): https://github.com/plume-design/opensync/blob/osync_7.0.0/src/lib/osn/inc/osn_adaptive_qos.h
Requirements
- Acceleration is not supported at this stage. Acceleration must be turned off (skip-accel rule pushed by the controller).
- Adaptive QoS requires cake scheduler support: sch_cake was upstreamed from kernel 4.19 onwards.