Latency Optimisation under Variable Load with active measurements

General

The scope of this design is to outline a high-level approach for extended OpenSync QoS support, including some planned implementation aspects.

Initially, the feature targets xDSL devices and FWA scenarios, emphasizing the use of the Cake SQM shaper with the dynamic shaper option, while allowing the alternative use of HTB+FQ_CoDel. Although this first stage is intentionally limited to specific platforms and scenarios for business and practical reasons, the new and extended OpenSync QoS APIs and implementation aim to be as generic as possible to enable future development of additional QoS features and broader platform support.

The extent of generalization may be limited due to the platform-specific nature of QoS. The scope also includes reviewing existing partial OpenSync QoS support and plans for extending it to enable new QoS-related capabilities.

Implementation Outline

FWA connections (e.g., 4G, 5G, Starlink) often have highly variable bandwidth. Traditional traffic shapers like HTB or cake require static upload/download bandwidth settings, which leads to a trade-off: setting the value too low underutilizes available bandwidth; setting it too high risks bufferbloat and increased latency when link speed drops.

To address this, the idea is to dynamically adjust the shaper bandwidth based on current traffic load and measured latency.
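The control loop behind this idea can be sketched as follows. This is a simplified illustration of latency-feedback shaping, not the actual cake-autorate algorithm; the function name, thresholds, and step factors are illustrative assumptions.

```python
def adjust_rate(rate_kbps, owd_delta_ms, load_pct,
                min_rate_kbps, base_rate_kbps, max_rate_kbps,
                delay_thresh_ms=30.0):
    """One step of a simplified latency-feedback shaper controller.

    owd_delta_ms: increase of one-way delay over its baseline,
                  from active probes (pings to reflectors).
    load_pct:     shaper utilisation in percent.
    """
    if owd_delta_ms > delay_thresh_ms:
        # Bufferbloat detected: back off multiplicatively,
        # but never below the configured minimum.
        return max(min_rate_kbps, int(rate_kbps * 0.9))
    if load_pct > 75.0:
        # Link is busy and latency is fine: probe for more bandwidth.
        return min(max_rate_kbps, int(rate_kbps * 1.05))
    # Idle or lightly loaded: decay toward the steady-state rate.
    if rate_kbps > base_rate_kbps:
        return max(base_rate_kbps, int(rate_kbps * 0.98))
    return rate_kbps
```

Each iteration, the returned rate would then be applied to the shaper, e.g. via a `tc qdisc change ... cake bandwidth <rate>kbit` command.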

Adaptive shaper

OpenWrt’s cake-autorate project provides a working solution for dynamic shaping, and we have integrated it into OpenSync in a simulated FWA environment using variable uplink conditions (simulated via netem). The implementation showed promising behavior in adjusting bandwidth based on latency feedback.

The aim is to introduce a flexible OpenSync API to support adaptive shaping, beginning with cake-based dynamic shaping. While the design strives to be generic, the implementation is currently specific to cake, given it is the most mature and widely adopted option for Smart Queue Management (SQM). This approach may not be applicable on platforms that do not support Linux traffic control or lack kernel support for cake (e.g., Broadcom, older kernels).

Qdiscs

Qdiscs are one of the key mechanisms used for traffic shaping in Linux-based platforms.

The existing OpenSync OVSDB API (IP_Interface/Interface_QoS/Interface_Queue) allows configuring, to some extent, a predefined HTB+FQ_CoDel qdisc-based rate-limiting implementation. In this setup, an HTB qdisc is attached to the root of a network interface, and an individual HTB class is created for each Interface_Queue definition, with the class rate set from the Interface_Queue::bandwidth value. Each of these classes then has an FQ_CoDel qdisc attached. A default HTB class with a 3.5 Gbps rate limit is also created to handle all other traffic that is not explicitly rate-limited.

HTB class IDs are assigned by OpenSync starting with 1:1 for the Interface_Queue entry with the lowest priority value, incrementing the minor number for each subsequent class (1:2, 1:3, etc.). The QoS manager populates the Openflow_Tag table with entries named "<tag>class" (the tag string with "class" appended) and associates each with the assigned queue ID (HTB class ID).

Linux qdisc capable platform:

  • <bandwidth> SET, <bandwidth_ceil> NOT SET: HTB class rate := <bandwidth> (rate-limit only, no borrowing)

  • <bandwidth> SET, <bandwidth_ceil> SET: HTB class rate := <bandwidth>, ceil := <bandwidth_ceil> (rate-limit plus borrowing)
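As an illustration of this mapping, the following sketch generates the implied tc commands from Interface_Queue-like entries. The exact command forms, the default-class ID (1:999 here), and the helper name are illustrative assumptions, not the actual QoS manager implementation.

```python
def htb_commands(ifname, queues, default_rate_kbps=3_500_000):
    """Sketch of the Interface_Queue -> HTB+FQ_CoDel mapping.

    `queues` is a list of dicts with 'priority', 'bandwidth' (kbit/s)
    and optional 'bandwidth_ceil' (kbit/s). Illustrative only."""
    cmds = [f"tc qdisc add dev {ifname} root handle 1: htb default 999"]
    # Default class (1:999 assumed) for traffic not explicitly rate-limited.
    cmds.append(f"tc class add dev {ifname} parent 1: classid 1:999 "
                f"htb rate {default_rate_kbps}kbit")
    # Lowest priority value gets class 1:1, then 1:2, 1:3, ...
    for minor, q in enumerate(sorted(queues, key=lambda q: q['priority']), 1):
        rate = q['bandwidth']
        ceil = q.get('bandwidth_ceil', rate)  # no ceil -> no borrowing
        cmds.append(f"tc class add dev {ifname} parent 1: classid 1:{minor} "
                    f"htb rate {rate}kbit ceil {ceil}kbit prio {q['priority']}")
        cmds.append(f"tc qdisc add dev {ifname} parent 1:{minor} fq_codel")
    return cmds
```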

All the other possibly more complex qdisc configurations (or using other qdisc types) can be configured via the newly introduced Linux_Queue OVSDB table.

The current OpenSync implementation for traffic control is platform-dependent (for Broadcom platforms, please contact the relevant party).

Northbound API

Controller interaction involves multiple OVSDB tables, including the existing IP_Interface, Interface_QoS (both existing and extended), Interface_Queue (existing and extended), the new Linux_Queue table, the new optional AdaptiveQoS table, Interface_Classifier (existing and extended), as well as Openflow_Tag, Netfilter, and others.

Depending on the platform, the controller may need to configure Netfilter rows with CLASSIFY rules that classify packets into class IDs using the "<tag>class" Openflow tag, or it may need to configure Netfilter rules that classify based on SKB marks, using the corresponding Openflow tags.
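For illustration, the iptables equivalents of the two classification variants could be built as below. The controller would configure these through the Netfilter OVSDB table; the chain, match, and helper names here are assumptions.

```python
def classify_rule(chain, match, classid):
    """iptables equivalent of a CLASSIFY rule targeting a qdisc
    class id (the "<tag>class" variant). Illustrative sketch."""
    return (f"iptables -t mangle -A {chain} {match} "
            f"-j CLASSIFY --set-class {classid}")

def mark_rule(chain, match, skb_mark):
    """iptables equivalent of an SKB-mark based classification rule
    (the "<tag>" / mark variant). Illustrative sketch."""
    return (f"iptables -t mangle -A {chain} {match} "
            f"-j MARK --set-mark {skb_mark:#x}")
```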

OVSDB schema

To support the QoS schema, API, and implementation extensions, the scope of this document also includes a detailed overview of the existing OpenSync QoS-related schema definitions, implementation aspects, and controller-to-OpenSync interactions.

IP_Interface

Name

Type

Description

qos

Reference to Interface_QoS

[optional]

0 or 1 reference to Interface_QoS.

A reference to Interface_QoS set here means a QoS definition is attached to the interface.

ingress_classifier

array of references to

Interface_Classifier

0 or more references to Interface_Classifier.

Each reference to Interface_Classifier set here means one ingress traffic control filtering definition (tc-filter) defined for this interface.

egress_classifier

array of references to Interface_Classifier

 

0 or more references to Interface_Classifier.

Each reference to Interface_Classifier set here means one egress traffic control filtering definition (tc-filter) defined for this interface.

Interface_QoS

Name

Type

Description

queues

array of references to Interface_Queue

0 or more references to Interface_Queue.

At most 1 of queues or lnx_queues is allowed to be set.

lnx_queues

[NEW FIELD]

array of references to Linux_Queue

0 or more references to Linux_Queue.

At most 1 of queues or lnx_queues fields is allowed to be configured.

other_config

key/value

 

adaptive_qos

key/value

Optional Adaptive QoS (per UL/DL) configuration.

Note: Any adaptive QoS configuration is always applied on top of some base QoS configuration.

Defined keys:

  • direction : {"UL", "DL"}

  • min_rate: Minimum bandwidth (Kbit/s).

  • base_rate: Steady-state bandwidth (Kbit/s).

  • max_rate: Maximum bandwidth (Kbit/s).

Either none or exactly 2 interfaces can be configured with Interface_QoS->adaptive_qos defined; one for UL, one for DL.

The precondition for Adaptive QoS is a base QoS definition (Linux_Queue, i.e. Interface_QoS->lnx_queues) for both of these interfaces, and currently this base QoS must be cake, as the OpenSync reference implementation uses the cake-autorate script.

See Adaptive Shaper chapter for more details.

In the first stage, the “active” latency measurement method is assumed (and the only one supported), so there is no need to define the latency_estimation key.

status

enum

{ "success", "error"}

Interface_Queue

Name

Type

Description

priority

integer

For the Linux qdisc implementation, priority is mapped to HTB prio. A lower value means higher priority.

For the BCM archer implementation: The archer queues have implicit priorities based on qid. BCM qid=0 has the lowest priority, BCM qid=30 the highest priority. OpenSync sorts Interface_Queue rows by the priority field and assigns BCM qids in that order; the row with the lowest priority value configures qid=0 (and thus a queue with the lowest priority), and so on.

bandwidth

integer

Maximum rate this class and all its children are guaranteed. Mandatory.

kbit/s

bandwidth_ceil

[NEW FIELD]

integer

[optional]

Maximum rate at which a class can send, if its parent has bandwidth to spare. Defaults to the configured rate, which implies no borrowing.

kbit/s

tag

string

 

Openflow tag name.

To be used with dynamic Netfilter rules for packet classification, either using CLASSIFY (qdisc class ids) or MARK (SKB marks).

OpenSync implementation assigns queue ids (mapped either to Linux qdisc class ids or archer queue ids) in incremental order starting with 0 and in a way that queues with the same tag get assigned the same queue id.

OpenSync also generates an SKB mark value, either by its own logic or using the underlying platform's APIs (BCM).

The QoS manager will populate the Openflow_Tag row with name=="<tag>" (device_value) with the generated SKB mark value for this queue, and/or the row with name=="<tag>class" (note the “class” string appended) with the assigned queue id (qdisc class id).

other_config

key/value

TODO: document the “shared” option.

mark

integer

[optional]

[STATUS field, not config]

Status field. SKB mark generated by OpenSync (or underlying platform).

May be used by the controller to later configure appropriate Netfilter rules for packet classification. Used on platforms where SKB marks are used for traffic classification (e.g. BCM). However, note: The current controller logic is to simply use Openflow tag for this. See Interface_Queue->tag.
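The tag-to-queue-id assignment described for Interface_Queue can be sketched as follows: ids start at 0 and increase, and queues sharing the same tag get the same id. The row ordering and helper name are illustrative assumptions.

```python
def assign_queue_ids(queues):
    """Sketch of queue-id assignment: incremental ids starting at 0,
    with queues sharing a tag sharing an id. Returns {tag: queue_id}."""
    ids = {}
    next_id = 0
    for q in queues:
        if q["tag"] not in ids:
            ids[q["tag"]] = next_id
            next_id += 1
    return ids
```

The controller can then read the `<tag>class` Openflow_Tag value to know which class id its CLASSIFY rules should target.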

Linux_Queue

Introduce a new OVSDB table Linux_Queue to configure traffic control (shaping and scheduling) in the Linux kernel. This applies to platforms that support Linux traffic control and allow integration with hardware acceleration engines or platform-specific features. The table supports configuration of Linux queue disciplines, including both native qdiscs and platform-specific custom qdiscs (e.g., NSS HTB, NSS FQ_CoDel), enabling hierarchical qdisc setups.

Note: Filters (tc-filter), which some configurations may need in combination with qdisc definitions (for instance when IFB interfaces that do not support Netfilter are involved), are already supported by OpenSync via Interface_Classifier, although not in a fully generic way; the existing API/implementation could be fixed/extended.

Name

Type

Description

type

enum

{ "qdisc", "class" }

  • qdisc if this is a qdisc definition, either classless or classful.

  • class if this is a class definition for a classful qdisc (defined in another row).

 

name

string

{ "htb", "fq_codel", "cake", "nsshtb", "nss_fq_codel", ...}

The qdisc name (i.e., its type).

Alternative option: make this an enum. However, keeping this field a string provides more direct flexibility for the future, as it can map directly to any Linux qdisc supported on a platform.

parent_id

string

 

Parent qdisc or class ID or special value “root”.

  • ID is in format "major:minor". (hexadecimal numbers, max 16 bit, without the 0x prefix).

id

string

  • if type==qdisc then this is a "handle" in the format "major:" which specifies the qdisc id.

  • if type==class then this is a "class id" in the format "major:minor". Classes residing under a qdisc share the qdisc's major number, but each has a separate minor number (the actual "class id").

If not specified (or zero, which is equivalent to unspecified), IDs can be automatically assigned by the operating system.

However, in most cases it makes sense to assign an ID explicitly, so that hierarchical definitions are possible, so that the controller can attach filters (Interface_Classifier) to a qdisc/class with a known ID, and so that the controller can configure Netfilter CLASSIFY rules (Netfilter), where it needs to know the exact class id to classify into.

It would probably make sense for the OpenSync implementation to enforce a nonzero qdisc ID, so that well-defined configurations are enforced.

params

string

[optional]

qdisc-specific parameters

status

enum

[optional]

[STATUS field, not config]

{ "success", "error" }

  • “success” if the qdisc or class definition was successfully configured, “error” otherwise.

Linux_Queue rows can be configured in any order; the OpenSync implementation resolves/infers the appropriate order of tc-qdisc/tc-class commands based on the id and parent_id fields.
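One way to resolve that ordering is a simple dependency pass over the rows, emitting each row only once its parent is already known. This is a sketch under assumed row fields, not the actual implementation.

```python
def order_linux_queue_rows(rows):
    """Order Linux_Queue-like rows so each qdisc/class comes after its
    parent. Each row has 'type', 'name', 'id' and 'parent_id'; the
    root qdisc uses parent_id == "root". Illustrative sketch."""
    known = {"root"}          # ids already emitted (plus the root anchor)
    ordered, pending = [], list(rows)
    while pending:
        progress = False
        for row in list(pending):
            if row["parent_id"] in known:
                ordered.append(row)
                known.add(row["id"])
                pending.remove(row)
                progress = True
        if not progress:
            raise ValueError("dangling parent_id reference")
    return ordered
```

The sorted result maps directly onto a sequence of tc-qdisc/tc-class commands that can be applied top-down.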

Interface_Classifier

Name

Type

Description

priority

integer

Filtering rule priority.

match

string

Match definition. In tc-filter format.

action

string

Action definition. In tc-filter format.

parent_id

[NEW FIELD]

string

Parent qdisc or class ID (or special value “root”) for this filter to attach to.

  • ID is in format "major:minor". (hexadecimal numbers, max 16 bit, without the 0x prefix).

Note: Not currently used; a parent_id of "1:" is assumed.

token

string

 

status

enum

{"success", "error"}

AdaptiveQoS

  • NEW OVSDB table.

  • Allows configuring additional global adaptive-shaping parameters (i.e. not per interface, as those are defined via the Interface_QoS->adaptive_qos field), such as a custom reflector list and other fine-tuning parameters.

Name

Type

Description

reflectors_list

string

Custom reflectors list. IPv4 or IPv6 addresses separated with whitespace.

rand_reflectors

boolean

Enable or disable randomization of reflectors at startup

num_pingers

integer

Number of pingers to maintain

ping_interval

integer

Ping interval, in milliseconds

active_thresh_kbps

integer

Threshold in Kbit/s below which DL/UL is considered idle

latency_measure_type

enum

{"active", "passive"}. Only “active” is currently supported.

other_config

key/value

Reserved for possible future use.

The AdaptiveQoS OVSDB table is optional and does not by itself enable Adaptive QoS; that is enabled via the Interface_QoS->adaptive_qos map configured on a chosen UL and a chosen DL interface. The AdaptiveQoS table only optionally configures global (not per-UL/DL-interface) Adaptive QoS parameters, for example when a CSP wants to use its own custom ping reflector list.

Adaptive shaper configuration parameters

Adaptive QoS is optional and can be configured on top of an existing base QoS with cake, configured via IP_Interface/Interface_QoS/Linux_Queue for a UL and for a DL interface.

Per-interface (UL and DL) Adaptive QoS parameters are configured via the Interface_QoS->adaptive_qos field, which is a key/value map. See the Interface_QoS OVSDB table section for the description of the defined configuration parameters. Adaptive QoS is thus enabled when Interface_QoS->adaptive_qos is defined for a UL and for a DL interface.

Optionally, the AdaptiveQoS table then allows configuring custom global (not per interface) Adaptive QoS configuration parameters.
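The per-interface constraints described above (either no adaptive_qos maps at all, or exactly two, one UL and one DL) could be validated as in the following sketch. The min_rate <= base_rate <= max_rate check is a natural sanity check inferred from the parameter descriptions, not explicitly stated above.

```python
def validate_adaptive_qos(configs):
    """Check Interface_QoS->adaptive_qos constraints (sketch).

    `configs` maps interface name -> adaptive_qos key/value map, with
    OVSDB-style string values. Returns True if the configuration is
    consistent."""
    if not configs:
        return True  # adaptive QoS simply disabled
    if len(configs) != 2:
        return False  # must be exactly one UL and one DL interface
    if sorted(c["direction"] for c in configs.values()) != ["DL", "UL"]:
        return False
    # Rates must form a consistent triple (assumed sanity check).
    return all(int(c["min_rate"]) <= int(c["base_rate"]) <= int(c["max_rate"])
               for c in configs.values())
```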

Southbound API

Requirements

  • Acceleration is not supported at this stage.

  • Acceleration must be turned off (skipaccel rule pushed by the controller).

  • Adaptive QoS requires cake scheduler support: sch_cake is in the upstream kernel since version 4.19.