For lower speeds we need to adjust the bandwidth
calculation for QoS to work on AN7581 (but not on EN7523).
Also make sure we clear old HW NAT entries when the uplink
bandwidth changes so that QoS takes effect immediately.
Create a centralized setup for ebtables.
This is necessary to guarantee the order
in which chains are created.
Right now it provides a 1:1 drop-in
replacement for the current behaviour,
and no changes are needed in the short term.
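
A minimal sketch of what such a centralized setup could look
like, assuming a single shell helper that creates every
user-defined chain before any jump references it; the chain
names below are illustrative, not the ones used in the tree.

    #!/bin/sh
    # Illustrative only: create all user-defined chains in one place,
    # so the order is deterministic no matter which feature loads
    # its rules first.
    ebtables_setup() {
        # User-defined chains (names are hypothetical)
        ebtables -t filter -N qos_classify 2>/dev/null
        ebtables -t filter -N lan_isolation 2>/dev/null

        # Hook them into the built-in FORWARD chain in a fixed order
        ebtables -t filter -A FORWARD -j qos_classify
        ebtables -t filter -A FORWARD -j lan_isolation
    }

    ebtables_setup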
Instead of using the MAC to do rate limiting we use
the Frame Engine. This has the benefit of being
a more universal solution and will work with PON
without needing to implement APIs for using the
MAC to do ingress rate limiting.
This also works fine when the integrated switch
is used as the WAN.
When classifying on DSCP values we need to ensure
that the values are included in the hash
for the L3 HW NAT; otherwise flows that are identical
apart from their DSCP values will end up with
the same QoS priority and queue.
The L3 HW NAT matches flows based on the IP header 5-tuple.
However, if we are classifying on p-bits at the same time
and want to use this for QoS, we need to make sure
to add a VIP packet matcher that sends this information to
the PPE for hashing the flow.
For the QoS engine to know how much bandwidth
the uplink has, we need to set this with 'qosrule'
every time the uplink changes; otherwise SP scheduling
will fail.
This fix covers the cases where ae_wan is used for ethernet
or fiber, and where PON is used as the uplink.
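
A rough sketch of the kind of hook this implies, assuming the
uplink speed can be read from sysfs; the qosrule invocation is
a placeholder, since the real option names are vendor specific.

    #!/bin/sh
    # Hypothetical hook: re-apply the uplink bandwidth whenever
    # the uplink changes.
    UPLINK="$1"                      # e.g. ae_wan or the PON netdev

    # Link speed in Mbit/s as reported by the kernel; a PON uplink
    # may need the provisioned rate from elsewhere instead.
    SPEED=$(cat "/sys/class/net/$UPLINK/speed" 2>/dev/null)
    [ -n "$SPEED" ] && [ "$SPEED" -gt 0 ] || exit 0

    # Placeholder syntax: hand the new bandwidth to the QoS engine
    # so SP scheduling has a correct reference rate.
    qosrule set uplink-bandwidth "${SPEED}mbit"   # hypothetical options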
On GenXOS we have used this for some time to avoid running
into issues when reload scripts run at the same time.
Add the same functionality to feeds/iopsys.
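
Assuming the mechanism in question is plain lock-file
serialization, a minimal sketch could look like this
(paths and names are illustrative only):

    #!/bin/sh
    LOCK=/var/lock/qos-reload.lock

    # Take an exclusive lock so concurrent reload scripts run one
    # at a time instead of racing each other.
    exec 9>"$LOCK"
    flock 9

    /etc/init.d/qos reload    # the serialized operation
    # The lock is released when the script exits and fd 9 closes.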
In the default qos config on qcm the burst size is 1500, while
on other targets it is 0. This is incorrect, since iowrt uci
defaults should be consistent across targets; fixed with this
commit.
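
A hedged sketch of the kind of uci-defaults snippet this
suggests; the package, section and option names (qos, queue,
burst_size) are assumptions about the schema, not the real one.

    #!/bin/sh
    # Illustrative uci-defaults snippet: force the same burst size
    # on every target.
    . /lib/functions.sh

    set_burst() {
        local cfg="$1"
        uci -q set "qos.$cfg.burst_size=0"   # 0, as on non-qcm targets
    }

    config_load qos
    config_foreach set_burst queue
    uci commit qos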
Problem description:
Two problems seen:
1. The queues on a port are by default set up with a maximum rate
   of 1G.
2. The default qos uci config differs between the ipq and brcm
   platforms.
Fix:
1. Update the queue setup so it does not impose a rate limit, by
   setting the maximum rate to the maximum rate supported by the
   port.
2. Update the uci default script to generate the same config on
   both platforms.
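
As a sketch of fix 1, assuming the queue ceiling is expressed in
kbit/s and the port speed can be read from sysfs (the option
name is hypothetical):

    #!/bin/sh
    PORT=eth0
    SPEED=$(cat "/sys/class/net/$PORT/speed")   # link speed in Mbit/s

    # Pin the queue ceiling to the port's own maximum instead of a
    # fixed 1G, so in practice no extra rate limit is imposed.
    uci -q set "qos.${PORT}_q0.max_rate=$((SPEED * 1000))"
    uci commit qos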