Hardware NAT For LEDE

I used the LuCI option "Always use 40MHz channels even if the secondary channel overlaps. Using this option does not comply with IEEE 802.11n-2009!" to force 40 MHz channel bonding, as I have no neighbors. The bottleneck is clearly on the client side:
[ 4] 0.00-80.00 sec 917 MBytes 96.2 Mbits/sec sender
[ 4] 0.00-80.00 sec 917 MBytes 96.2 Mbits/sec receiver
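As a sanity check on iperf3's units (it counts transfer in MBytes of 2^20 bytes but reports bandwidth in decimal Mbits/sec), the summary line above can be reproduced with a one-liner; the numbers are taken straight from the transcript:

```shell
# Reproduce iperf3's summary: 917 MBytes (2^20 bytes each) over 80 s,
# reported as decimal Mbits/sec.
awk 'BEGIN { printf "%.1f Mbits/sec\n", 917 * 1048576 * 8 / 80 / 1e6 }'
```

That prints 96.2 Mbits/sec, matching the report.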

You haven't tested NAT yet

Anyone try this on a TP-Link C7 yet?

If you want it, I can build it and you can do the testing.

That would be great (built-in LuCI preferred). I can do some NAT testing with the Archer C7 v2.

Same here...

Works fine, thanks

Hardware used: SSDs and gigabit Ethernet on both clients.
First NAT test result: 400 Mbit/s at a load average of 1.2.

Unfortunately I forgot to run tests before flashing the NAT image, and somehow the bandwidth is limited by factors other than the router (client to client over a switch gives the same results).
I'll continue testing when I have more time...

Important: does the SSDK initialize properly in dmesg?

These are the five lines mentioning ssdk:

[ 58.181843] ssdk_plat_init start
[ 58.185618] Register QCA PHY driver
[ 58.191725] PHY ID is 0x4dd034
[ 58.291231] qca probe f1 phy driver succeeded!
[ 58.295749] qca-ssdk module init succeeded!
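On the router itself, a quick way to pull those lines out is `dmesg | grep -i ssdk`; here is a self-contained sketch of the same check, run against a copy of the last line of the excerpt above instead of the live log:

```shell
# On the router you would pipe the real log:
#   dmesg | grep -q 'ssdk module init succeeded'
# Here we grep a copy of the dmesg line quoted above instead.
line='[   58.295749] qca-ssdk module init succeeded!'
if printf '%s\n' "$line" | grep -q 'ssdk module init succeeded'; then
    echo 'SSDK initialized'
else
    echo 'SSDK init message missing'
fi
```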

LEDE-17.01/April-2017-Image Add Archer C7 v2 with Hardware NAT
Bootloop after PPPoE connection.

Are you able to get any logs at all (I know it is difficult with bootloop)?
I do not have PPPoE so I cannot do any testing on that.

I cannot get logs because the router reboots immediately after the PPPoE connection is established. I tried a factory reset, and the router works until the PPPoE setup.

PS: But before the PPPoE setup, the log does show the lines mentioned here.

@ReMicroN:

r00t has some great advice for debugging bootloop issues in another thread:

I connected over SSH and ran logread -f to watch the logs go by. PuTTY keeps the last messages open and ready to copy when the router disconnects.

kernel.panic might also be one of his custom sysctl settings; add it yourself if it's not in this version of the firmware.
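For reference, kernel.panic is the standard Linux sysctl controlling how many seconds the kernel waits after a panic before rebooting; 0 means hang forever, which at least stops the bootloop so the console output stays readable. A hedged sketch — the value 10 is just an example:

```shell
# Reboot 10 s after a kernel panic (set to 0 to hang instead of rebooting):
sysctl -w kernel.panic=10

# Persist it across reboots (LEDE/OpenWrt reads /etc/sysctl.conf at boot):
echo 'kernel.panic=10' >> /etc/sysctl.conf
```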

@gwlim I ran some Archer C7 tests too. It looks like HW NAT is active and makes a difference, but right now I'm seeing a performance degradation. [tests removed because the degraded performance was caused by other factors]

NAT with other firmware - 484 Mbit/s
NAT on the April NAT build - 395 Mbit/s, with a lot of jitter in the speed

Direct switch connection - 891 Mbit/s - probably not relevant, but doing the extra test was easy

C7 Direct Switch-Switch

PS C:\Users\denis\Desktop\iperf-3.1.3-win64> .\iperf3.exe iperf -c 10.5.1.15
Connecting to host 10.5.1.15, port 5201
[  4] local 10.5.1.184 port 51425 connected to 10.5.1.15 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   106 MBytes   889 Mbits/sec
[  4]   1.00-2.00   sec   105 MBytes   882 Mbits/sec
[  4]   2.00-3.00   sec   106 MBytes   892 Mbits/sec
[  4]   3.00-4.00   sec   106 MBytes   893 Mbits/sec
[  4]   4.00-5.00   sec   106 MBytes   893 Mbits/sec
[  4]   5.00-6.00   sec   106 MBytes   892 Mbits/sec
[  4]   6.00-7.00   sec   106 MBytes   892 Mbits/sec
[  4]   7.00-8.00   sec   106 MBytes   893 Mbits/sec
[  4]   8.00-9.00   sec   106 MBytes   891 Mbits/sec
[  4]   9.00-10.00  sec   106 MBytes   891 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.04 GBytes   891 Mbits/sec                  sender
[  4]   0.00-10.00  sec  1.04 GBytes   891 Mbits/sec                  receiver

C7 Without Hardware NAT

PS C:\Users\denis\Desktop\iperf-3.1.3-win64> .\iperf3.exe iperf -c 10.5.1.15
Connecting to host 10.5.1.15, port 5201
[  4] local 192.168.1.184 port 50634 connected to 10.5.1.15 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  59.8 MBytes   501 Mbits/sec
[  4]   1.00-2.00   sec  58.5 MBytes   491 Mbits/sec
[  4]   2.00-3.00   sec  59.5 MBytes   499 Mbits/sec
[  4]   3.00-4.00   sec  57.4 MBytes   481 Mbits/sec
[  4]   4.00-5.00   sec  59.6 MBytes   500 Mbits/sec
[  4]   5.00-6.00   sec  58.2 MBytes   489 Mbits/sec
[  4]   6.00-7.00   sec  49.4 MBytes   414 Mbits/sec
[  4]   7.00-8.00   sec  57.1 MBytes   479 Mbits/sec
[  4]   8.00-9.00   sec  59.4 MBytes   499 Mbits/sec
[  4]   9.00-10.00  sec  58.2 MBytes   489 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   577 MBytes   484 Mbits/sec                  sender
[  4]   0.00-10.00  sec   577 MBytes   484 Mbits/sec                  receiver

C7 With Hardware NAT

<this test has been removed because I didn't account for other traffic on the network and it was inaccurate>

I suspect the crash has to do with PPPoE multilink not being selected: I remember patching PPPoE to export the multilink symbols, but the default PPP in LEDE is built without multilink. I will build one with multilink later.

As for the jitter issue, may I know which firmware you tested?
NAT with other firmware - 484 Mbit/s <- standard LEDE build, or my LEDE patch but without NAT?
NAT on the April NAT build - 395 Mbit/s, with a lot of jitter <- anything else that might impact performance, like leaving the LuCI management page open while doing the benchmark?

I am pretty sure NAT is not active at all until someone knows the ssdk_sh commands (from the ssdk-shell).
The relevant entries should be visible in ssdk_sh if NAT is properly enabled.

Test builds with multilink enabled in ppp:
https://github.com/gwlim/Openwrt_Firmware/tree/master/Mutilink-Test

It was r00t's custom C7 build, which is trunk from Apr 8 with a bunch of patches and optimizations, but none of your HW NAT patches. Now that I think about it, that build might be showing better-than-trunk performance because of the patches.

It's possible that there was something else going on, but would that make a 100 Mbit/s difference?

.....

Now that I think about it, my internet line is 100 Mbit/s, so maybe my laptop decided to download updates while I was running the second test :expressionless:

I did check dmesg and saw the ssdk initialization messages in there, so that part is definitely working.

Reading through the whole thread again, it sounds like the process to enable HW NAT hasn't been totally figured out yet.

I'm happy to poke around in ssdk_sh and provide feedback on what I see for the C7, would love to get some guidance on what I'm looking for though.

Actually, r00t incorporated some of my patches.

lede-ar71xx-generic-tl-wdr3600-v1-squashfs-factory.bin Upload April LEDE Image

also crashes on PPPoE