Hardware NAT For LEDE

Actually, I did binwalk it.
If you check the code snippets komawoyo posted from ipq40xx.c, it explains everything.
Vendors embed the ssdk_sh (QCA shell program) commands in their binaries and compile them, so it all ends up as a binary blob and we can't see the list of commands required to configure NAT.
I would urge you guys to try the ssdk_sh command shell.
Type ssdk_sh, then Enter.
Then, in the shell, type nat and then ? and it will show you the subsequent command options.
ssdk_sh is actually a very powerful switch tool, because all the switch options are inside, including port mirroring, PID, VID, etc.
There are multiple NAT modes, like full cone, restricted cone, symmetric, etc.
We need to know:
1) What type of NAT to use and how to set the mode correctly
2) The WAN set command
3) The LAN set command
4) The sequence of commands
5) How to forward unknown packets that NAT cannot handle to the upper layers
6) How to test that it works
Meanwhile, I'm struggling with the WDR4900.

Here are all the commands for ssdk_sh; it's pretty easy to follow by looking at the source.

Here's a snippet

    /* NAT */
#ifdef IN_NAT
    {
        "nat", "config nat",
        {
            {"natentry", "set", "add nat entry", "", SW_API_NAT_ADD, NULL},
            {"natentry", "add", "add nat entry", "", SW_API_NAT_ADD, NULL},
            {"natentry", "del", "del nat entry", "<del_mode>", SW_API_NAT_DEL, NULL},
            {"natentry", "next", "next nat entry", "<next_mode>", SW_API_NAT_NEXT, NULL},
            {"natentry", "bindcnt", "bind counter to nat entry", "<nat entry id> <cnt id> <enable|disable>", SW_API_NAT_COUNTER_BIND, NULL},
            {"naptentry", "set", "add napt entry", "", SW_API_NAPT_ADD, NULL},
            {"naptentry", "add", "add napt entry", "", SW_API_NAPT_ADD, NULL},
            {"naptentry", "del", "del napt entry", "<del_mode>", SW_API_NAPT_DEL, NULL},
            {"naptentry", "next", "next napt entry", "<next_mode>", SW_API_NAPT_NEXT, NULL},
            {"naptentry", "bindcnt", "bind counter to napt entry", "<napt entry id> <cnt id> <enable|disable>", SW_API_NAPT_COUNTER_BIND, NULL},
            {"natstatus", "set", "set nat status", "<enable|disable>", SW_API_NAT_STATUS_SET, NULL},
            {"naptstatus", "set", "set napt status", "<enable|disable>", SW_API_NAPT_STATUS_SET, NULL},
            {"nathash", "set", "set nat hash mode", "<flag>", SW_API_NAT_HASH_MODE_SET, NULL},
            {"naptmode", "set", "set napt mode", "<fullcone|strictcone|portstrict|synmatric>", SW_API_NAPT_MODE_SET, NULL},
            {"prvbaseaddr", "set", "set nat prv base address", "<ip4 addr>", SW_API_PRV_BASE_ADDR_SET, NULL},
            {"prvaddrmode", "set", "set nat prv address map mode", "<enable|disable>", SW_API_PRV_ADDR_MODE_SET, NULL},
            {"pubaddr", "set", "add pub address", "", SW_API_PUB_ADDR_ENTRY_ADD, NULL},
            {"pubaddr", "add", "add pub address", "", SW_API_PUB_ADDR_ENTRY_ADD, NULL},
            {"pubaddr", "del", "del pub address", "<del_mode>", SW_API_PUB_ADDR_ENTRY_DEL, NULL},
            {"natunksess", "set", "set nat unkown session command", "<forward|drop|cpycpu|rdtcpu>", SW_API_NAT_UNK_SESSION_CMD_SET, NULL},
            {"prvbasemask", "set", "set nat prv base mask", "<ip4 mask>", SW_API_PRV_BASE_MASK_SET, NULL},
			{"global", "set", "set global nat function", "<enable|disable> <enable:sync counter|disable:unsync counter>", SW_API_NAT_GLOBAL_SET, NULL},
			{"flowentry", "set", "add flow entry", "", SW_API_FLOW_ADD, NULL},
			{"flowentry", "add", "add flow entry", "", SW_API_FLOW_ADD, NULL},
            {"flowentry", "del", "del flow entry", "<del_mode>", SW_API_FLOW_DEL, NULL},
			{"flowentry", "next", "next flow entry", "<next_mode>", SW_API_FLOW_NEXT, NULL},
			{"flowcookie", "set", "set flow cookie", "", SW_API_FLOW_COOKIE_SET, NULL},
			{"flowrfs", "set", "set flow rfs", "<action>", SW_API_FLOW_RFS_SET, NULL},
            {NULL, NULL, NULL, NULL, (int)NULL, NULL}/*end of desc*/
        },
    },
#endif
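
Reading that table, each entry is basically command, sub-command, action, then the usage string, so my rough guess at a minimal configuration sequence would be something like the lines below. This is untested: I'm assuming ssdk_sh takes the command on its command line the same way it does inside the shell, the argument order and the example addresses are taken from the usage strings above, and entries with an empty usage string (natentry, naptentry, pubaddr, flowentry) prompt for their fields interactively, so I left them out.

    # Untested guess assembled from the command table above
    ssdk_sh nat global set enable enable          # global NAT function, sync counters
    ssdk_sh nat natstatus set enable              # enable NAT
    ssdk_sh nat naptstatus set enable             # enable NAPT
    ssdk_sh nat naptmode set fullcone             # pick the NAPT mode
    ssdk_sh nat prvbaseaddr set 192.168.1.0       # private (LAN) base address, example value
    ssdk_sh nat prvbasemask set 255.255.255.0     # private (LAN) base mask, example value
    ssdk_sh nat natunksess set rdtcpu             # punt unknown sessions to the CPU
    # the public (WAN) address goes in with "nat pubaddr add", which prompts for
    # its fields, and the correct ordering still has to be worked out and tested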

The April release is out in the same repo.
I updated some stuff; see the changelog in the git repo.
I also added LEDE for the WR1043ND with all applicable optimizations.

Which hardware versions of the WR1043ND are supported? Is v4 supported too? It has a slightly different CPU than v2 and v3.

The WR1043ND v1, which I have, based on my mips24kc optimizations.
If you want the WR1043ND v2 or later, which uses a mips74k core, I can build on request but can't test it.

I didn't know the WR1043ND v1 even had HNAT support in hardware. I didn't see any option for that in the stock firmware. I thought only v2 and newer have HNAT capabilities.

Nope, it doesn't, but not all my patches are about hardware NAT; some are MIPS optimizations. I build the WR1043ND v1 because I still have the hardware and it has fantastic wireless-N radios.

You can use my mips74k git patcher to build for the WR1043ND v2 and above.

Great job. I installed the April binaries on a WDR3600 v1.5.
Everything seems to be fine.

Is there anything special needed to enable hardware NAT after flashing?
How can I test throughput? Using iperf3?

SSH into the router, then type
ssdk_sh
and hit Enter.
Then type ? and hit Enter.
Try to understand and guess the NAT commands.
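
Something like this (the address is just an example, and I'm not showing the output here):

    ssh root@192.168.0.1   # example address, use your router's LAN IP
    ssdk_sh                # enter the QCA switch shell
    # then, inside the shell, type ? to list the command groups and
    # nat followed by ? to see the NAT sub-commands, as described above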

OK thanks.

I tried a test from a MacOSX iperf3 client over gigabit Ethernet to the WDR3600, with an iperf3 server listening on port 5201:

    Accepted connection from 192.168.0.101, port 52968
    [  5] local 192.168.0.1 port 5201 connected to 192.168.0.101 port 52969
    [ ID] Interval           Transfer     Bandwidth
    [  5]   0.00-1.00   sec  25.1 MBytes   210 Mbits/sec                  
    [  5]   1.00-2.01   sec  24.9 MBytes   207 Mbits/sec                  
    [  5]   2.01-3.00   sec  24.9 MBytes   210 Mbits/sec                  
    [  5]   3.00-4.00   sec  24.1 MBytes   202 Mbits/sec                  
    [  5]   4.00-5.00   sec  25.1 MBytes   210 Mbits/sec                  
    [  5]   5.00-6.00   sec  24.9 MBytes   209 Mbits/sec                  
    [  5]   6.00-7.01   sec  25.1 MBytes   210 Mbits/sec                  
    [  5]   7.01-8.00   sec  24.6 MBytes   207 Mbits/sec                  
    [  5]   8.00-9.00   sec  24.4 MBytes   205 Mbits/sec                  
    [  5]   9.00-10.00  sec  24.8 MBytes   209 Mbits/sec                  
    [  5]  10.00-10.02  sec   452 KBytes   175 Mbits/sec                  
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth
    [  5]   0.00-10.02  sec  0.00 Bytes  0.00 bits/sec                  sender
    [  5]   0.00-10.02  sec   248 MBytes   208 Mbits/sec                  receiver

So to my understanding, there is no NAT involved here ...

To make an actual test, I probably need to set up VLANs. Do packets need NAT to go from one VLAN to another?

You are testing wireless, right?
192.168.0.1 --> 192.168.0.101

NAT is:
192.168.0.1 --> 175.X.X.X (a different subnet)

Thanks gwlim.
First, I was testing the wired network, not wireless. Wireless comes next ...

Wired should saturate at gigabit speeds, not 200 Mbps,
unless you are using some USB Ethernet dongle that limits bandwidth.

I am connecting from MacOSX Sierra (latest) to the WDR3600 with a direct Ethernet link.

On WDR3600:
$iperf3 -s

On MacOSX:
$iperf -c 192.168.0.1

Now testing under Linux to see if it gets better.

You don't run iperf on the router.
If you do that, you are putting load on the router itself.
You run iperf on two client computers.
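
For example, with one PC on the WAN side and one PC on the LAN side of the router (the address and duration are just placeholders):

    # on the PC plugged into the router's WAN port:
    iperf3 -s
    # on the PC on the LAN side, pointed at the WAN-side PC:
    iperf3 -c 10.0.0.2 -t 30
    # the router only routes/NATs the traffic between the two hosts, so the
    # result reflects its forwarding throughput, not its own CPU as an endpoint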

Understood. Testing again.

I wrote this about two years back; I think it is still applicable:
https://wiki.openwrt.org/doc/howto/benchmark.nat

Thanks. Following your recommendation, I am now testing throughput using iperf3 from host to host:

  1. Wired network
    ./iperf3 -c 192.168.0.100
    Connecting to host 192.168.0.100, port 5201
    [ 4] local 192.168.0.101 port 53099 connected to 192.168.0.100 port 5201
    [ ID] Interval Transfer Bandwidth
    [ 4] 0.00-1.00 sec 113 MBytes 947 Mbits/sec
    [ 4] 1.00-2.00 sec 112 MBytes 940 Mbits/sec
    [ 4] 2.00-3.00 sec 112 MBytes 940 Mbits/sec
    [ 4] 3.00-4.00 sec 112 MBytes 940 Mbits/sec
    [ 4] 4.00-5.00 sec 112 MBytes 940 Mbits/sec
    [ 4] 5.00-6.00 sec 112 MBytes 940 Mbits/sec
    [ 4] 6.00-7.00 sec 112 MBytes 940 Mbits/sec
    [ 4] 7.00-8.00 sec 112 MBytes 940 Mbits/sec
    [ 4] 8.00-9.00 sec 112 MBytes 940 Mbits/sec
    [ 4] 9.00-10.00 sec 112 MBytes 940 Mbits/sec

    [ ID] Interval Transfer Bandwidth
    [ 4] 0.00-10.00 sec 1.09 GBytes 941 Mbits/sec sender
    [ 4] 0.00-10.00 sec 1.09 GBytes 941 Mbits/sec receiver

Works great!

  2. Using WiFi (not AC):
    ./iperf3 -c 192.168.0.100
    Connecting to host 192.168.0.100, port 5201
    [ 4] local 192.168.0.134 port 53109 connected to 192.168.0.100 port 5201
    [ ID] Interval Transfer Bandwidth
    [ 4] 0.00-1.00 sec 8.46 MBytes 71.0 Mbits/sec
    [ 4] 1.00-2.00 sec 8.63 MBytes 72.4 Mbits/sec
    [ 4] 2.00-3.00 sec 7.69 MBytes 64.4 Mbits/sec
    [ 4] 3.00-4.00 sec 7.70 MBytes 64.6 Mbits/sec
    [ 4] 4.00-5.00 sec 8.42 MBytes 70.7 Mbits/sec
    [ 4] 5.00-6.00 sec 8.06 MBytes 67.6 Mbits/sec
    [ 4] 6.00-7.00 sec 6.77 MBytes 56.7 Mbits/sec
    [ 4] 7.00-8.00 sec 7.93 MBytes 66.6 Mbits/sec
    [ 4] 8.00-9.00 sec 8.75 MBytes 73.3 Mbits/sec
    [ 4] 9.00-10.00 sec 8.47 MBytes 71.0 Mbits/sec

    [ ID] Interval Transfer Bandwidth
    [ 4] 0.00-10.00 sec 80.9 MBytes 67.8 Mbits/sec sender
    [ 4] 0.00-10.00 sec 80.9 MBytes 67.8 Mbits/sec receiver

And from Android: only 45 Mbits/sec.

For wireless N there are 20 MHz and 40 MHz HT (fat channel) modes.

So for a 300 Mbps wireless link rate, if you enable the fat channel you can obtain a maximum of about 150 Mbps of actual data throughput.
If you do not enable the fat channel, the link rate itself is halved, so you get about 150/2 = 75 Mbps.

It also depends on whether you have noisy neighbours, as well as whether your client network adapter supports the fat channel.

Thanks, there are no neighbouring networks here.

The best I can get on MacOSX is:
144.4 Mbit/s, 20MHz, MCS 15, Short GI

It results in:
./iperf3 -c 192.168.0.100
Connecting to host 192.168.0.100, port 5201
[ 4] local 192.168.0.134 port 53443 connected to 192.168.0.100 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 11.6 MBytes 97.6 Mbits/sec
[ 4] 1.00-2.00 sec 12.3 MBytes 103 Mbits/sec
[ 4] 2.00-3.00 sec 11.3 MBytes 94.6 Mbits/sec
[ 4] 3.00-4.00 sec 11.9 MBytes 99.9 Mbits/sec
[ 4] 4.00-5.00 sec 11.7 MBytes 98.1 Mbits/sec
[ 4] 5.00-6.00 sec 11.9 MBytes 99.6 Mbits/sec
[ 4] 6.00-7.00 sec 11.9 MBytes 99.8 Mbits/sec
[ 4] 7.00-8.00 sec 11.2 MBytes 93.8 Mbits/sec
[ 4] 8.00-9.00 sec 11.3 MBytes 94.7 Mbits/sec
[ 4] 9.00-10.00 sec 11.6 MBytes 97.3 Mbits/sec


[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 117 MBytes 97.8 Mbits/sec sender
[ 4] 0.00-10.00 sec 117 MBytes 97.8 Mbits/sec receiver

I think this is a limitation on the client side, not in LEDE.