{
 "id": "https://ryan.freumh.org/ilnp-interface.html",
 "title": "ILNP Interface",
 "link": "https://ryan.freumh.org/ilnp-interface.html",
 "updated": "2022-10-22T00:00:00",
 "published": "2021-10-16T00:00:00",
 "summary": "<div>\n \n <span>Published 16 Oct 2021.</span>\n \n \n <span>Last update 22 Oct 2022.</span>\n \n </div>\n \n <div> Tags: <a href=\"/projects.html\" title=\"All pages tagged 'projects'.\">projects</a>. </div>\n \n \n\n \n<p><span>My <a href=\"./network-layer-mobility.html\">dissertation</a> involved\nimplementing an <a href=\"./network-layer-mobility.html#ilnp\">Identifier-Locator Network\nProtocol</a> (ILNP) <a href=\"./network-layer-mobility.html#overlay-network\">overlay network</a>\nin Python, which can be found at <a href=\"https://github.com/RyanGibb/ilnp-overlay-network\">github.com/RyanGibb/ilnp-overlay-network</a>.</span></p>\n<p><span>As part of this, I wanted to add an application\nlayer interface to the overlay to support existing applications. (To\nthose who still want to know why, I posit: why not?) That is,\napplications other than those written in Python specifically for the\noverlay. This would also allow multiple applications to run over one\noverlay network stack. However, this wasn’t a priority for my\ndissertation as it wasn’t necessary to obtain experimental\nresults.</span></p>\n<p><span>Since graduating I’ve found a few weekends to work\non this, and this blog post explores a solution.</span></p>\n<h2>Datagrams</h2>\n<p><span>First up, how can we send a datagram over\nthis overlay network?</span></p>\n<p><span>We already provide a Python socket\ninterface with the skinny transport protocol (STP), which wraps an ILNP\npacket in a port for demultiplexing, very similar to UDP. But this\nrequires importing <code>transport.py</code> and instantiating a whole\noverlay stack. We could support applications other than Python with some\nsort of inter-process communication (like Unix domain sockets), but this\nwould only solve one of our problems. 
It would allow applications\nwritten in other languages to use our overlay, but it would still require\nwriting applications specifically to use our overlay.</span></p>\n<p><span>Instead, to provide an interface that\nexisting applications can use, we can use a local UDP port as a proxy\ninto our overlay. This will require a program to instantiate the overlay\nstack and proxy data from the UDP port to the overlay. We’ll call this\nprogram <code>proxy.py</code>.</span></p>\n<p><span>However, this local proxy will require\nadding some connection state to a stateless communication protocol. When\n<code>proxy.py</code> receives a packet, how will it know which virtual\nhostname (these are distinct from the underlay hostnames) and STP port\nto send it to? We’ll call this combination of hostname and port the\n‘remote’.</span></p>\n<p><span>We could have a default remote hard-coded,\nbut this would only allow one communication channel. So instead\nwe will have a mapping from local ports to remotes, where the local port\nis the port of the UDP socket connecting to our listening UDP socket. To\nallow these mappings to be dynamic we’ll use out-of-band communication\nand have <code>proxy.py</code> listen on a Unix domain socket\n<code>./sock</code> for new mappings. As we don’t have any restrictions\non the STP ports we’re using in our overlay, we might as well use a\n1-to-1 mapping of UDP ports to STP ports to simplify things.</span></p>\n<p><span>An ILNP-overlay-aware application could\ncreate a mapping itself, but to support existing programs we can\nmanually create one with:</span></p>\n<pre><code>$ python proxy_create.py LOCAL_PORT REMOTE_HOSTNAME REMOTE_PORT</code></pre>\n<p><span>Now receiving is very simple. We just\nspawn a thread for every ILNP STP socket, and when we receive a packet on\nthis socket we forward it with UDP to the corresponding port locally. 
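</span></p>\n<p><span>As a minimal sketch, this mapping table and receive path could look like the following Python. Note this is illustrative only: the <code>stp_socket</code> object, its <code>recv</code> method, and the function names are assumptions, not the repo’s actual API.</span></p>

```python
import threading

# Local UDP port -> (remote hostname, remote STP port).
# Mappings can be added at any time via the Unix domain socket,
# so guard the table with a lock.
mappings = {}
mappings_lock = threading.Lock()

def add_mapping(local_port, remote_host, remote_port):
    with mappings_lock:
        mappings[local_port] = (remote_host, remote_port)

def lookup(local_port):
    with mappings_lock:
        return mappings.get(local_port)

def receive_loop(stp_socket, local_port, udp_socket):
    # One thread per ILNP STP socket: anything received from the
    # overlay is forwarded to the corresponding local UDP port,
    # relying on the 1-to-1 UDP-to-STP port mapping.
    while True:
        data = stp_socket.recv()
        udp_socket.sendto(data, ("127.0.0.1", local_port))
```

<p><span>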
Note\nthat a socket doesn’t necessarily have to send packets to our overlay to\nreceive packets from it, but a mapping does have to exist for its\nport.</span></p>\n<p><span>So our local UDP proxy operating with 3\nmappings would look like:</span></p>\n<p><img src=\"./images/ilnp-proxy.svg\"></p>\n<p><span>Where a, b, and c can be any free\nport.</span></p>\n<p><span>We could have a separate listening port\nfor every connection, which would allow any source port, but this would\nrequire double the number of ports and threads in use, as well as\nrequiring us to keep track of additional mappings between these listening\nports and client ports. Having only one listening UDP socket greatly\nsimplifies the design of the proxy.</span></p>\n<p><span>See <a href=\"https://github.com/RyanGibb/ilnp-overlay-network/blob/master/src\">github.com/RyanGibb/ilnp-overlay-network/blob/master/src</a>\nfor the implementation of <code>proxy.py</code> and\n<code>proxy_create.py</code>.</span></p>\n<h2>Does it work?</h2>\n<p><span>This is all great in theory, but does\nit work in practice?</span></p>\n<p><span>Unfortunately, I don’t have access to\nthe Raspberry Pi testbed that I used for my dissertation’s <a href=\"./network-layer-mobility.html#experiments\">experiments</a> anymore.\nLuckily, at the time of experimenting with this (though not at the time of\nwrite-up), I had access to my current laptop <code>ryan-laptop</code>, an\nold tower PC <code>ryan-pc</code>, and an old HP laptop\n<code>hp-laptop</code> being used as a server, all connected to the same\nnetwork (important for multicast) using IEEE 802.11. 
I have\n<code>ryan-laptop</code> and <code>ryan-pc</code> running Arch Linux,\nand <code>hp-laptop</code> running Ubuntu Server 21.04.</span></p>\n<p><span>The only modifications required were a\nconfiguration change to the <code>mcast_interface</code>, and a one-character <a href=\"https://github.com/RyanGibb/ilnp-overlay-network/commit/43eba661585d0fbd159c0e7e8777f095deb2d592\">fix</a>\n(arguably more of a hack) to get the machine’s IP address on the\n<code>mcast_interface</code>.</span></p>\n<p><span>We’ll leave the overlay network\ntopology as it was in the experiments:</span></p>\n<p><img src=\"./images/network-layer-mobility/diagrams/experiment.svg\"></p>\n<p><span>With <code>ryan-laptop</code> as the\nmobile node (MN), <code>ryan-pc</code> as the corresponding node (CN),\nand <code>hp-laptop</code> as the router. This topology and mobility are\ntransparent to the programs proxied through our overlay, as well as to the\nproxy itself.</span></p>\n<p><span>First, we’ll create the two proxy\nsockets on port 10000 redirecting to our overlay at both endpoints,\n<code>ryan-laptop</code> and <code>ryan-pc</code>:</span></p>\n<pre><code>ryan-laptop $ python proxy.py ../config/config.ini 10000\n\nryan-pc $ python proxy.py ../config/config.ini 10000</code></pre>\n<p><span>Then create the mappings:</span></p>\n<pre><code>ryan-laptop $ python proxy_create.py 10000 ryan-pc 10001\n\nryan-pc $ python proxy_create.py 10000 ryan-laptop 10001</code></pre>\n<p><span>We will also require running the proxy\nwithout any mappings on <code>hp-laptop</code> to instantiate the ILNP\nstack so it can forward packets:</span></p>\n<pre><code>hp-laptop $ python proxy.py</code></pre>\n<p><span>Now on both endpoints we can run\nnetcat to listen for UDP packets from 10000 on port 10001, and they can\ncommunicate!</span></p>\n<pre><code>ryan-laptop $ nc -u 127.0.0.1 10000 -p 10001\nhello,\nworld\n\nryan-pc $ nc -u 127.0.0.1 10000 -p 10001\nhello,\nworld</code></pre>\n<p><span>We could replace 
netcat with any other\napplication interfacing with a UDP socket, as long as we know its source\nport. If we don’t have a predictable source port, we could just proxy it\nthrough netcat to provide one.</span></p>\n<p><span>Through this, we can have\nbidirectional datagram communication over our overlay network using a\nlocal UDP proxy.</span></p>\n<h2>Streams</h2>\n<p><span>Datagrams are great and all, but can we\nhave a reliable ordered bytestream over our overlay?</span></p>\n<p><span>We could follow a similar approach to what\nwe did with datagrams. That is, proxy TCP connections over our overlay.\nBut this would not provide reliability; or rather, it would only\nprovide reliable delivery locally, to our TCP proxy. While I emphasised\nthe lack of loss in our overlay, that was a lack of loss due to\nmobility. It doesn’t prevent loss due to congestion, link layer\nfailures, or cosmic rays…</span></p>\n<p><span>In a similar way to how our skinny\ntransport protocol emulates UDP, we could add a transport layer protocol\nemulating TCP that provides a reliable, ordered bytestream to our\noverlay. But this is a lot of work.</span></p>\n<p><span>UDP is essentially a port wrapped around an\nIP packet for demultiplexing. What if we could treat our unreliable\ndatagram as an IP packet, and run a transport layer protocol providing a\nreliable ordered bytestream on top of it? That would solve both problems:\nproviding reliable delivery without reinventing the\nwheel.</span></p>\n<p><span>QUIC, first implemented by Google in 2012 and defined in\n<a href=\"https://datatracker.ietf.org/doc/html/rfc9000\">RFC9000</a>, is\nthe first that springs to mind. This is a transport layer protocol\nintended to provide performant and secure HTTP connections. To get\naround various protocol ossification problems, including NAT traversal,\nQUIC runs over UDP. 
This works to our benefit: if we could proxy QUIC’s\nUDP packets over our overlay, it would be perfect for our use\ncase.</span></p>\n<p><span>However, QUIC only exists as a <a href=\"https://github.com/quicwg/base-drafts/wiki/Implementations\">number\nof userspace implementations</a>. This has great benefits for\ndevelopment, but means we would be back to a raw userspace socket\ninterface that we couldn’t use existing programs with. We could write\nanother proxy from applications to a QUIC userspace process, but let’s\nsee if we can do better.</span></p>\n<p><span>An older protocol, the Stream Control\nTransmission Protocol (SCTP), defined in <a href=\"https://datatracker.ietf.org/doc/html/rfc4960\">RFC4960</a>, is a\nbetter solution. SCTP is a stream-based transport layer protocol with\nsome benefits over TCP, like multistreaming. It’s worth noting that\nthere are a lot of parallels between what SCTP and ILNP provide, like\nmobility and multihoming, just implemented at different layers of the\nnetwork stack.</span></p>\n<p><span>But what we really care about is defined in\n<a href=\"https://datatracker.ietf.org/doc/html/rfc6951\">RFC6951</a>.\nThis extension to SCTP provides an option to encapsulate SCTP packets in\nUDP packets instead of IP packets. 
The main purpose of this extension is\nto allow SCTP packets to traverse ‘legacy’ NAT - the same reason QUIC\nuses UDP - but it also means we can proxy SCTP encapsulated in UDP over\nour overlay!</span></p>\n<p><span>There is a <a href=\"https://github.com/sctplab/usrsctp\">userspace implementation of\nSCTP</a>, but it only provides a userspace socket interface in C.\nFortunately the Linux kernel has\n<a href=\"https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/diff/net/sctp/?id=v5.11&id2=v5.10\">implemented</a>\nRFC6951 in <a href=\"https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.11\">version\n5.11</a>, released February 2021, and the nmap suite has included\nsupport for SCTP in its ncat utility (a spiritual successor to\nnetcat).</span></p>\n<p><span>Note that only the end hosts require SCTP\nsupport, so the fact that <code>hp-laptop</code> is running Ubuntu using\nan older kernel is not an issue.</span></p>\n<p><span>SCTP UDP encapsulation uses a\n<code>udp_port</code> and <code>encap_port</code>. From the <a href=\"https://www.kernel.org/doc/html/latest/networking/ip-sysctl.html\">sysctl\nkernel documentation</a>:</span></p>\n<pre><code>udp_port - INTEGER\n\nThe listening port for the local UDP tunnelling sock. Normally it’s using the IANA-assigned UDP port number 9899 (sctp-tunneling).\n\nThis UDP sock is used for processing the incoming UDP-encapsulated SCTP packets (from RFC6951), and shared by all applications in the same net namespace.\n\nThis UDP sock will be closed when the value is set to 0.\n\nThe value will also be used to set the src port of the UDP header for the outgoing UDP-encapsulated SCTP packets. For the dest port, please refer to ‘encap_port’ below.</code></pre>\n<p><span>encap_port - INTEGER</span></p>\n<pre><code>The default remote UDP encapsulation port.\n\nThis value is used to set the dest port of the UDP header for the outgoing UDP-encapsulated SCTP packets by default. 
Users can also change the value for each sock/asoc/transport by using setsockopt. For further information, please refer to RFC6951.\n\nNote that when connecting to a remote server, the client should set this to the port that the UDP tunneling sock on the peer server is listening to and the local UDP tunneling sock on the client also must be started. On the server, it would get the encap_port from the incoming packet’s source port.</code></pre>\n<p><span>As we want to intercept the SCTP UDP\npackets for proxying over our overlay, we won’t use the IANA-assigned\n9899 port for these variables. Instead, we’ll use ncat to intercept\noutgoing SCTP UDP packets (sent to <code>encap_port</code>), proxying them\nover our overlay, and to forward received SCTP UDP packets to\n<code>udp_port</code>, where the kernel SCTP implementation will be\nlistening. It’s worth noting that this will likely break any other\napplications using SCTP.</span></p>\n<h2>Putting it all together</h2>\n<p><span>On both\n<code>ryan-laptop</code> and <code>ryan-pc</code> we configure the\nkernel SCTP implementation’s outgoing destination port and listening\nport:</span></p>\n<pre><code># UDP dest port\n$ sudo sysctl -w net.sctp.encap_port=10002\n# UDP listening port\n$ sudo sysctl -w net.sctp.udp_port=10003</code></pre>\n<p><span>To redirect outgoing SCTP\nUDP packets over the overlay we’ll redirect packets destined for port\n10002 to the overlay with source port 10002:</span></p>\n<pre><code>$ ncat -u -l 10002 -c "ncat -u 127.0.0.1 10001 -p 10002" --keep-open</code></pre>\n<p><span>Proxy mappings redirecting\npackets from local port <code>encap_port</code> to remote port\n<code>udp_port</code>:</span></p>\n<pre><code>ryan-pc $ python proxy_create.py 10002 alice 10003\nryan-laptop $ python proxy_create.py 10002 bob 10003</code></pre>\n<p><span>And as control messages\nwill be exchanged between the two SCTP instances, we’ll also require\nredirecting packets from local port <code>udp_port</code> to remote\nport <code>udp_port</code>.</span></p>\n<pre><code>ryan-pc $ python proxy_create.py 10003 alice 10003\nryan-laptop $ python proxy_create.py 10003 bob 10003</code></pre>\n<p><span>Now we can run ncat with\nSCTP :-)</span></p>\n<pre><code>ryan-laptop $ ncat --sctp -l 9999\nhello,\nworld\n\nryan-pc $ ncat --sctp 127.0.0.1 9999\nhello,\nworld</code></pre>\n<p><span>But this <em>still</em>\ndoesn’t allow us to use existing applications using a standard TCP\nsocket over our overlay. For this, we turn to\n<code>ssh</code>.</span></p>\n<p><span>On both endpoints we can\nrun:</span></p>\n<pre><code>$ ncat --sctp -l 9999 -c "ncat 127.0.0.1 22" --keep-open</code></pre>\n<p><span>This will use ncat to forward\nSCTP data to port 22, used for ssh.</span></p>\n<p><span>With an openssh server\nconfigured on the machine we can then use:</span></p>\n<pre><code>$ ssh -o "ProxyCommand ncat --sctp 127.0.0.1 9999" localhost</code></pre>\n<p><span>To connect via ssh over our\noverlay.</span></p>\n<p><span>And if we have ssh… we\nhave anything!</span></p>\n<p><span>That is, we can create a\nSOCKS proxy to send anything over our overlay. For example, we can\ncreate a proxy:</span></p>\n<pre><code>$ ssh -o "ProxyCommand ncat --sctp 127.0.0.1 9999" -N -D 8080 localhost</code></pre>\n<p><span>And then configure your\nweb browser of choice to use this proxy.</span></p>\n<p><span>Alternatively, one could\nalso proxy a raw TCP connection on port <code>PORT</code> over SCTP and\nour overlay with:</span></p>\n<pre><code>$ ncat -l PORT -c "ncat --sctp 127.0.0.1 9999" --keep-open</code></pre>\n<h2>Taking a step back</h2>\n<p><span>Putting all the pieces together,\nthe network stack looks something like:</span></p>\n<p><img src=\"./images/bin.jpg\"></p>\n<p><span>Just kidding. 
But not really.\nAll these proxies and overlays obviously have performance\nimplications.</span></p>\n<p><span>As David Wheeler said, “All\nproblems in computer science can be solved by another level of\nindirection, except for the problem of too many layers of\nindirection.”</span></p>\n<p><span>But hey, it works!</span></p>\n<p><span>Here’s the actual network stack\nof a SOCKS proxy over our overlay:</span></p>\n<p><img src=\"./images/ilnp-network-stack.svg\"></p>\n<p><span>The various proxying and\nmappings are not depicted.</span></p>\n<h2>Further Reading</h2>\n<p><span>Some interesting reads that are\nrelated and tangentially related, respectively, to this\nproject.</span></p>\n<ul>\n<li>On QUIC and SCTP: <a href=\"https://lwn.net/Articles/745590/\">https://lwn.net/Articles/745590/</a>\n</li>\n<li>On NAT traversal: <a href=\"https://tailscale.com/blog/how-nat-traversal-works/\">https://tailscale.com/blog/how-nat-traversal-works/</a></li>\n</ul>",
 "content_type": "html",
 "categories": [],
 "source": "https://ryan.freumh.org/atom.xml"
}