# TCP/TLS Data Plane (TURN over TCP/TLS) in niom-turn
This document explains why a TCP/TLS data plane matters for TURN and how niom-turn currently implements it.

## Why is this needed?
TURN has two kinds of traffic:
- Control plane: STUN/TURN requests/responses (Allocate, CreatePermission, ChannelBind, Refresh, …)
- Data plane: application data between client and peer flowing through the relay (Send/Data Indication or ChannelData)
In many real-world networks, UDP is restricted or fully blocked (corporate Wi‑Fi, mobile APNs, captive portals, proxy environments). WebRTC/ICE typically tries:
- UDP (fast, preferred)
- TURN over TCP
- TURN over TLS (`turns:`) as the last option, which often still works when everything else is blocked

For TURN over TCP/TLS to be usable, not only must the control plane run over the stream; the return path peer → client (data plane) must also arrive over the same TCP/TLS connection.

## What does niom-turn implement?
- Client ↔ server transport can be UDP, TCP, or TLS.
- The relay to the peer remains UDP (classic TURN UDP relay).
- For TCP/TLS, the server delivers the data-plane return path back to the client over the stream.
This matches the common WebRTC fallback: “client↔server over TCP/TLS, peer transport over UDP”.

## Architecture in code
- Stream handler (TCP/TLS): `src/turn_stream.rs`
- TCP listener: `src/tcp.rs`
- TLS listener: `src/tls.rs`
- Allocation + relay: `src/alloc.rs`

## Key idea: `ClientSink`
So that the relay loop can send peer packets back over different client transports, `src/alloc.rs` uses an abstraction:
- `ClientSink::Udp { sock, addr }` → sends peer data via `udp.send_to(..., addr)`
- `ClientSink::Stream { tx }` → queues bytes into a writer task that writes onto the TCP/TLS stream

When a client allocates over TCP/TLS, the allocation is created with a `ClientSink::Stream`.
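
To make the shape of this abstraction concrete, here is a minimal sketch of such a sink; the field names and the use of a Tokio mpsc channel are illustrative assumptions, not the exact definition in `src/alloc.rs`:

```rust
use std::net::SocketAddr;
use std::sync::Arc;
use tokio::net::UdpSocket;
use tokio::sync::mpsc;

/// Destination for peer -> client data, chosen when the allocation is created.
/// (Illustrative sketch; the real type in src/alloc.rs may differ.)
enum ClientSink {
    /// Client reached us over UDP: reply with a datagram to its source address.
    Udp { sock: Arc<UdpSocket>, addr: SocketAddr },
    /// Client reached us over TCP/TLS: queue bytes for the connection's writer task.
    Stream { tx: mpsc::Sender<Vec<u8>> },
}
```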

## Framing: STUN vs. ChannelData on a byte stream
On UDP you receive datagrams; on TCP/TLS you receive a continuous byte stream. TURN over TCP/TLS multiplexes:
- STUN/TURN messages (control plane)
- ChannelData frames (data plane, client → server)
niom-turn therefore parses the stream in a loop as “next frame” (see `try_pop_next_frame(...)` in `src/turn_stream.rs`):

### STUN message

- Header is 20 bytes
- The Length field is the body length
- Total length: `20 + length`

### ChannelData frame

- 4-byte header: `CHANNEL-NUMBER (2) + LENGTH (2)`
- Channel numbers are in the range `0x4000..=0x7FFF` (top bits `01`)
- Total length: `4 + length`

Important: for TCP/TLS, no padding may remain as “extra bytes” in the stream. Therefore niom-turn builds ChannelData as exactly `4 + len` bytes (see `src/stun.rs`).
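
As an illustration of that invariant, a ChannelData encoder that emits exactly `4 + len` bytes could look like the following sketch (the function name is hypothetical; the real encoding lives in `src/stun.rs`):

```rust
/// Build a ChannelData message as exactly 4 + data.len() bytes.
/// (Sketch only; see src/stun.rs for the actual implementation.)
fn encode_channel_data(channel: u16, data: &[u8]) -> Vec<u8> {
    debug_assert!((0x4000..=0x7FFF).contains(&channel));
    let mut out = Vec::with_capacity(4 + data.len());
    out.extend_from_slice(&channel.to_be_bytes());              // CHANNEL-NUMBER
    out.extend_from_slice(&(data.len() as u16).to_be_bytes());  // LENGTH (payload only)
    out.extend_from_slice(data);                                // payload, no trailing padding
    out
}
```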

## Hardening: resync & limits
Because TCP/TLS is a byte stream, broken or malicious clients can otherwise easily “desynchronise” the parser.
niom-turn therefore implements the following in the stream parser:
- Magic cookie check for STUN: invalid cookies trigger byte-wise resync (instead of waiting for huge lengths).
- Frame size limits (STUN body and ChannelData) to limit memory/DoS risk.
- Max buffer limit per connection: if the input buffer grows too large, the connection is closed.
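
A simplified sketch of how such a hardened frame parser can be structured is shown below; the limit constants, buffer type, and return type are illustrative assumptions, and the real `try_pop_next_frame(...)` in `src/turn_stream.rs` handles more cases:

```rust
const STUN_MAGIC_COOKIE: u32 = 0x2112_A442;
// Illustrative limits; the actual values live in src/turn_stream.rs.
const MAX_STUN_BODY: usize = 8 * 1024;
const MAX_CHANNEL_DATA: usize = 64 * 1024;

enum Frame {
    Stun(Vec<u8>),
    ChannelData(Vec<u8>),
}

/// Try to pop one complete frame off the front of `buf`.
/// Ok(None) means "need more bytes"; Err(()) means "drop the connection".
/// (The per-connection input buffer cap would be enforced by the caller.)
fn try_pop_next_frame(buf: &mut Vec<u8>) -> Result<Option<Frame>, ()> {
    loop {
        if buf.len() < 4 {
            return Ok(None);
        }
        let first = u16::from_be_bytes([buf[0], buf[1]]);
        let length = u16::from_be_bytes([buf[2], buf[3]]) as usize;

        // ChannelData: channel numbers 0x4000..=0x7FFF (top bits 01).
        if (0x4000..=0x7FFF).contains(&first) {
            if length > MAX_CHANNEL_DATA {
                return Err(());
            }
            let total = 4 + length;
            if buf.len() < total {
                return Ok(None);
            }
            let frame: Vec<u8> = buf.drain(..total).collect();
            return Ok(Some(Frame::ChannelData(frame)));
        }

        // STUN: top two bits are 00 and bytes 4..8 carry the magic cookie.
        if (first & 0xC000) == 0 {
            if buf.len() < 8 {
                return Ok(None);
            }
            let cookie = u32::from_be_bytes([buf[4], buf[5], buf[6], buf[7]]);
            if cookie == STUN_MAGIC_COOKIE {
                if length > MAX_STUN_BODY {
                    return Err(());
                }
                let total = 20 + length;
                if buf.len() < total {
                    return Ok(None);
                }
                let frame: Vec<u8> = buf.drain(..total).collect();
                return Ok(Some(Frame::Stun(frame)));
            }
        }

        // Neither a valid ChannelData nor a STUN header: resync byte-wise.
        // (A real implementation would use a ring buffer instead of Vec::remove.)
        buf.remove(0);
    }
}
```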

## Data flow (TCP/TLS)
- Client connects over TCP or TLS.
- The stream handler reads frames:
  - STUN/TURN requests → processed like the UDP path (auth, allocation, permission, channel bind, send, refresh)
  - ChannelData (client→peer) → forwarded to the peer via the UDP relay
- Peer sends UDP to the relay address.
- The relay loop forwards bytes to the `ClientSink`:
  - for streams: `tx.send(bytes)` → a writer task writes a Data Indication or ChannelData back onto the same stream
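
For the stream case, the writer task mentioned above could be sketched like this (assuming the mpsc-based `ClientSink::Stream` from the earlier sketch; niom-turn's actual task layout may differ):

```rust
use tokio::io::{AsyncWrite, AsyncWriteExt};
use tokio::sync::mpsc;

/// Writer task for one TCP/TLS client connection: everything queued on `rx`
/// (Data Indications or ChannelData built by the relay loop) is written back
/// onto the same stream the client used for its requests.
async fn client_stream_writer<W>(mut rx: mpsc::Receiver<Vec<u8>>, mut stream: W)
where
    W: AsyncWrite + Unpin,
{
    while let Some(bytes) = rx.recv().await {
        if stream.write_all(&bytes).await.is_err() {
            // Client went away; dropping rx lets senders notice the closed sink.
            break;
        }
    }
}
```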

## Limitations / not implemented
- No TCP relay to the peer (TURN TCP allocations / CONNECT methods as in RFC 6062).
- The focus is client↔server transport over TCP/TLS + UDP relay.
- Full IPv6 operational coverage is not the focus of the MVP.

## Tests
- TCP stream data plane: `tests/tcp_turn.rs`
- TLS stream data plane: `tests/tls_data_plane.rs`
- Shared framing (STUN + ChannelData): `tests/support/stream.rs`