kcp-go is a Reliable-UDP library for golang.
This library provides smooth, resilient, ordered, error-checked, and anonymous stream delivery over UDP packets. It has been battle-tested in the open-source project kcptun: millions of devices, from low-end MIPS routers to high-end servers, run kcp-go-powered programs for online games, live broadcasting, file synchronization, and network acceleration.
- Designed for latency-sensitive scenarios.
- Cache-friendly and memory-optimized design, offering an extremely high-performance core.
- Handles >5K concurrent connections on a single commodity server.
- Compatible with net.Conn and net.Listener, serving as a drop-in replacement for net.TCPConn.
- FEC (Forward Error Correction) support using Reed-Solomon Codes.
- Packet-level encryption support for AES, TEA, 3DES, Blowfish, Cast5, Salsa20, etc., in CFB mode, generating completely anonymous packets.
- AEAD packet encryption support.
- Only a fixed number of goroutines are created for the entire server application, with context switching costs between goroutines taken into consideration.
- Compatible with skywind3000's C version, with various improvements.
- Platform-specific optimizations: sendmmsg and recvmmsg for Linux.
For complete documentation, see the associated Godoc.
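Because a session satisfies net.Conn and a listener satisfies net.Listener, getting started takes only a few lines. A minimal sketch using DialWithOptions/ListenWithOptions with AES-128 and 10/3 FEC shards; error handling and key derivation are kept deliberately simple here:

```go
package main

import (
	"log"

	kcp "github.com/xtaci/kcp-go/v5"
)

func main() {
	// 16-byte key selects AES-128; derive it from a passphrase (e.g. PBKDF2) in real code.
	key := []byte("0123456789abcdef")
	block, err := kcp.NewAESBlockCrypt(key)
	if err != nil {
		log.Fatal(err)
	}

	// Server side: accepts KCP sessions much like a net.Listener.
	listener, err := kcp.ListenWithOptions(":12345", block, 10, 3) // 10 data / 3 parity FEC shards
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		for {
			sess, err := listener.AcceptKCP()
			if err != nil {
				return
			}
			go func() { // echo everything back
				buf := make([]byte, 4096)
				for {
					n, err := sess.Read(buf)
					if err != nil {
						return
					}
					sess.Write(buf[:n])
				}
			}()
		}
	}()

	// Client side: a *UDPSession is a net.Conn.
	sess, err := kcp.DialWithOptions("127.0.0.1:12345", block, 10, 3)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := sess.Write([]byte("ping")); err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 4)
	if _, err := sess.Read(buf); err != nil {
		log.Fatal(err)
	}
	log.Printf("echo: %s", buf)
}
```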
kcp.flush() loops through the send queue for retransmission checking every 20 ms.
I wrote a benchmark comparing sequential loops through a slice and a container/list here:
BenchmarkLoopSlice-4 2000000000 0.39 ns/op
BenchmarkLoopList-4 100000000 54.6 ns/op
The list structure introduces heavy cache misses compared to the slice, which offers better locality. For 5,000 connections with a 32-window size and a 20 ms interval, using a slice costs 6 μs (0.03% CPU) per kcp.flush(), whereas using a list costs 8.7 ms (43.5% CPU).
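The comparison can be reproduced with a micro-benchmark along these lines (an illustrative sketch; the exact window size and loop body of the original benchmark may differ):

```go
package bench

import (
	"container/list"
	"testing"
)

const windowSize = 32

// BenchmarkLoopSlice walks a fixed-size slice, the contiguous layout kcp-go uses for its queues.
func BenchmarkLoopSlice(b *testing.B) {
	snd := make([]int, windowSize)
	var sum int
	for i := 0; i < b.N; i++ {
		for j := range snd {
			sum += snd[j]
		}
	}
	_ = sum
}

// BenchmarkLoopList walks a container/list of the same length; every element is a
// separate heap allocation, so traversal hops between cache lines.
func BenchmarkLoopList(b *testing.B) {
	l := list.New()
	for i := 0; i < windowSize; i++ {
		l.PushBack(i)
	}
	var sum int
	for i := 0; i < b.N; i++ {
		for e := l.Front(); e != nil; e = e.Next() {
			sum += e.Value.(int)
		}
	}
	_ = sum
}
```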
Timing is critical for the RTT estimator: inaccurate timing leads to spurious retransmissions in KCP, yet calling time.Now() is not free; it costs around 42 cycles (10.5 ns on a 4 GHz CPU, 15.6 ns on my 2.7 GHz MacBook Pro).
The benchmark for time.Now() is here:
BenchmarkNow-4 100000000 15.6 ns/op
In kcp-go, after each kcp.output() function call, the current clock time is updated upon return. For a single kcp.flush() operation, the current time is queried from the system once. For 5,000 connections, this costs 5000 × 15.6 ns = 78 μs (a fixed cost when no packets need to be sent). For 10 MB/s data transfer with a 1400 MTU, kcp.output() is called approximately 7,500 times, costing 117 μs for time.Now() per second.
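The mitigation amounts to caching a coarse millisecond timestamp and refreshing it only at well-defined points, such as when kcp.output() returns. A sketch of the pattern (illustrative names, not kcp-go's internal code):

```go
package clockcache

import (
	"sync/atomic"
	"time"
)

// refTime anchors the monotonic clock so timestamps fit into a uint32 of milliseconds.
var refTime = time.Now()

// cachedMs holds the last sampled timestamp; readers pay an atomic load instead of time.Now().
var cachedMs uint32

// refreshMs queries the system clock once and publishes the result.
// Call it at coarse points, e.g. after output() returns or once per flush.
func refreshMs() uint32 {
	now := uint32(time.Since(refTime) / time.Millisecond)
	atomic.StoreUint32(&cachedMs, now)
	return now
}

// currentMs returns the cached timestamp; it may lag by up to one refresh interval,
// which is acceptable for a millisecond-granularity RTT estimator.
func currentMs() uint32 {
	return atomic.LoadUint32(&cachedMs)
}
```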
Primary memory allocation is performed from a global buffer pool, xmit.Buf. In kcp-go, when bytes need to be allocated, they are obtained from this pool, which returns fixed-capacity 1500-byte slices (mtuLimit). The rx queue, tx queue, and FEC queue all take their bytes from this pool and return them after use, preventing unnecessary zeroing of bytes. The pool mechanism maintains a high watermark for slice objects, allowing in-flight objects to survive periodic garbage collection while still returning memory to the runtime when idle.
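The pool described above maps naturally onto a sync.Pool of fixed-capacity slices. A simplified sketch of the idea (kcp-go's actual pool may differ in details):

```go
package bufpool

import "sync"

const mtuLimit = 1500

// xmitBuf hands out fixed-capacity packet buffers so hot paths never hit the allocator.
var xmitBuf = sync.Pool{
	New: func() interface{} {
		return make([]byte, mtuLimit)
	},
}

// getBuffer borrows a 1500-byte slice from the pool.
func getBuffer() []byte {
	return xmitBuf.Get().([]byte)[:mtuLimit]
}

// putBuffer returns a slice once the rx/tx/FEC queues are done with it.
// The bytes are not zeroed; the next user simply overwrites them.
func putBuffer(buf []byte) {
	if cap(buf) >= mtuLimit {
		xmitBuf.Put(buf[:mtuLimit])
	}
}
```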
kcp-go ships with built-in packet encryption powered by various block encryption algorithms and operates in Cipher Feedback Mode. For each packet to be sent, the encryption process begins by encrypting a nonce from the system entropy, ensuring that encryption of the same plaintext never produces the same ciphertext.
The contents of packets are completely anonymous with encryption, including the headers (FEC, KCP), checksums, and payload. Note that regardless of which encryption method the upper layer uses, if you disable encryption here, the transmission will be insecure: the header travels as plaintext and is susceptible to tampering, such as jamming the sliding window size, round-trip time, FEC properties, and checksums. AES-128 is recommended as the minimum level of encryption; modern CPUs ship with AES-NI instructions, so AES performs even better than Salsa20 (see the benchmark tables below).
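Conceptually, the per-packet protection looks like the following sketch, built on the standard library only; kcp-go's own BlockCrypt implementations differ in detail, but the nonce-then-encrypt idea is the same:

```go
package cfbdemo

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
)

const nonceSize = 16

// seal prefixes a fresh random nonce and encrypts nonce+payload in CFB mode,
// so identical plaintexts never map to identical ciphertexts.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // a 16-byte key selects AES-128
	if err != nil {
		return nil, err
	}
	out := make([]byte, nonceSize+len(plaintext))
	if _, err := rand.Read(out[:nonceSize]); err != nil {
		return nil, err
	}
	copy(out[nonceSize:], plaintext)

	// A zero IV is tolerable here only because the random nonce leads the stream;
	// this is the idea, not kcp-go's exact keystream construction.
	iv := make([]byte, block.BlockSize())
	cipher.NewCFBEncrypter(block, iv).XORKeyStream(out, out)
	return out, nil
}
```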
Other possible attacks on kcp-go include:
- Traffic analysis: Data flows to specific websites may exhibit recognizable patterns during data exchange. This kind of eavesdropping is partially mitigated by adopting smux to mix data streams and introduce noise; a perfect solution has not yet emerged, but in theory, shuffling/mixing messages across a larger-scale network could further reduce exposure.
- Replay attack: Since asymmetric encryption has not been introduced into kcp-go, capturing packets and replaying them on a different machine is possible. Note that hijacking the session and decrypting the contents is still impossible. Upper layers should use an asymmetric encryption system to guarantee the authenticity of each message (to process each message exactly once), such as HTTPS/OpenSSL/LibreSSL. Signing requests with private keys can eliminate this type of attack.
- Immediate FastACK: Send immediately after `fastack` is triggered, without waiting for the fixed `interval`.
- Immediate ACK: Send immediately after accumulating ACKs that fill a full MTU packet, also without waiting for the `interval`. In high-speed networks, this acts as a higher-frequency "clock signal," potentially boosting unidirectional transmission speed by approximately 6x. For instance, if a batch takes only 1.5 ms to process on a high-speed link but still adheres to a fixed 10 ms transmission cycle, the actual throughput would be limited to 1/6 of the potential.
- Pacing Mechanism: Introduced a pacing clock to prevent burst congestion where data piles up in the kernel when `snd_wnd` is large, causing the kernel to drop packets. While difficult to implement in user space, a usable version has been achieved, allowing a user-space echo to stabilize above 100 MB/s (a minimal pacing sketch follows this list).
- Data Structure Optimization: Optimized data structures (e.g., the `snd_buf` ring buffer) to ensure good cache coherency. Queues must not be too long; otherwise, traversal costs introduce extra latency. In high-speed networks, the buffer corresponding to the BDP should be kept smaller to minimize latency from data structures. Note that the current KCP structure has O(n) complexity for RTO; changing it to O(1) would require significant refactoring.
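A user-space pacer boils down to spreading writes according to a target byte rate instead of bursting a whole window at once. A minimal sketch of the idea (illustrative names, not the mechanism shipped in kcp-go):

```go
package pacing

import (
	"net"
	"time"
)

// pacedWriter throttles outgoing packets to rate bytes/second so a large snd_wnd
// does not burst into the kernel socket buffer and get dropped there.
type pacedWriter struct {
	conn net.PacketConn
	rate float64   // target bytes per second
	next time.Time // earliest time the next packet may leave
}

func (p *pacedWriter) write(pkt []byte, addr net.Addr) error {
	now := time.Now()
	if p.next.After(now) {
		time.Sleep(p.next.Sub(now)) // wait for our slot on the pacing clock
	}
	_, err := p.conn.WriteTo(pkt, addr)

	// Budget the time this packet "occupies" on the wire at the target rate.
	gap := time.Duration(float64(len(pkt)) / p.rate * float64(time.Second))
	if p.next.Before(now) {
		p.next = now
	}
	p.next = p.next.Add(gap)
	return err
}
```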
Ultimately, nothing is more critical in a transmission system than the clock (real-time performance).
- Reed-Solomon based encoder/decoder lives in the `postProcess`/`packetInput` path, so parity shards are generated and consumed without extra goroutines or lock contention (see the sketch after this list).
- Data/parity ratios are configurable per session, letting operators trade ~20–30% bandwidth overhead for lower tail latency on lossy or long-haul links.
- Parity shards are produced from buffer-pool-backed slices, which avoids repeated allocations and keeps GC pressure flat even during multi-Gbps transfers.
- Decoding favors single-pass recovery: as soon as enough shards arrive, the original packets are reconstructed and pushed into `KCP.Input`, minimizing reordering and retransmission storms.
- When combined with encryption, FEC headers stay protected, preventing traffic shapers from inferring recovery patterns or downgrading throughput.
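The encode/recover path builds on klauspost/reedsolomon (listed in the links at the end). A standalone sketch with the 10/3 layout used in the benchmarks below; shard sizing and payloads are simplified:

```go
package main

import (
	"bytes"
	"log"

	"github.com/klauspost/reedsolomon"
)

func main() {
	const dataShards, parityShards = 10, 3
	enc, err := reedsolomon.New(dataShards, parityShards)
	if err != nil {
		log.Fatal(err)
	}

	// 13 shards total: 10 data + 3 parity, every shard the same size.
	const shardSize = 1024
	shards := make([][]byte, dataShards+parityShards)
	for i := range shards {
		shards[i] = make([]byte, shardSize)
		if i < dataShards {
			shards[i][0] = byte(i) // stand-in payload
		}
	}

	// Produce parity shards in place.
	if err := enc.Encode(shards); err != nil {
		log.Fatal(err)
	}
	original := append([]byte(nil), shards[2]...)

	// Simulate losing up to parityShards packets on the wire.
	shards[2], shards[7], shards[11] = nil, nil, nil

	// Rebuild the missing shards from the survivors.
	if err := enc.Reconstruct(shards); err != nil {
		log.Fatal(err)
	}
	log.Println("recovered:", bytes.Equal(shards[2], original))
}
```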
NONCE:
16 bytes of cryptographically secure random data; the nonce changes for every packet.
CRC32:
CRC-32 checksum of the data using the IEEE polynomial.
FEC TYPE:
typeData = 0xF1
typeParity = 0xF2
FEC SEQID:
Monotonically increasing in the range [0, (0xffffffff/shardSize) * shardSize - 1].
SIZE:
The size of the KCP frame plus 2.
KCP Header
+------------------------------+
| conv (u32) |
+-------+-------+--------------+
| cmd | frag | wnd |
| u8 | u8 | u16 |
+------------------------------+
| ts (u32) |
+------------------------------+
| sn (u32) |
+------------------------------+
| una (u32) |
+------------------------------+
| data (bytes) |
+------------------------------+
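Decoding the fields shown above is straightforward; a sketch assuming little-endian field encoding, matching kcp-go's on-wire layout (the struct is illustrative, not the library's own type):

```go
package kcphdr

import "encoding/binary"

// segmentHeader mirrors the fields in the diagram above.
type segmentHeader struct {
	conv uint32
	cmd  uint8
	frag uint8
	wnd  uint16
	ts   uint32
	sn   uint32
	una  uint32
}

// decodeHeader reads the fixed-size prefix shown in the diagram.
func decodeHeader(p []byte) (h segmentHeader, ok bool) {
	if len(p) < 20 {
		return h, false
	}
	h.conv = binary.LittleEndian.Uint32(p[0:4])
	h.cmd = p[4]
	h.frag = p[5]
	h.wnd = binary.LittleEndian.Uint16(p[6:8])
	h.ts = binary.LittleEndian.Uint32(p[8:12])
	h.sn = binary.LittleEndian.Uint32(p[12:16])
	h.una = binary.LittleEndian.Uint32(p[16:20])
	return h, true
}
```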
2025/11/26 11:12:51 beginning tests, encryption:salsa20, fec:10/3
goos: linux
goarch: amd64
pkg: github.com/xtaci/kcp-go/v5
cpu: AMD Ryzen 9 5950X 16-Core Processor
BenchmarkSM4
BenchmarkSM4-32 56077 21672 ns/op 138.43 MB/s 0 B/op 0 allocs/op
BenchmarkAES128
BenchmarkAES128-32 525854 2228 ns/op 1346.69 MB/s 0 B/op 0 allocs/op
BenchmarkAES192
BenchmarkAES192-32 473692 2429 ns/op 1234.95 MB/s 0 B/op 0 allocs/op
BenchmarkAES256
BenchmarkAES256-32 427497 2725 ns/op 1101.06 MB/s 0 B/op 0 allocs/op
BenchmarkTEA
BenchmarkTEA-32 149976 8085 ns/op 371.06 MB/s 0 B/op 0 allocs/op
BenchmarkXOR
BenchmarkXOR-32 12333190 92.35 ns/op 32485.16 MB/s 0 B/op 0 allocs/op
BenchmarkBlowfish
BenchmarkBlowfish-32 70762 16983 ns/op 176.65 MB/s 0 B/op 0 allocs/op
BenchmarkNone
BenchmarkNone-32 47325206 24.49 ns/op 122482.39 MB/s 0 B/op 0 allocs/op
BenchmarkCast5
BenchmarkCast5-32 66837 18035 ns/op 166.35 MB/s 0 B/op 0 allocs/op
Benchmark3DES
Benchmark3DES-32 18402 64349 ns/op 46.62 MB/s 0 B/op 0 allocs/op
BenchmarkTwofish
BenchmarkTwofish-32 56440 21380 ns/op 140.32 MB/s 0 B/op 0 allocs/op
BenchmarkXTEA
BenchmarkXTEA-32 45616 26124 ns/op 114.84 MB/s 0 B/op 0 allocs/op
BenchmarkSalsa20
BenchmarkSalsa20-32 525685 2199 ns/op 1363.97 MB/s 0 B/op 0 allocs/op
BenchmarkCRC32
BenchmarkCRC32-32 19418395 59.05 ns/op 17341.83 MB/s
BenchmarkCsprngSystem
BenchmarkCsprngSystem-32 2912889 404.3 ns/op 39.58 MB/s
BenchmarkCsprngMD5
BenchmarkCsprngMD5-32 15063580 79.23 ns/op 201.95 MB/s
BenchmarkCsprngSHA1
BenchmarkCsprngSHA1-32 20186407 60.04 ns/op 333.08 MB/s
BenchmarkCsprngNonceMD5
BenchmarkCsprngNonceMD5-32 13863704 85.11 ns/op 187.98 MB/s
BenchmarkCsprngNonceAES128
BenchmarkCsprngNonceAES128-32 97239751 12.56 ns/op 1274.09 MB/s
BenchmarkFECDecode
BenchmarkFECDecode-32 1808791 679.1 ns/op 2208.94 MB/s 1641 B/op 3 allocs/op
BenchmarkFECEncode
BenchmarkFECEncode-32 6671982 181.4 ns/op 8270.76 MB/s 2 B/op 0 allocs/op
BenchmarkFlush
BenchmarkFlush-32 322982 3809 ns/op 0 B/op 0 allocs/op
BenchmarkDebugLog
BenchmarkDebugLog-32 1000000000 0.2146 ns/op
BenchmarkEchoSpeed4K
BenchmarkEchoSpeed4K-32 35583 32875 ns/op 124.59 MB/s 18223 B/op 148 allocs/op
BenchmarkEchoSpeed64K
BenchmarkEchoSpeed64K-32 1995 510301 ns/op 128.43 MB/s 284233 B/op 2297 allocs/op
BenchmarkEchoSpeed512K
BenchmarkEchoSpeed512K-32 259 4058131 ns/op 129.19 MB/s 2243058 B/op 18148 allocs/op
BenchmarkEchoSpeed1M
BenchmarkEchoSpeed1M-32 145 8561996 ns/op 122.47 MB/s 4464227 B/op 36009 allocs/op
BenchmarkSinkSpeed4K
BenchmarkSinkSpeed4K-32 194648 42136 ns/op 97.21 MB/s 2073 B/op 50 allocs/op
BenchmarkSinkSpeed64K
BenchmarkSinkSpeed64K-32 10000 113038 ns/op 579.77 MB/s 29242 B/op 741 allocs/op
BenchmarkSinkSpeed256K
BenchmarkSinkSpeed256K-32 1555 843724 ns/op 621.40 MB/s 229558 B/op 5850 allocs/op
BenchmarkSinkSpeed1M
BenchmarkSinkSpeed1M-32 667 1783214 ns/op 588.03 MB/s 462691 B/op 11694 allocs/op
PASS
ok github.com/xtaci/kcp-go/v5 49.978s
===
Model Name: MacBook Pro
Model Identifier: MacBookPro14,1
Processor Name: Intel Core i5
Processor Speed: 3.1 GHz
Number of Processors: 1
Total Number of Cores: 2
L2 Cache (per Core): 256 KB
L3 Cache: 4 MB
Memory: 8 GB
===
$ go test -v -run=^$ -bench .
beginning tests, encryption:salsa20, fec:10/3
goos: darwin
goarch: amd64
pkg: github.com/xtaci/kcp-go
BenchmarkSM4-4 50000 32180 ns/op 93.23 MB/s 0 B/op 0 allocs/op
BenchmarkAES128-4 500000 3285 ns/op 913.21 MB/s 0 B/op 0 allocs/op
BenchmarkAES192-4 300000 3623 ns/op 827.85 MB/s 0 B/op 0 allocs/op
BenchmarkAES256-4 300000 3874 ns/op 774.20 MB/s 0 B/op 0 allocs/op
BenchmarkTEA-4 100000 15384 ns/op 195.00 MB/s 0 B/op 0 allocs/op
BenchmarkXOR-4 20000000 89.9 ns/op 33372.00 MB/s 0 B/op 0 allocs/op
BenchmarkBlowfish-4 50000 26927 ns/op 111.41 MB/s 0 B/op 0 allocs/op
BenchmarkNone-4 30000000 45.7 ns/op 65597.94 MB/s 0 B/op 0 allocs/op
BenchmarkCast5-4 50000 34258 ns/op 87.57 MB/s 0 B/op 0 allocs/op
Benchmark3DES-4 10000 117149 ns/op 25.61 MB/s 0 B/op 0 allocs/op
BenchmarkTwofish-4 50000 33538 ns/op 89.45 MB/s 0 B/op 0 allocs/op
BenchmarkXTEA-4 30000 45666 ns/op 65.69 MB/s 0 B/op 0 allocs/op
BenchmarkSalsa20-4 500000 3308 ns/op 906.76 MB/s 0 B/op 0 allocs/op
BenchmarkCRC32-4 20000000 65.2 ns/op 15712.43 MB/s
BenchmarkCsprngSystem-4 1000000 1150 ns/op 13.91 MB/s
BenchmarkCsprngMD5-4 10000000 145 ns/op 110.26 MB/s
BenchmarkCsprngSHA1-4 10000000 158 ns/op 126.54 MB/s
BenchmarkCsprngNonceMD5-4 10000000 153 ns/op 104.22 MB/s
BenchmarkCsprngNonceAES128-4 100000000 19.1 ns/op 837.81 MB/s
BenchmarkFECDecode-4 1000000 1119 ns/op 1339.61 MB/s 1606 B/op 2 allocs/op
BenchmarkFECEncode-4 2000000 832 ns/op 1801.83 MB/s 17 B/op 0 allocs/op
BenchmarkFlush-4 5000000 272 ns/op 0 B/op 0 allocs/op
BenchmarkEchoSpeed4K-4 5000 259617 ns/op 15.78 MB/s 5451 B/op 149 allocs/op
BenchmarkEchoSpeed64K-4 1000 1706084 ns/op 38.41 MB/s 56002 B/op 1604 allocs/op
BenchmarkEchoSpeed512K-4 100 14345505 ns/op 36.55 MB/s 482597 B/op 13045 allocs/op
BenchmarkEchoSpeed1M-4 30 34859104 ns/op 30.08 MB/s 1143773 B/op 27186 allocs/op
BenchmarkSinkSpeed4K-4 50000 31369 ns/op 130.57 MB/s 1566 B/op 30 allocs/op
BenchmarkSinkSpeed64K-4 5000 329065 ns/op 199.16 MB/s 21529 B/op 453 allocs/op
BenchmarkSinkSpeed256K-4 500 2373354 ns/op 220.91 MB/s 166332 B/op 3554 allocs/op
BenchmarkSinkSpeed1M-4 300 5117927 ns/op 204.88 MB/s 310378 B/op 6988 allocs/op
PASS
ok github.com/xtaci/kcp-go 50.349s
=== Raspberry Pi 4 ===
➜ kcp-go git:(master) cat /proc/cpuinfo
processor : 0
model name : ARMv7 Processor rev 3 (v7l)
BogoMIPS : 108.00
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd08
CPU revision : 3
➜ kcp-go git:(master) go test -run=^$ -bench .
2020/01/05 19:25:13 beginning tests, encryption:salsa20, fec:10/3
goos: linux
goarch: arm
pkg: github.com/xtaci/kcp-go/v5
BenchmarkSM4-4 20000 86475 ns/op 34.69 MB/s 0 B/op 0 allocs/op
BenchmarkAES128-4 20000 62254 ns/op 48.19 MB/s 0 B/op 0 allocs/op
BenchmarkAES192-4 20000 71802 ns/op 41.78 MB/s 0 B/op 0 allocs/op
BenchmarkAES256-4 20000 80570 ns/op 37.23 MB/s 0 B/op 0 allocs/op
BenchmarkTEA-4 50000 37343 ns/op 80.34 MB/s 0 B/op 0 allocs/op
BenchmarkXOR-4 100000 22266 ns/op 134.73 MB/s 0 B/op 0 allocs/op
BenchmarkBlowfish-4 20000 66123 ns/op 45.37 MB/s 0 B/op 0 allocs/op
BenchmarkNone-4 3000000 518 ns/op 5786.77 MB/s 0 B/op 0 allocs/op
BenchmarkCast5-4 20000 76705 ns/op 39.11 MB/s 0 B/op 0 allocs/op
Benchmark3DES-4 5000 418868 ns/op 7.16 MB/s 0 B/op 0 allocs/op
BenchmarkTwofish-4 5000 326896 ns/op 9.18 MB/s 0 B/op 0 allocs/op
BenchmarkXTEA-4 10000 114418 ns/op 26.22 MB/s 0 B/op 0 allocs/op
BenchmarkSalsa20-4 50000 36736 ns/op 81.66 MB/s 0 B/op 0 allocs/op
BenchmarkCRC32-4 1000000 1735 ns/op 589.98 MB/s
BenchmarkCsprngSystem-4 1000000 2179 ns/op 7.34 MB/s
BenchmarkCsprngMD5-4 2000000 811 ns/op 19.71 MB/s
BenchmarkCsprngSHA1-4 2000000 862 ns/op 23.19 MB/s
BenchmarkCsprngNonceMD5-4 2000000 878 ns/op 18.22 MB/s
BenchmarkCsprngNonceAES128-4 5000000 326 ns/op 48.97 MB/s
BenchmarkFECDecode-4 200000 9081 ns/op 165.16 MB/s 140 B/op 1 allocs/op
BenchmarkFECEncode-4 100000 12039 ns/op 124.59 MB/s 11 B/op 0 allocs/op
BenchmarkFlush-4 100000 21704 ns/op 0 B/op 0 allocs/op
BenchmarkEchoSpeed4K-4 2000 981182 ns/op 4.17 MB/s 12384 B/op 424 allocs/op
BenchmarkEchoSpeed64K-4 100 10503324 ns/op 6.24 MB/s 123616 B/op 3779 allocs/op
BenchmarkEchoSpeed512K-4 20 138633802 ns/op 3.78 MB/s 1606584 B/op 29233 allocs/op
BenchmarkEchoSpeed1M-4 5 372903568 ns/op 2.81 MB/s 4080504 B/op 63600 allocs/op
BenchmarkSinkSpeed4K-4 10000 121239 ns/op 33.78 MB/s 4647 B/op 104 allocs/op
BenchmarkSinkSpeed64K-4 1000 1587906 ns/op 41.27 MB/s 50914 B/op 1115 allocs/op
BenchmarkSinkSpeed256K-4 100 16277830 ns/op 32.21 MB/s 453027 B/op 9296 allocs/op
BenchmarkSinkSpeed1M-4 100 31040703 ns/op 33.78 MB/s 898097 B/op 18932 allocs/op
PASS
ok github.com/xtaci/kcp-go/v5 64.151s
Control messages like SYN/FIN/RST in TCP are not defined in KCP. You need a keepalive/heartbeat mechanism at the application level. A practical example is to use a multiplexing protocol over the session, such as smux (which has an embedded keepalive mechanism). See kcptun for a reference implementation.
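A common setup is to run smux directly on top of the session; smux pings the peer periodically and tears the session down when pings go unanswered. A brief sketch (the keepalive values are examples):

```go
package main

import (
	"log"
	"time"

	kcp "github.com/xtaci/kcp-go/v5"
	"github.com/xtaci/smux"
)

func main() {
	// Plain session without encryption/FEC, just to illustrate the layering.
	conn, err := kcp.DialWithOptions("127.0.0.1:12345", nil, 0, 0)
	if err != nil {
		log.Fatal(err)
	}

	cfg := smux.DefaultConfig()
	cfg.KeepAliveInterval = 10 * time.Second // ping cadence
	cfg.KeepAliveTimeout = 30 * time.Second  // declare the peer dead after this much silence

	// smux multiplexes streams over the single kcp session and handles heartbeats.
	session, err := smux.Client(conn, cfg)
	if err != nil {
		log.Fatal(err)
	}
	stream, err := session.OpenStream()
	if err != nil {
		log.Fatal(err)
	}
	defer stream.Close()
	stream.Write([]byte("hello over smux"))
}
```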
Q: I'm handling >5K connections on my server, and the CPU utilization is very high.
A: A standalone agent or gate server for running kcp-go is recommended, not only to reduce CPU utilization but also to improve the precision of RTT measurements (timing), which indirectly affects retransmission. Increasing the update interval with SetNoDelay, such as conn.SetNoDelay(1, 40, 1, 1), will dramatically reduce system load but may lower performance.
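The knobs from the answer map onto a session like this (a small sketch; values mirror the example above and should be tuned per workload):

```go
package tuning

import kcp "github.com/xtaci/kcp-go/v5"

// tuneForLoad relaxes the flush interval to lower CPU usage on busy gateways.
func tuneForLoad(sess *kcp.UDPSession) {
	// nodelay=1, interval=40ms, resend=1 (fast retransmit on fastack), nc=1 (no congestion control).
	// A 40 ms interval flushes half as often as a 20 ms one, trading latency for CPU.
	sess.SetNoDelay(1, 40, 1, 1)
	// Larger windows help throughput on high-BDP links at the cost of memory per connection.
	sess.SetWindowSize(1024, 1024)
}
```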
Q: When should I enable FEC?
A: Forward error correction is critical for long-distance transmission, because recovering a lost packet via retransmission incurs a significant time penalty. In the complex packet-routing networks of the modern Internet, round-trip-time-based loss detection is not always efficient: over long distances, the large deviation of RTT samples leads to an inflated RTO in typical estimators, which slows down transmission.
Q: Should I enable encryption?
A: Yes, for the security of the protocol, even if the upper layer has encryption.
- https://github.com/xtaci/kcptun -- A Secure Tunnel Based on KCP over UDP.
- https://github.com/getlantern/lantern -- Lantern delivers fast access to the open Internet.
- https://github.com/smallnest/rpcx -- An RPC service framework based on net/rpc, similar to Alibaba Dubbo and Weibo Motan.
- https://github.com/gonet2/agent -- A gateway for games with stream multiplexing.
- https://github.com/syncthing/syncthing -- Open Source Continuous File Synchronization.
- https://github.com/xtaci/libkcp -- FEC enhanced KCP session library for iOS/Android in C++
- https://github.com/xtaci/smux/ -- A Stream Multiplexing Library for golang with least memory
- https://github.com/skywind3000/kcp -- A Fast and Reliable ARQ Protocol
- https://github.com/klauspost/reedsolomon -- Reed-Solomon Erasure Coding in Go