Releases: Kianmhz/GooseRelayVPN

v1.6.0

06 May 03:35

Highlights

Stability under sustained load

The exit server now properly closes sessions on upstream EOF and keeps partial drains queued, so trailing FIN frames always reach the client. Previously, ~270 zombie sessions could accumulate in production (logs showed active=290 sessions against goroutines=51), eventually starving the server and triggering Apps Script HTML error pages, the root cause of the "non-batch payload" reports in issue #56. Closes #31, #44, #62.

Multi-deployment-single-account configs no longer overload Apps Script

Workers and idle long-poll slots now scale by Google account bucket, not endpoint count. Users with multiple deployments under one account previously got 3 × N workers slamming a single account, hitting the per-second concurrency cap and getting HTML error pages back instead of encrypted batches. Workers are now (workersPerEndpoint + idleSlotsPerBucket - 1) × bucketCount. Closes #56; refs #41, #73.
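
For example, with the new defaults (workersPerEndpoint = 4, idle_slots_per_bucket = 1), three deployments labeled as three distinct accounts run (4 + 1 - 1) × 3 = 12 workers instead of v1.5.0's flat 3 per endpoint (9), while the same three deployments left in one bucket run only 4.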

Label your script_keys with the account field — the label is now load-bearing, not just for stats.

New idle_slots_per_bucket knob

Raise the number of concurrent idle long-polls per account when your accounts can sustain it. Default 1 (safe), max 3. May increase download throughput on accounts with multiple deployments; see the config excerpt after the next section. Closes #14 (downstream tuning).

Adaptive uplink coalescing

Set coalesce_step_ms (default 0, off) to collapse bursts of small TX operations into a single Apps Script call. Trades a bit of latency (20–40 ms typical) for fewer UrlFetchApp invocations on interactive workloads. Closes #14.
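
Both knobs appear to live in the client config; a minimal excerpt, assuming client_config.json and showing only the new fields:

```json
{
  "idle_slots_per_bucket": 2,
  "coalesce_step_ms": 20
}
```

The values are illustrative: two idle slots for an account that hosts two deployments, and a 20 ms coalescing step at the low end of the latency cost quoted above.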

Version negotiation + /healthz endpoint

Clients can now query the server's version and protocol on connect; ops can health-check the VPS without holding a tunnel key. Closes #86, #87.
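
For ops, this is a plain HTTP GET; something like `curl -fsS http://<vps-host>:<port>/healthz` (substitute your server's listen address and scheme) can back an uptime monitor or watchdog without exposing a tunnel key.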

Per-account script stats aggregation

The periodic [stats] line now reports actual UrlFetchApp counts per Google account, so you can see which account is doing the work. Closes #55, #81.

Docker integration

Server now ships with a Dockerfile and docker-compose.yml. Closes #88.
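
With the shipped files, `docker compose up -d` should be all it takes to build and start the server; check docker-compose.yml for the port mappings and volumes it assumes.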

Frame compression

Zstandard (preferred) and DEFLATE compression for batch bodies, with transparent fallback for older peers.

Performance & polish

  • HTTP/2 transport tuning: 1 MiB MaxReadFrameSize, ReadIdleTimeout, PingTimeout (#58)
  • TCP_NODELAY + TCP_QUICKACK on SOCKS listener for snappier interactive traffic (#60)
  • Buffer pooling on hot paths to reduce GC pressure (#63)
  • Unpadded raw base64 saves ~1.5% protocol overhead (#64)
  • TLS 1.3 floor (#61)
  • Dynamic queue-age fairness (#67)
  • workersPerEndpoint bumped 3 → 4

Other changes

  • carrier worker count now scales with idle_slots_per_bucket so the TX pool isn't drained when the RX cap is raised
  • Stats: goroutines= and clients= added to the always-on [stats] line for leak detection without pprof
  • systemd: StartLimit* moved to [Unit] with modern field names
  • diagnose: lowercase error messages, JSON doGet pre-flight handling
  • SOCKS session logs gated behind debug_timing
  • README: clarified script_keys format, troubleshooting tips for "Exec format" and "non-batch payload", Termux chmod +x step
  • Issue forms added

Full Changelog: v1.5.0...v1.6.0

Upgrade Notes

Redeploy everything together: client, server, and the Apps Script (Code.gs).

The frame wire format remains backward-compatible (raw base64 decoder still accepts padded legacy format; compression is negotiated), but several v1.6.0 features need both ends to be on v1.6.0:

  • Version negotiation + /healthz (server-side)
  • Compression (both ends must agree)
  • Per-account stats aggregation (Code.gs reports invocation counts back to client)

apps_script/Code.gs has substantive changes (70+ lines across 3 commits since v1.5.0). Redeploy the Apps Script to pick up version negotiation, the RELAY_URL rename, and per-account stats. Mixing a v1.5 Apps Script deployment with a v1.6 client is supported, but you'll miss the new features.

Spread your script_keys across multiple Google accounts when you can. Workers and idle long-polls now scale by distinct account labels:

Config                                               v1.5.0 workers   v1.6.0 workers
1 deployment / 1 account                             3                4
3 deployments / 1 account (unlabeled or same label)  9                4
3 deployments / 3 accounts (each labeled)            9                12
6 deployments / 3 accounts                           18               18
6 deployments / 6 accounts                           18               24

If your script_keys are unlabeled, the carrier treats them as one bucket and logs a startup WARN advising you to label them. Add "account": "A", "B", etc. to each entry in script_keys — the label is now load-bearing, not cosmetic.

Single-account users: try idle_slots_per_bucket: 2 (or 3, which is the cap) if your account has 2+ deployments. May increase download throughput at the cost of more simultaneous executions per account. Don't set it above what your account empirically tolerates — going too high re-triggers issue #56 (HTML error pages instead of encrypted batches).
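
A sketch of a labeled two-account config combining both recommendations. Only the account field and the idle_slots_per_bucket knob are named in these notes; the url field and overall entry shape are illustrative, so check the README's script_keys section for the exact format:

```json
{
  "script_keys": [
    { "url": "https://script.google.com/macros/s/AKfycb.../exec", "account": "A" },
    { "url": "https://script.google.com/macros/s/AKfycb.../exec", "account": "A" },
    { "url": "https://script.google.com/macros/s/AKfycb.../exec", "account": "B" }
  ],
  "idle_slots_per_bucket": 2
}
```

Here accounts A and B form two buckets, so workers and idle long-polls scale by 2 rather than by the 3 endpoints.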

v1.5.0

01 May 20:04

Highlights

Lower latency for new connections

SYN frames now bundle with the first data payload, eliminating a round-trip on every new TCP session. On the exit side, SYN dials are parallelized inside each batch so one slow upstream can no longer block the rest.
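
A minimal Go sketch of the parallel-dial shape, with hypothetical names (SynFrame, dialBatch, handle) rather than the actual source; the point is that each SYN in a batch dials on its own goroutine, so one slow upstream only delays its own session:

```go
package relay

import (
	"net"
	"sync"
	"time"
)

// SynFrame is a stand-in for a decoded SYN carrying the dial target.
type SynFrame struct{ Target string }

// dialBatch opens every SYN target in the batch concurrently and hands
// successful connections to handle; a failed dial only skips its own session.
func dialBatch(syns []SynFrame, handle func(SynFrame, net.Conn)) {
	var wg sync.WaitGroup
	for _, s := range syns {
		wg.Add(1)
		go func(s SynFrame) {
			defer wg.Done()
			conn, err := net.DialTimeout("tcp", s.Target, 10*time.Second)
			if err != nil {
				return // this session fails; the rest of the batch proceeds
			}
			handle(s, conn)
		}(s)
	}
	wg.Wait()
}
```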

Multi-client server support

A single exit server can now safely serve multiple clients. ClientID is included in the encrypted batch and enforced server-side, so sessions from different clients are fully isolated.

Smarter front selection

At startup the client probes each configured SNI host and drops clear outliers, so a brittle Google front no longer degrades the whole tunnel.

Better behavior under download-heavy traffic

  • Adaptive backoff on idle long-polls (#41)
  • Response body cap that keeps replies under the carrier client limit (#22)

Other changes

  • ASCII art banner on client startup
  • README: direct download links for all platforms, daily quota info, full Android/Termux setup section
  • Replaced `gen-key.sh` with an inline `openssl rand -hex 32` command

Full Changelog: v1.4.1...v1.5.0

Upgrade Notes

This release changes the wire format: a 16-byte clientID is now included in the encrypted batch. You must upgrade both the client and the server together. A v1.5.0 client cannot talk to a v1.4.x server (or vice versa); the symptom of a version mismatch is the client logging `relay returned non-batch payload`.

v1.4.1

30 Apr 04:08

Full Changelog: v1.4.0...v1.4.1

v1.4.0

30 Apr 03:13

What's New in v1.4.0

✨ New Features

  • SOCKS5 username/password authentication (RFC 1929) — Optionally require credentials on the local SOCKS5 listener via socks_user and socks_pass in client_config.json. Useful for shared or LAN setups where you want to control who can use your tunnel. (Config sketch after this list.)
  • Server upstream proxy (upstream_proxy) — Route all outbound connections from the VPS through a local SOCKS5 proxy such as Cloudflare WARP. Sites that block datacenter IPs see the proxy's IP instead. Set upstream_proxy in server_config.json.
  • Graceful client shutdown — On Ctrl+C the client sends RST frames for all active sessions so the server releases upstream connections immediately rather than waiting for the idle GC timer.
  • Server-side idle session GC — Upstream TCP connections orphaned by ungraceful client disconnects (killed process, network drop, sleep/wake) are now force-closed after 10 minutes, preventing slow resource exhaustion over repeated disconnect cycles.
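
Minimal sketches of where the two new fields go, with everything else omitted. The socks_user, socks_pass, and upstream_proxy names are from these notes; the upstream_proxy value format (a host:port string pointing at WARP's default local SOCKS port) is an assumption, so check the README if your setup differs.

client_config.json:

```json
{
  "socks_user": "alice",
  "socks_pass": "change-me"
}
```

server_config.json (Cloudflare WARP's proxy mode listens on 127.0.0.1:40000 by default):

```json
{
  "upstream_proxy": "127.0.0.1:40000"
}
```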

⚡ Performance

  • Download throughput improvement — Poll worker idle sleep reduced from 50 ms to 10 ms, keeping workers in the pool fast enough to sustain concurrent download polls without gaps.
  • 3 poll workers per deployment key — Each script_keys entry now spawns 3 concurrent poll workers, scaling parallelism proportionally with the number of configured endpoints.

🛡️ Stability

  • Session receive queue 64 → 1024 frames — The per-session inbox can absorb large multi-user fan-out bursts without dropping connections.
  • Graceful receive backpressure — Instead of immediately killing a session when the inbox is full (which caused mid-stream drops during brief GC pauses), the server now waits up to 5 s for the consumer to drain before giving up; see the sketch after this list.
  • Drain window fix — Any non-empty client batch now triggers the short active drain window, so connection setup and teardown are no longer gated on the 8-second long-poll window.
  • Idle GC respects outbound traffic — A long pure-download session (large file, video stream) with no client→server frames is no longer incorrectly force-closed by the idle GC.
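
A minimal Go sketch of the backpressure pattern described above. Names (Frame, inbox, errSessionStalled) are hypothetical, not the actual GooseRelayVPN source; it only illustrates the fast-path/bounded-wait structure:

```go
package relay

import (
	"errors"
	"time"
)

// Frame is a stand-in for a decoded tunnel frame.
type Frame struct{ Payload []byte }

var errSessionStalled = errors.New("session inbox stalled; dropping session")

// enqueue tries the non-blocking fast path first; if the per-session inbox
// is full, it waits up to 5 s for the consumer to drain instead of killing
// the session immediately.
func enqueue(inbox chan Frame, f Frame) error {
	select {
	case inbox <- f: // fast path: consumer is keeping up
		return nil
	default: // inbox full: fall through to the bounded wait
	}
	select {
	case inbox <- f: // consumer drained within the window
		return nil
	case <-time.After(5 * time.Second): // still stuck: give up
		return errSessionStalled
	}
}
```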

📖 Documentation & Developer Tools

  • Windows Server setup guide — Step-by-step NSSM instructions in both English and Persian READMEs for running goose-server as a Windows service.
  • End-to-end benchmark harness (bench/) — Real goose-client + goose-server loopback suite measuring throughput, TTFB, session rate, and idle CPU. Run `bash bench/bench.sh` to validate performance impact before opening a PR.

v1.3.0

28 Apr 04:16

Full Changelog: v1.2.0...v1.3.0

v1.2.0

27 Apr 16:25

Full Changelog: v1.1.0...v1.2.0

v1.1.0

27 Apr 02:12

Full Changelog: v1.0.1...v1.1.0

v1.0.1

27 Apr 01:00

Full Changelog: v1.0.0...v1.0.1