Conversation
appleboy commented on Feb 21, 2026
- Introduce a new bootstrap package to encapsulate application startup, initialization, and graceful shutdown
- Move all bootstrapping, initialization, and shutdown logic from main.go into dedicated internal/bootstrap files
- Add structured setup for infrastructure components including database, metrics, cache, Redis, and business services
- Consolidate Gin router, HTTP handlers, OAuth providers, and rate limiting middlewares under bootstrap
- Add comprehensive unit tests for configuration validation, metrics, OAuth, and rate limiting logic
- Replace code in main.go with a single bootstrap.Run entry point, streamlining the application startup
- Improve modularity, clarity, and maintainability by separating initialization concerns from main logic
```go
	m.RecordDatabaseQueryError("count_access_tokens")
	gaugeErrorLogger.logIfNeeded("count_access_tokens", err)
} else {
	m.SetActiveTokensCount("access", int(activeAccessTokens))
```
Check failure — Code scanning / CodeQL: Incorrect conversion between integer types (High)
Copilot Autofix
AI 11 days ago
General approach: avoid narrowing conversions from int64 (parsed with 64‑bit width) to a potentially smaller int without enforcing bounds. We can either (a) change the metric API so it accepts int64, or (b) clamp the int64 value to the valid int range before casting so it cannot overflow, preserving current API signatures. We don’t see the MetricsRecorder interface here, so we must keep its int parameters and add explicit clamping logic at the conversion points.
Best targeted fix: in internal/bootstrap/server.go, introduce a small helper that safely converts int64 to int by clamping to math.MaxInt/math.MinInt. Then use this helper wherever we currently do int(...) on values from the cache (activeAccessTokens, activeRefreshTokens, totalDeviceCodes, pendingDeviceCodes). This ensures that on 32‑bit platforms we never overflow, while on 64‑bit platforms behavior is unchanged. To implement this we need to:
- Add an import of the standard `math` package in `internal/bootstrap/server.go`.
- Define a helper, e.g. `func safeIntFromInt64(v int64) int`.
- Replace:
  - `m.SetActiveTokensCount("access", int(activeAccessTokens))`
  - `m.SetActiveTokensCount("refresh", int(activeRefreshTokens))`
  - `m.SetActiveDeviceCodesCount(int(totalDeviceCodes), int(pendingDeviceCodes))`

  with calls to `safeIntFromInt64(...)`.
No changes are needed in internal/cache/rueidis.go or internal/cache/rueidis_aside.go, because they already parse into int64 correctly. The issue is only at the cast to int in updateGaugeMetricsWithCache.
```diff
@@ -3,6 +3,7 @@
 import (
 	"context"
 	"log"
+	"math"
 	"net/http"
 	"time"

@@ -16,6 +17,17 @@
 	"github.com/redis/go-redis/v9"
 )

+// safeIntFromInt64 safely converts an int64 to int, clamping to the valid range.
+func safeIntFromInt64(v int64) int {
+	if v > int64(math.MaxInt) {
+		return math.MaxInt
+	}
+	if v < int64(math.MinInt) {
+		return math.MinInt
+	}
+	return int(v)
+}
+
 // createHTTPServer creates the HTTP server instance
 func createHTTPServer(cfg *config.Config, handler http.Handler) *http.Server {
 	return &http.Server{
@@ -230,7 +242,7 @@
 		m.RecordDatabaseQueryError("count_access_tokens")
 		gaugeErrorLogger.logIfNeeded("count_access_tokens", err)
 	} else {
-		m.SetActiveTokensCount("access", int(activeAccessTokens))
+		m.SetActiveTokensCount("access", safeIntFromInt64(activeAccessTokens))
 	}

 	// Update active refresh tokens count
@@ -239,7 +251,7 @@
 		m.RecordDatabaseQueryError("count_refresh_tokens")
 		gaugeErrorLogger.logIfNeeded("count_refresh_tokens", err)
 	} else {
-		m.SetActiveTokensCount("refresh", int(activeRefreshTokens))
+		m.SetActiveTokensCount("refresh", safeIntFromInt64(activeRefreshTokens))
 	}

 	// Update active device codes count
@@ -257,5 +269,8 @@
 		pendingDeviceCodes = 0
 	}

-	m.SetActiveDeviceCodesCount(int(totalDeviceCodes), int(pendingDeviceCodes))
+	m.SetActiveDeviceCodesCount(
+		safeIntFromInt64(totalDeviceCodes),
+		safeIntFromInt64(pendingDeviceCodes),
+	)
 }
```
```go
	m.RecordDatabaseQueryError("count_refresh_tokens")
	gaugeErrorLogger.logIfNeeded("count_refresh_tokens", err)
} else {
	m.SetActiveTokensCount("refresh", int(activeRefreshTokens))
```
Check failure — Code scanning / CodeQL: Incorrect conversion between integer types (High)
Copilot Autofix
AI 11 days ago
In general, to fix this type of issue you must avoid narrowing an int64 (parsed with strconv.ParseInt using 64 bits) to a smaller integer type without ensuring the value is within that type’s valid range. You can do this by either (a) parsing directly into the narrower type, or (b) adding explicit upper/lower bound checks before the cast. Here, the problematic conversions are to int purely to feed metric setter methods, and the logical domain of these values (counts) is non‑negative and naturally fits in int64.
The simplest, behavior‑preserving fix is to avoid narrowing at all by changing the metric recording calls to work with int64 instead of int. Assuming changing the interface of metrics.MetricsRecorder is out of scope (and not shown), a safer localized fix is to clamp the int64 counts to the range of int before converting. However, for metrics like “number of tokens/device codes”, it is reasonable and safe to treat negative counts as zero and very large counts as math.MaxInt to avoid overflow. To keep within the provided snippets only, we can convert using int64 → int while guarding with a saturating helper that clamps to [0, math.MaxInt]. We can implement this helper directly inside internal/bootstrap/server.go and use it at the three call sites currently doing int(...).
Concretely in internal/bootstrap/server.go, we will:
- Import the standard `math` package.
- Add a small helper function, e.g. `safeIntFromInt64(v int64) int`, that:
  - Treats negative values as 0 (since counts cannot be negative).
  - If `v` is greater than `int64(math.MaxInt)`, returns `math.MaxInt`.
  - Otherwise returns `int(v)`.
- Replace:
  - `m.SetActiveTokensCount("access", int(activeAccessTokens))`
  - `m.SetActiveTokensCount("refresh", int(activeRefreshTokens))`
  - `m.SetActiveDeviceCodesCount(int(totalDeviceCodes), int(pendingDeviceCodes))`

  with calls that use `safeIntFromInt64(...)` instead of raw casts.
This ensures that, regardless of architecture, we never perform an unchecked narrowing conversion from int64 to int, thereby addressing all CodeQL variants that point to these sinks.
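The alternative mentioned above — parsing directly into the narrower type — is also available in the standard library: `strconv.ParseInt` with a bitSize of 0 range-checks the parsed value against `int` on the current platform, returning a range error instead of requiring a later narrowing conversion. A hedged sketch, independent of the repository's cache code (`parseCount` is a hypothetical name):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseCount parses a cached counter string so the result is guaranteed to
// fit in int on the current platform. A bitSize of 0 tells ParseInt to
// range-check against int rather than int64, so the final conversion to int
// is always safe.
func parseCount(s string) (int, error) {
	v, err := strconv.ParseInt(s, 10, 0)
	if err != nil {
		return 0, fmt.Errorf("parse count %q: %w", s, err)
	}
	return int(v), nil
}

func main() {
	n, err := parseCount("12345")
	fmt.Println(n, err) // → 12345 <nil>
}
```

Note that this moves the bounds check to parse time, which only helps when the cache layer itself can be changed; the clamping helper is the more localized fix.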
```diff
@@ -3,6 +3,7 @@
 import (
 	"context"
 	"log"
+	"math"
 	"net/http"
 	"time"

@@ -16,6 +17,19 @@
 	"github.com/redis/go-redis/v9"
 )

+// safeIntFromInt64 safely converts an int64 value to int without overflowing.
+// Negative values are clamped to 0, and values greater than math.MaxInt are clamped to math.MaxInt.
+func safeIntFromInt64(v int64) int {
+	if v < 0 {
+		return 0
+	}
+	// Clamp to math.MaxInt to avoid overflow on 32-bit architectures.
+	if v > int64(math.MaxInt) {
+		return math.MaxInt
+	}
+	return int(v)
+}
+
 // createHTTPServer creates the HTTP server instance
 func createHTTPServer(cfg *config.Config, handler http.Handler) *http.Server {
 	return &http.Server{
@@ -230,7 +244,7 @@
 		m.RecordDatabaseQueryError("count_access_tokens")
 		gaugeErrorLogger.logIfNeeded("count_access_tokens", err)
 	} else {
-		m.SetActiveTokensCount("access", int(activeAccessTokens))
+		m.SetActiveTokensCount("access", safeIntFromInt64(activeAccessTokens))
 	}

 	// Update active refresh tokens count
@@ -239,7 +253,7 @@
 		m.RecordDatabaseQueryError("count_refresh_tokens")
 		gaugeErrorLogger.logIfNeeded("count_refresh_tokens", err)
 	} else {
-		m.SetActiveTokensCount("refresh", int(activeRefreshTokens))
+		m.SetActiveTokensCount("refresh", safeIntFromInt64(activeRefreshTokens))
 	}

 	// Update active device codes count
@@ -257,5 +271,8 @@
 		pendingDeviceCodes = 0
 	}

-	m.SetActiveDeviceCodesCount(int(totalDeviceCodes), int(pendingDeviceCodes))
+	m.SetActiveDeviceCodesCount(
+		safeIntFromInt64(totalDeviceCodes),
+		safeIntFromInt64(pendingDeviceCodes),
+	)
 }
```
```go
	pendingDeviceCodes = 0
}

m.SetActiveDeviceCodesCount(int(totalDeviceCodes), int(pendingDeviceCodes))
```
Check failure — Code scanning / CodeQL: Incorrect conversion between integer types (High)
Copilot Autofix
AI 11 days ago
In general, to fix this category of issue you should avoid narrowing conversions from a larger integer type (here int64) to a smaller one (int, potentially 32‑bit) without checking that the value fits in the target type’s range. If the API can be changed, an even better solution is to keep using the wider type throughout (e.g., int64 all the way into the metrics recorder), eliminating narrowing conversions entirely.
For this specific case, the cleanest fix without changing existing functionality is to adjust the updateGaugeMetricsWithCache function so that it never converts the 64‑bit counts into plain int. Instead, we should pass int64 to the metrics recorder. Since we cannot modify files we haven’t been shown, the only safe assumption is that we can change how we call the recorder; that implies updating the recorder interface to accept int64. However, we are constrained to only modify the provided snippets. To avoid touching other files, we instead perform a safe, bounds‑checked conversion: before casting int64 to int, compare against math.MaxInt and handle any out‑of‑range values gracefully. This directly addresses the CodeQL alert without changing external behavior in normal ranges.
Concretely, in internal/bootstrap/server.go, we will:
- Import the standard `math` package.
- Replace the direct conversions:
  - `m.SetActiveDeviceCodesCount(int(totalDeviceCodes), int(pendingDeviceCodes))`
  - `m.SetActiveTokensCount("access", int(activeAccessTokens))`
  - `m.SetActiveTokensCount("refresh", int(activeRefreshTokens))`
- With helper logic that:
  - Clamps negative values to 0 (since counts should not be negative).
  - If the count exceeds `math.MaxInt`, records a database query error and logs via `gaugeErrorLogger`, then uses `math.MaxInt` for the metric (or 0, depending on your preference). This preserves functionality for valid ranges and avoids undefined behavior on overflow.
We also ensure all changes stay within internal/bootstrap/server.go as required.
```diff
@@ -3,6 +3,7 @@
 import (
 	"context"
 	"log"
+	"math"
 	"net/http"
 	"time"

@@ -218,6 +219,22 @@
 // updateGaugeMetricsWithCache updates gauge metrics using a cache-backed store.
 // This reduces database load in multi-instance deployments by caching query results.
 // The cache TTL should match the update interval to ensure consistent behavior.

+// clampInt64ToInt safely converts an int64 count to int, preventing overflow on 32-bit systems.
+// If the value is negative, it is treated as 0. If it exceeds math.MaxInt, it is capped at
+// math.MaxInt and an error is recorded for the given metricName.
+func clampInt64ToInt(value int64, metricName string, m metrics.MetricsRecorder) int {
+	if value < 0 {
+		return 0
+	}
+	if value > int64(math.MaxInt) {
+		// Record an overflow condition as a database query error for observability.
+		m.RecordDatabaseQueryError(metricName + "_overflow")
+		return math.MaxInt
+	}
+	return int(value)
+}
+
 func updateGaugeMetricsWithCache(
 	ctx context.Context,
 	cacheWrapper *metrics.MetricsCacheWrapper,
@@ -230,7 +247,8 @@
 		m.RecordDatabaseQueryError("count_access_tokens")
 		gaugeErrorLogger.logIfNeeded("count_access_tokens", err)
 	} else {
-		m.SetActiveTokensCount("access", int(activeAccessTokens))
+		count := clampInt64ToInt(activeAccessTokens, "count_access_tokens", m)
+		m.SetActiveTokensCount("access", count)
 	}

 	// Update active refresh tokens count
@@ -239,7 +257,8 @@
 		m.RecordDatabaseQueryError("count_refresh_tokens")
 		gaugeErrorLogger.logIfNeeded("count_refresh_tokens", err)
 	} else {
-		m.SetActiveTokensCount("refresh", int(activeRefreshTokens))
+		count := clampInt64ToInt(activeRefreshTokens, "count_refresh_tokens", m)
+		m.SetActiveTokensCount("refresh", count)
 	}

 	// Update active device codes count
@@ -257,5 +276,7 @@
 		pendingDeviceCodes = 0
 	}

-	m.SetActiveDeviceCodesCount(int(totalDeviceCodes), int(pendingDeviceCodes))
+	total := clampInt64ToInt(totalDeviceCodes, "count_total_device_codes", m)
+	pending := clampInt64ToInt(pendingDeviceCodes, "count_pending_device_codes", m)
+	m.SetActiveDeviceCodesCount(total, pending)
 }
```
```go
	pendingDeviceCodes = 0
}

m.SetActiveDeviceCodesCount(int(totalDeviceCodes), int(pendingDeviceCodes))
```
Check failure — Code scanning / CodeQL: Incorrect conversion between integer types (High)
Copilot Autofix
AI 11 days ago
General approach: avoid converting int64 values returned from the cache to a potentially smaller int when recording metrics. Instead, keep using int64 throughout the metrics path, or, if the metrics API truly requires int, add explicit upper‑bound checks before converting.
Best fix here: change the metrics recorder interface (or at least the call sites we see) to accept int64 instead of int for these counts, and pass the int64 values directly without conversion. However, we are not allowed to modify unseen files or interfaces. The safer, minimal change we can make within the shown snippet is to clamp the int64 values to the maximum safe int value before casting. That satisfies CodeQL’s requirement for a bound check and preserves existing behavior for all values in the supported range.
Concretely, in internal/bootstrap/server.go inside updateGaugeMetricsWithCache, we will:
- Import the `math` package (if not already imported in the full file; within the shown snippet, it is not).
- Before calling `m.SetActiveDeviceCodesCount`, compute clamped `int64` values `safeTotal := totalDeviceCodes` and `safePending := pendingDeviceCodes`:
  - If `totalDeviceCodes > int64(math.MaxInt)`, set `safeTotal = int64(math.MaxInt)`.
  - Similarly clamp negatives if needed (e.g., treat negative as 0, since counts should not be negative).
- Then convert these clamped values to `int` when calling `SetActiveDeviceCodesCount`.
This confines all changes to internal/bootstrap/server.go and ensures that any narrowing conversion from int64 to int is protected by an explicit upper (and lower) bound check.
```diff
@@ -3,6 +3,7 @@
 import (
 	"context"
 	"log"
+	"math"
 	"net/http"
 	"time"

@@ -257,5 +258,20 @@
 		pendingDeviceCodes = 0
 	}

-	m.SetActiveDeviceCodesCount(int(totalDeviceCodes), int(pendingDeviceCodes))
+	// Clamp values to the valid int range before converting.
+	safeTotal := totalDeviceCodes
+	if safeTotal < 0 {
+		safeTotal = 0
+	} else if safeTotal > int64(math.MaxInt) {
+		safeTotal = int64(math.MaxInt)
+	}
+
+	safePending := pendingDeviceCodes
+	if safePending < 0 {
+		safePending = 0
+	} else if safePending > int64(math.MaxInt) {
+		safePending = int64(math.MaxInt)
+	}
+
+	m.SetActiveDeviceCodesCount(int(safeTotal), int(safePending))
 }
```
Pull request overview
This PR refactors application startup by moving initialization, HTTP wiring, and shutdown orchestration out of main.go into a dedicated internal/bootstrap package, with main.go reduced to a single bootstrap.Run entry point.
Changes:
- Replace most of `main.go` startup logic with `bootstrap.Run(cfg, templatesFS)`.
- Add a modular bootstrap layer (config validation, DB/metrics/cache/Redis init, handlers/router/server wiring, graceful shutdown jobs).
- Add unit tests for bootstrap configuration validation, metrics cache behavior, OAuth provider selection, and rate limiting setup.
Reviewed changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| main.go | Switches startup to bootstrap.Run and removes inline initialization logic. |
| internal/bootstrap/bootstrap.go | Introduces an Application container and phased initialization + graceful shutdown. |
| internal/bootstrap/config.go | Centralizes config validation for auth/token modes and base config validation. |
| internal/bootstrap/database.go | Moves DB initialization into bootstrap with error wrapping. |
| internal/bootstrap/cache.go | Moves metrics + metrics cache initialization into bootstrap. |
| internal/bootstrap/redis.go | Adds Redis client initialization for rate limiting. |
| internal/bootstrap/providers.go | Moves HTTP API auth/token provider wiring into bootstrap. |
| internal/bootstrap/services.go | Moves service construction into bootstrap. |
| internal/bootstrap/oauth.go | Moves OAuth provider setup + OAuth HTTP client construction into bootstrap. |
| internal/bootstrap/handlers.go | Moves handler construction into bootstrap. |
| internal/bootstrap/router.go | Moves Gin middleware/routes wiring, static serving, metrics, and health endpoints into bootstrap. |
| internal/bootstrap/ratelimit.go | Moves rate limiter middleware construction into bootstrap. |
| internal/bootstrap/server.go | Moves HTTP server construction and graceful job implementations into bootstrap. |
| internal/bootstrap/bootstrap_test.go | Adds unit tests covering key bootstrap helpers and config validation. |
Comments suppressed due to low confidence (1)
internal/bootstrap/ratelimit.go:60
- The log message "Using shared Redis client for rate limiting (provided externally)" is misleading now that the Redis client is initialized by the bootstrap package itself. Updating this wording (and related comments) would avoid confusion when debugging startup/rate limiting issues.
```go
// Log rate limiting configuration
if storeType == middleware.RateLimitStoreRedis {
	log.Printf("Using shared Redis client for rate limiting (provided externally)")
} else {
	log.Printf("In-memory rate limiting configured (single instance only)")
```
```go
// Get OAuth providers
oauthProviders := initializeOAuthProviders(cfg)
```
setupAllRoutes re-initializes OAuth providers from config, even though providers were already created in initializeHTTPLayer and used to construct handlers. This duplicates work and risks subtle divergence if provider initialization ever becomes non-deterministic. Prefer passing the already-built oauthProviders into setupAllRoutes (or storing it in handlerSet) and removing this second call.
```go
createLimiter := func(requestsPerMinute int, endpoint string) gin.HandlerFunc {
	limiter, err := middleware.NewRateLimiter(middleware.RateLimitConfig{
		RequestsPerMinute: requestsPerMinute,
		StoreType:         storeType,
		RedisClient:       redisClient, // Use provided client (nil for memory store)
		CleanupInterval:   cfg.RateLimitCleanupInterval,
		AuditService:      auditService, // Add audit service for logging
	})
	if err != nil {
		log.Fatalf("Failed to create rate limiter for %s: %v", endpoint, err)
	}
```
createRateLimiters handles limiter construction errors with log.Fatalf, which prevents bootstrap.Run from returning a structured error and makes the behavior inconsistent with other initialization steps that return errors. Consider changing this to return (rateLimitMiddlewares, error) (or similar) and propagate failures up so the caller can decide how to exit/log.
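The shape that comment suggests — construction returning an error instead of exiting — can be sketched roughly as follows. Note that `limiter`, `newLimiter`, and the endpoint map are hypothetical stand-ins, not the real `middleware` package API:

```go
package main

import (
	"errors"
	"fmt"
)

// limiter and newLimiter are hypothetical stand-ins for the real
// middleware.NewRateLimiter and its config struct.
type limiter struct{ rpm int }

func newLimiter(rpm int) (*limiter, error) {
	if rpm <= 0 {
		return nil, errors.New("requests per minute must be positive")
	}
	return &limiter{rpm: rpm}, nil
}

// createRateLimiters builds one limiter per endpoint and propagates the first
// construction failure instead of calling log.Fatalf, so the caller
// (e.g. bootstrap.Run) can decide how to log and exit.
func createRateLimiters(endpoints map[string]int) (map[string]*limiter, error) {
	limiters := make(map[string]*limiter, len(endpoints))
	for endpoint, rpm := range endpoints {
		l, err := newLimiter(rpm)
		if err != nil {
			return nil, fmt.Errorf("create rate limiter for %s: %w", endpoint, err)
		}
		limiters[endpoint] = l
	}
	return limiters, nil
}

func main() {
	_, err := createRateLimiters(map[string]int{"/token": 60, "/authorize": 0})
	fmt.Println(err != nil) // → true: the bad config surfaces as an error, not a process exit
}
```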
```go
go func() {
	if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		log.Fatalf("Failed to start server: %v", err)
	}
}()
<-ctx.Done()
return nil
```
addServerRunningJob calls log.Fatalf from a goroutine if ListenAndServe fails (e.g., port already in use). This hard-exits the process and bypasses the graceful manager + Run's error return, which makes failures harder to handle and test. Consider reporting the listen error back to the bootstrap layer (e.g., via an error channel that triggers manager shutdown) and returning it from Run.
```diff
-go func() {
-	if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
-		log.Fatalf("Failed to start server: %v", err)
-	}
-}()
-<-ctx.Done()
-return nil
+errCh := make(chan error, 1)
+go func() {
+	if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
+		errCh <- err
+	}
+	close(errCh)
+}()
+select {
+case <-ctx.Done():
+	return nil
+case err := <-errCh:
+	return err
+}
```
```go
func serveStaticFiles(r *gin.Engine, templatesFS embed.FS) {
	staticSubFS, err := fs.Sub(templatesFS, "internal/templates/static")
	if err != nil {
		log.Fatalf("Failed to create static sub filesystem: %v", err)
```
serveStaticFiles uses log.Fatalf on fs.Sub failure, which bypasses the Run error return path and makes startup failures harder to surface in tests. Consider returning an error from serveStaticFiles and propagating it up through setupRouter/Run.
```diff
-log.Fatalf("Failed to create static sub filesystem: %v", err)
+log.Printf("Failed to create static sub filesystem: %v", err)
+return
```
```go
	"log"

	"github.com/appleboy/authgate/internal/config"
)

// validateAllConfiguration validates all configuration settings
func validateAllConfiguration(cfg *config.Config) {
	if err := cfg.Validate(); err != nil {
		log.Fatalf("Invalid configuration: %v", err)
	}
	if err := validateAuthConfig(cfg); err != nil {
		log.Fatalf("Invalid authentication configuration: %v", err)
	}
	if err := validateTokenProviderConfig(cfg); err != nil {
		log.Fatalf("Invalid token provider configuration: %v", err)
	}
```
validateAllConfiguration terminates the process via log.Fatalf, which bypasses bootstrap.Run's error return path and makes this package harder to reuse/test. Consider changing this function to return an error (wrapping the specific validation failure) and let Run/main decide whether to log.Fatal.
```diff
 import (
-	"log"
-
 	"github.com/appleboy/authgate/internal/config"
 )

 // validateAllConfiguration validates all configuration settings
-func validateAllConfiguration(cfg *config.Config) {
+func validateAllConfiguration(cfg *config.Config) error {
 	if err := cfg.Validate(); err != nil {
-		log.Fatalf("Invalid configuration: %v", err)
+		return fmt.Errorf("invalid configuration: %w", err)
 	}
 	if err := validateAuthConfig(cfg); err != nil {
-		log.Fatalf("Invalid authentication configuration: %v", err)
+		return fmt.Errorf("invalid authentication configuration: %w", err)
 	}
 	if err := validateTokenProviderConfig(cfg); err != nil {
-		log.Fatalf("Invalid token provider configuration: %v", err)
+		return fmt.Errorf("invalid token provider configuration: %w", err)
 	}
+	return nil
```
```go
	httpclient.WithTransport(transport),
)
if err != nil {
	log.Fatalf("Failed to create OAuth HTTP client: %v", err)
```
createOAuthHTTPClient uses log.Fatalf on construction failure, which again bypasses Run's returned error and makes failures harder to handle/verify in tests. Consider returning (*http.Client, error) and propagating that up through initializeHTTPLayer/Run.
```diff
-log.Fatalf("Failed to create OAuth HTTP client: %v", err)
+log.Printf("Failed to create OAuth HTTP client, falling back to standard HTTP client: %v", err)
+return &http.Client{
+	Transport: transport,
+	Timeout:   cfg.OAuthTimeout,
+}
```
- Use httpclient.NewClient instead of httpclient.NewAuthClient for creating the OAuth HTTP client

Signed-off-by: appleboy <[email protected]>