This document provides comprehensive security guidelines and best practices for using the cybergodev/json library in production environments.
- Overview
- Security Features
- Configuration
- Input Validation
- Path Security
- File Operations Security
- Cache Security
- Error Handling
- Best Practices
- Security Checklist
- Vulnerability Reporting
The cybergodev/json library is designed with security as a core principle. It provides multiple layers of protection against common security threats including:
- Denial of Service (DoS) attacks through resource exhaustion
- Path traversal attacks in file operations
- Injection attacks through malicious JSON paths
- Memory exhaustion through oversized JSON payloads
- Stack overflow through deeply nested JSON structures
- Information disclosure through error messages and caching
The library performs comprehensive validation on all JSON inputs:
// Automatic validation with default configuration
processor := json.New()
defer processor.Close()
// Validation includes:
// - JSON size limits (default: 10MB)
// - Nesting depth limits (default: 50 levels)
// - Object key count limits (default: 10,000 keys)
// - Array element limits (default: 10,000 elements)
// - Malicious pattern detection
Built-in protection against resource exhaustion:
- JSON Size Limit: Prevents processing of oversized payloads
- Nesting Depth Limit: Prevents stack overflow attacks
- Path Depth Limit: Prevents excessive path traversal
- Concurrency Limit: Prevents thread exhaustion
- Cache Size Limit: Prevents memory exhaustion
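For example, a payload larger than the configured MaxJSONSize is rejected before it is parsed. A minimal sketch, with the limit set artificially low for demonstration and the error check mirroring the ErrSizeLimit handling shown later in this guide:
cfg := json.DefaultConfig()
cfg.MaxJSONSize = 1024 // demonstration only: 1 KB limit
processor := json.New(cfg)
defer processor.Close()

// A 2 KB payload exceeds the configured limit and is rejected up front
oversized := `{"data":"` + strings.Repeat("A", 2048) + `"}`
if _, err := processor.Get(oversized, "data"); errors.Is(err, json.ErrSizeLimit) {
    log.Printf("payload rejected by size limit: %v", err)
}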
Automatic sanitization of JSON paths to prevent injection attacks:
// Paths are validated for:
// - Null bytes (\x00)
// - Excessive length (>10,000 characters)
// - Suspicious patterns (script tags, eval, etc.)
// - Path traversal attempts (../, ..\)
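If your application accepts JSON paths from users, a lightweight pre-check can complement the library's internal validation. This is an application-level sketch only; the helper name looksSafePath is illustrative, and the rules mirror the list above:
// looksSafePath applies the same basic rules the library enforces:
// no null bytes, bounded length, and no path traversal sequences.
func looksSafePath(path string) bool {
    if len(path) == 0 || len(path) > 10000 {
        return false
    }
    if strings.ContainsRune(path, '\x00') {
        return false
    }
    if strings.Contains(path, "../") || strings.Contains(path, `..\`) {
        return false
    }
    return true
}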
Cache operations include security measures:
- Sensitive data detection: Automatic exclusion of sensitive data from cache
- Secure hashing: SHA-256 based cache key generation
- Cache key validation: Prevention of cache injection attacks
- TTL enforcement: Automatic expiration of cached data
The default configuration provides balanced security and performance:
config := json.DefaultConfig()
// Default security settings:
// - MaxJSONSize: 10MB
// - MaxPathDepth: 100
// - MaxNestingDepthSecurity: 50
// - MaxObjectKeys: 10,000
// - MaxArrayElements: 10,000
// - EnableValidation: true
For high-security environments, use the HighSecurityConfig:
config := json.HighSecurityConfig()
processor := json.New(config)
defer processor.Close()
// High security settings:
// - MaxJSONSize: 5MB (more restrictive)
// - MaxPathDepth: 20 (shallow paths only)
// - MaxNestingDepthSecurity: 20 (very restrictive)
// - MaxObjectKeys: 1,000 (fewer keys)
// - MaxArrayElements: 1,000 (fewer elements)
// - StrictMode: true (strict validation)
// - EnableValidation: true (forced validation)
Create a custom configuration for specific security requirements:
config := &json.Config{
// Size and depth limits
MaxJSONSize: 5 * 1024 * 1024, // 5MB
MaxPathDepth: 30, // Maximum path depth
MaxNestingDepthSecurity: 25, // Maximum nesting depth
MaxObjectKeys: 5000, // Maximum object keys
MaxArrayElements: 5000, // Maximum array elements
MaxSecurityValidationSize: 10 * 1024 * 1024, // 10MB validation limit
// Validation settings
EnableValidation: true, // Enable input validation
StrictMode: true, // Enable strict mode
ValidateInput: true, // Validate all inputs
// Rate limiting
EnableRateLimit: true, // Enable rate limiting
RateLimitPerSec: 1000, // Max operations per second
// Concurrency limits
MaxConcurrency: 20, // Maximum concurrent operations
MaxBatchSize: 100, // Maximum batch size
// Cache settings
EnableCache: true,
MaxCacheSize: 1000,
CacheTTL: 5 * time.Minute,
}
processor := json.New(config)
defer processor.Close()
Check current security configuration:
limits := config.GetSecurityLimits()
fmt.Printf("Security Limits: %+v\n", limits)
// Output:
// {
// "max_nesting_depth": 50,
// "max_security_validation_size": 104857600,
// "max_object_keys": 10000,
// "max_array_elements": 10000,
// "max_json_size": 10485760,
// "max_path_depth": 100
// }
All JSON inputs are automatically validated when EnableValidation is true:
processor := json.New()
defer processor.Close()
// This will be validated automatically
result, err := processor.Get(jsonString, "path.to.data")
if err != nil {
// Handle validation errors
var jsonsErr *json.JsonsError
if errors.As(err, &jsonsErr) {
if errors.Is(jsonsErr.Err, json.ErrSizeLimit) {
log.Printf("JSON too large: %v", err)
} else if errors.Is(jsonsErr.Err, json.ErrInvalidJSON) {
log.Printf("Invalid JSON: %v", err)
}
}
}
Use schema validation for strict data validation:
// Define a schema
schema := &json.Schema{
Type: "object",
Properties: map[string]*json.Schema{
"username": {
Type: "string",
MinLength: 3,
MaxLength: 50,
Pattern: "^[a-zA-Z0-9_]+$",
},
"email": {
Type: "string",
Format: "email",
},
"age": {
Type: "number",
Minimum: 0,
Maximum: 150,
},
},
Required: []string{"username", "email"},
}
// Validate JSON against schema
validationErrors, err := processor.ValidateSchema(jsonString, schema)
if err != nil {
log.Printf("Validation failed: %v", err)
}
if len(validationErrors) > 0 {
for _, validationErr := range validationErrors {
log.Printf("Validation error at %s: %s",
validationErr.Path, validationErr.Message)
}
}
The library performs security-specific validation:
// Checks performed:
// 1. JSON size limits
// 2. Nesting depth limits
// 3. Object key count limits
// 4. Array element limits
// 5. Malicious pattern detection
// 6. Null byte detection
// 7. Excessive string length detection
// Example: Detecting deeply nested JSON
deeplyNested := `{"a":{"b":{"c":{"d":{"e":{"f":{"g":{"h":{"i":{"j":"value"}}}}}}}}}}`
_, err := processor.Get(deeplyNested, "a.b.c.d.e.f.g.h.i.j")
// Will fail if nesting exceeds MaxNestingDepthSecurity
All JSON paths are validated for security:
// Safe paths
processor.Get(jsonString, "user.profile.name") // ✓ Safe
processor.Get(jsonString, "items[0].id") // ✓ Safe
processor.Get(jsonString, "data.users{id,name}") // ✓ Safe
// Unsafe paths (will be rejected)
processor.Get(jsonString, "path\x00injection") // ✗ Null byte
processor.Get(jsonString, strings.Repeat("a.", 5001)) // ✗ Too long
Sensitive information in paths is automatically sanitized:
// Paths containing sensitive keywords are redacted in error messages
processor.Get(jsonString, "user.password")
// Error message will show: [REDACTED_PATH] instead of actual path
// Sensitive patterns detected:
// - password
// - token
// - key
// - secret
// - auth
Prevent excessive path traversal:
config := &json.Config{
MaxPathDepth: 20, // Maximum 20 levels deep
}
processor := json.New(config)
defer processor.Close()
// This will fail if path depth exceeds limit
result, err := processor.Get(jsonString, "a.b.c.d.e.f.g.h.i.j.k.l.m.n.o.p.q.r.s.t.u")
File operations include comprehensive path validation:
processor := json.New()
defer processor.Close()
// Safe file operations
err := processor.ReadFile("./data/config.json") // ✓ Safe
err = processor.WriteFile("./output/result.json", data) // ✓ Safe
// Unsafe operations (will be rejected)
err = processor.ReadFile("../../../etc/passwd") // ✗ Path traversal
err = processor.ReadFile("/etc/shadow") // ✗ System directory
err = processor.ReadFile("file\x00.json") // ✗ Null byteAccess to sensitive system directories is blocked:
// Blocked directories (Unix/Linux):
// - /etc/
// - /proc/
// - /sys/
// - /dev/
// Example:
err := processor.ReadFile("/etc/passwd")
// Returns error: "access to system directories not allowed"
File operations respect size limits:
config := &json.Config{
MaxJSONSize: 10 * 1024 * 1024, // 10MB limit
}
processor := json.New(config)
defer processor.Close()
// Files larger than MaxJSONSize will be rejected
err := processor.ReadFile("large_file.json")
if err != nil {
if errors.Is(err, json.ErrSizeLimit) {
log.Printf("File too large: %v", err)
}
}
File write operations include safety checks:
// Write with validation
data := map[string]interface{}{
"status": "success",
"data": result,
}
err := processor.WriteFile("output.json", data)
if err != nil {
log.Printf("Write failed: %v", err)
}
// The library will:
// 1. Validate the file path
// 2. Check for path traversal
// 3. Validate the data before writing
// 4. Use atomic writes when possible
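The atomic-write step is handled by the library itself; for application code that writes JSON files directly, the usual pattern is to write to a temporary file in the target directory and rename it into place. A sketch, independent of the library:
// atomicWrite writes data to a temporary file and renames it over the target,
// so readers never observe a partially written file.
func atomicWrite(path string, data []byte) error {
    tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-*")
    if err != nil {
        return err
    }
    defer os.Remove(tmp.Name()) // best-effort cleanup if the rename never happens
    if _, err := tmp.Write(data); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Close(); err != nil {
        return err
    }
    return os.Rename(tmp.Name(), path) // atomic on the same filesystem
}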
The cache automatically excludes sensitive data:
// Data containing sensitive keywords is not cached
sensitiveData := `{
"username": "john",
"password": "secret123",
"api_token": "abc123xyz"
}`
// This will not be cached due to sensitive keywords
result, err := processor.Get(sensitiveData, "username")
// Sensitive patterns detected:
// - password
// - token
// - secret
// - key
// - auth
// - credential
Cache keys use secure hashing:
// Cache keys are generated using SHA-256
// This prevents:
// 1. Cache key collision attacks
// 2. Cache timing attacks
// 3. Information disclosure through cache keys
// Example internal implementation:
hash := sha256.Sum256([]byte(input))
cacheKey := fmt.Sprintf("%x", hash[:16])
Cache keys are validated to prevent injection:
// Invalid cache keys are rejected
// Validation includes:
// - Length checks
// - Character validation
// - Pattern matching
// - Null byte detection
Prevent memory exhaustion through cache limits:
config := &json.Config{
EnableCache: true,
MaxCacheSize: 1000, // Maximum 1000 entries
CacheTTL: 5 * time.Minute, // 5 minute expiration
}
processor := json.New(config)
defer processor.Close()
// Cache will automatically evict old entries when full
// Uses LRU (Least Recently Used) eviction policy
Error messages are sanitized to prevent information disclosure:
processor := json.New()
defer processor.Close()
_, err := processor.Get(jsonString, "user.password")
if err != nil {
// Error messages sanitize sensitive paths
fmt.Printf("Error: %v\n", err)
// Output: "JSON get failed at path '[REDACTED_PATH]': ..."
}
Use error type checking for proper handling:
result, err := processor.Get(jsonString, "path")
if err != nil {
var jsonsErr *json.JsonsError
if errors.As(err, &jsonsErr) {
switch {
case errors.Is(jsonsErr.Err, json.ErrSizeLimit):
log.Printf("Size limit exceeded: %v", err)
case errors.Is(jsonsErr.Err, json.ErrSecurityViolation):
log.Printf("Security violation detected: %v", err)
case errors.Is(jsonsErr.Err, json.ErrInvalidPath):
log.Printf("Invalid path: %v", err)
default:
log.Printf("Operation failed: %v", err)
}
}
}
Errors include context without exposing sensitive data:
var jsonsErr *json.JsonsError
if errors.As(err, &jsonsErr) {
fmt.Printf("Operation: %s\n", jsonsErr.Op)
fmt.Printf("Message: %s\n", jsonsErr.Message)
fmt.Printf("Error Code: %s\n", jsonsErr.ErrorCode)
// Context is sanitized
if jsonsErr.Context != nil {
fmt.Printf("Context: %+v\n", jsonsErr.Context)
}
// Suggestions for fixing the error
for _, suggestion := range jsonsErr.Suggestions {
fmt.Printf("Suggestion: %s\n", suggestion)
}
}
Choose the right configuration for your environment:
// Development environment
devConfig := json.DefaultConfig()
devConfig.StrictMode = false
devConfig.EnableValidation = true
// Production environment
prodConfig := json.HighSecurityConfig()
prodConfig.EnableValidation = true
prodConfig.StrictMode = true
prodConfig.EnableRateLimit = true
// High-security environment (financial, healthcare, etc.)
secureConfig := json.HighSecurityConfig()
secureConfig.MaxJSONSize = 1 * 1024 * 1024 // 1MB only
secureConfig.MaxPathDepth = 10 // Very shallow
secureConfig.MaxNestingDepthSecurity = 10 // Very restrictive
Always validate JSON from external sources:
// Bad: No validation
result, _ := processor.Get(untrustedJSON, "data")
// Good: With validation
processor := json.New(&json.Config{
EnableValidation: true,
StrictMode: true,
})
defer processor.Close()
result, err := processor.Get(untrustedJSON, "data")
if err != nil {
log.Printf("Validation failed: %v", err)
return
}
Implement schema validation for important data:
// Define strict schema for user input
userSchema := &json.Schema{
Type: "object",
Properties: map[string]*json.Schema{
"username": {
Type: "string",
MinLength: 3,
MaxLength: 50,
Pattern: "^[a-zA-Z0-9_]+$",
},
"email": {
Type: "string",
Format: "email",
},
},
Required: []string{"username", "email"},
AdditionalProperties: false, // Reject unknown properties
}
// Validate before processing
validationErrors, err := processor.ValidateSchema(userInput, userSchema)
if err != nil || len(validationErrors) > 0 {
// Handle validation errors
return
}
Protect against DoS attacks with rate limiting:
config := &json.Config{
EnableRateLimit: true,
RateLimitPerSec: 1000, // Max 1000 operations per second
}
processor := json.New(config)
defer processor.Close()
// Operations will be rate-limited automatically
for i := 0; i < 10000; i++ {
result, err := processor.Get(jsonString, "data")
if errors.Is(err, json.ErrRateLimitExceeded) {
// Handle rate limit
time.Sleep(time.Millisecond * 100)
continue
}
}
Set timeouts to prevent resource exhaustion:
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
// Use context for operations (if supported)
// This prevents long-running operations from blocking
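Get is not shown in this guide as accepting a context, so one way to bound the wait is to run the call in a goroutine and select on the context. This is an application-level sketch: it limits how long the caller blocks, but it does not cancel the underlying operation:
type getResult struct {
    value interface{}
    err   error
}

// Buffered channel so the goroutine can always deliver its result and exit,
// even if the caller has already timed out.
resultCh := make(chan getResult, 1)
go func() {
    v, err := processor.Get(jsonString, "path.to.data")
    resultCh <- getResult{v, err}
}()

select {
case r := <-resultCh:
    if r.err != nil {
        log.Printf("operation failed: %v", r.err)
    }
    // use r.value ...
case <-ctx.Done():
    log.Printf("operation timed out: %v", ctx.Err())
}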
Track resource usage in production:
// Get processor statistics
stats := processor.GetStats()
fmt.Printf("Operations: %d\n", stats.OperationCount)
fmt.Printf("Cache Hit Ratio: %.2f%%\n", stats.HitRatio*100)
fmt.Printf("Error Count: %d\n", stats.ErrorCount)
// Monitor memory usage
var m runtime.MemStats
runtime.ReadMemStats(&m)
fmt.Printf("Alloc: %d MB\n", m.Alloc/1024/1024)
fmt.Printf("TotalAlloc: %d MB\n", m.TotalAlloc/1024/1024)Ensure logs don't contain sensitive information:
Ensure logs don't contain sensitive information:
// Bad: Logging raw data
log.Printf("Processing: %s", jsonString)
// Good: Logging sanitized information
log.Printf("Processing JSON of size: %d bytes", len(jsonString))
// Good: Using error messages (already sanitized)
if err != nil {
log.Printf("Operation failed: %v", err)
}
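When structured data does need to be logged, redact sensitive fields first. A sketch based on the sensitive patterns listed earlier in this guide; the helper is application-level, not part of the library:
var sensitiveKeys = []string{"password", "token", "key", "secret", "auth", "credential"}

// redactForLog returns a copy of m with the values of sensitive-looking keys masked.
func redactForLog(m map[string]interface{}) map[string]interface{} {
    out := make(map[string]interface{}, len(m))
    for k, v := range m {
        out[k] = v
        for _, s := range sensitiveKeys {
            if strings.Contains(strings.ToLower(k), s) {
                out[k] = "[REDACTED]"
                break
            }
        }
    }
    return out
}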
Always close processors to free resources:
// Use defer to ensure cleanup
processor := json.New()
defer processor.Close()
// Or use explicit cleanup
processor := json.New()
// ... use processor ...
processor.Close()
// Check if processor is closed
_, err := processor.Get(jsonString, "data")
if errors.Is(err, json.ErrProcessorClosed) {
log.Printf("Processor is closed")
}
Use proper synchronization for concurrent operations:
// The processor is thread-safe by default
processor := json.New()
defer processor.Close()
// Safe concurrent access
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
result, err := processor.Get(jsonString, fmt.Sprintf("items[%d]", id))
if err != nil {
log.Printf("Worker %d error: %v", id, err)
}
// Process result...
}(i)
}
wg.Wait()
Use multiple layers of security:
// Layer 1: Network/Transport security (TLS, authentication)
// Layer 2: Input validation
config := json.HighSecurityConfig()
processor := json.New(config)
defer processor.Close()
// Layer 3: Schema validation
schema := defineStrictSchema()
validationErrors, err := processor.ValidateSchema(input, schema)
if err != nil || len(validationErrors) > 0 {
return fmt.Errorf("validation failed")
}
// Layer 4: Business logic validation
if !isValidBusinessData(input) {
return fmt.Errorf("business validation failed")
}
// Layer 5: Rate limiting and monitoring
if rateLimitExceeded() {
return fmt.Errorf("rate limit exceeded")
}
// Process the data
result, err := processor.Get(input, "data")
Use this checklist to ensure your implementation is secure:
- Use HighSecurityConfig() for production environments
- Set appropriate MaxJSONSize limits
- Configure MaxPathDepth and MaxNestingDepthSecurity
- Enable EnableValidation and StrictMode
- Configure rate limiting with EnableRateLimit
- Set reasonable MaxConcurrency limits
- Configure cache limits with MaxCacheSize and CacheTTL
- Validate all external JSON input
- Use schema validation for critical data
- Check JSON size before processing
- Validate path strings before use
- Sanitize user-provided paths
- Implement input sanitization at application level
- Validate all file paths
- Restrict file operations to specific directories
- Check file sizes before reading
- Use absolute paths when possible
- Implement file access logging
- Set appropriate file permissions
- Check all error returns
- Use typed error checking with errors.As and errors.Is
- Log errors appropriately (without sensitive data)
- Implement error recovery mechanisms
- Don't expose internal errors to users
- Provide helpful error messages
- Monitor resource usage (CPU, memory)
- Track operation counts and error rates
- Monitor cache hit ratios
- Set up alerts for anomalies
- Log security-relevant events
- Implement health checks
- Test with malicious inputs
- Test with oversized payloads
- Test with deeply nested structures
- Test concurrent access patterns
- Test error handling paths
- Perform security audits regularly
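A minimal test sketch covering the testing items above. It assumes the limits of HighSecurityConfig and only asserts that the library rejects the inputs; adjust sizes and depths to your own configuration:
func TestRejectsHostileInputs(t *testing.T) {
    processor := json.New(json.HighSecurityConfig())
    defer processor.Close()

    // Oversized payload: larger than the 5MB MaxJSONSize of HighSecurityConfig
    oversized := `{"data":"` + strings.Repeat("A", 6*1024*1024) + `"}`
    if _, err := processor.Get(oversized, "data"); err == nil {
        t.Fatal("expected oversized payload to be rejected")
    }

    // Deeply nested payload: far deeper than MaxNestingDepthSecurity (20)
    nested := strings.Repeat(`{"a":`, 100) + `1` + strings.Repeat(`}`, 100)
    if _, err := processor.Get(nested, "a"); err == nil {
        t.Fatal("expected deeply nested payload to be rejected")
    }
}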
func ProcessUserJSON(userInput string) error {
// Use high security configuration
config := json.HighSecurityConfig()
config.MaxJSONSize = 1 * 1024 * 1024 // 1MB limit for user input
processor := json.New(config)
defer processor.Close()
// Define strict schema
schema := &json.Schema{
Type: "object",
Properties: map[string]*json.Schema{
"name": {
Type: "string",
MinLength: 1,
MaxLength: 100,
},
"email": {
Type: "string",
Format: "email",
},
"age": {
Type: "number",
Minimum: 0,
Maximum: 150,
},
},
Required: []string{"name", "email"},
AdditionalProperties: false,
}
// Validate against schema
validationErrors, err := processor.ValidateSchema(userInput, schema)
if err != nil {
return fmt.Errorf("validation error: %w", err)
}
if len(validationErrors) > 0 {
return fmt.Errorf("invalid input: %v", validationErrors)
}
// Process the validated data
name, err := processor.Get(userInput, "name")
if err != nil {
return fmt.Errorf("failed to get name: %w", err)
}
// Continue processing...
log.Printf("Processing user: %v", name)
return nil
}
func ProcessAPIRequests() {
// Configure for high volume with rate limiting
config := &json.Config{
MaxJSONSize: 5 * 1024 * 1024,
MaxPathDepth: 50,
MaxNestingDepthSecurity: 30,
EnableValidation: true,
EnableCache: true,
MaxCacheSize: 10000,
CacheTTL: 10 * time.Minute,
EnableRateLimit: true,
RateLimitPerSec: 5000,
MaxConcurrency: 100,
}
processor := json.New(config)
defer processor.Close()
// Process requests with monitoring
ticker := time.NewTicker(1 * time.Minute)
defer ticker.Stop()
go func() {
for range ticker.C {
stats := processor.GetStats()
log.Printf("Stats - Ops: %d, Cache Hit: %.2f%%, Errors: %d",
stats.OperationCount, stats.HitRatio*100, stats.ErrorCount)
}
}()
// Handle requests...
}
func ProcessSensitiveData(jsonData string) error {
// Disable caching for sensitive data
config := json.HighSecurityConfig()
config.EnableCache = false // Don't cache sensitive data
processor := json.New(config)
defer processor.Close()
// Process without caching
opts := &json.ProcessorOptions{
CacheResults: false,
StrictMode: true,
}
result, err := processor.Get(jsonData, "sensitive.data", opts)
if err != nil {
// Don't log the actual data
log.Printf("Failed to process sensitive data: operation failed")
return err
}
// Process result securely
// ... (ensure result is not logged or cached)
return nil
}
func LoadSecureConfig(configPath string) (*Config, error) {
// Validate file path
if !isValidConfigPath(configPath) {
return nil, fmt.Errorf("invalid config path")
}
processor := json.New(json.HighSecurityConfig())
defer processor.Close()
// Read and validate config file
err := processor.ReadFile(configPath)
if err != nil {
return nil, fmt.Errorf("failed to read config: %w", err)
}
// Define config schema
configSchema := &json.Schema{
Type: "object",
Properties: map[string]*json.Schema{
"database": {
Type: "object",
Properties: map[string]*json.Schema{
"host": {Type: "string"},
"port": {Type: "number", Minimum: 1, Maximum: 65535},
},
Required: []string{"host", "port"},
},
},
Required: []string{"database"},
}
// Validate config structure against the schema
rawConfig, err := os.ReadFile(configPath) // load raw contents for schema validation
if err != nil {
return nil, fmt.Errorf("failed to read config: %w", err)
}
jsonStr := string(rawConfig)
validationErrors, err := processor.ValidateSchema(jsonStr, configSchema)
if err != nil || len(validationErrors) > 0 {
return nil, fmt.Errorf("invalid config structure")
}
// Parse config...
return parseConfig(jsonStr)
}
func isValidConfigPath(path string) bool {
// Only allow config files in specific directory
allowedDir := "./config/"
absPath, err := filepath.Abs(path)
if err != nil {
return false
}
allowedAbsDir, err := filepath.Abs(allowedDir)
if err != nil {
return false
}
// Require the resolved path to be inside the allowed directory, not merely share a prefix
return strings.HasPrefix(absPath, allowedAbsDir+string(filepath.Separator))
}
- Risk: Path injection attacks
- Mitigation: Automatic path validation and sanitization
- Best Practice: Validate user-provided paths before use
- Risk: Resource exhaustion through large batches
- Mitigation: MaxBatchSize configuration limit
- Best Practice: Set appropriate batch size limits
- Risk: Information disclosure through cache
- Mitigation: Automatic sensitive data detection
- Best Practice: Disable caching for sensitive data
- Risk: Path traversal and unauthorized file access
- Mitigation: Path validation and system directory blocking
- Best Practice: Restrict file operations to specific directories
- Risk: Race conditions and resource exhaustion
- Mitigation: Thread-safe implementation and concurrency limits
- Best Practice: Set appropriate MaxConcurrency limits
- Risk: Accepting invalid or malicious data
- Mitigation: Comprehensive schema validation
- Best Practice: Define strict schemas for all external input
If you discover a security vulnerability in this library, please report it responsibly:
- Do NOT open a public GitHub issue
- Do NOT disclose the vulnerability publicly until it has been addressed
- DO email security details to: [security contact email]
- DO provide detailed information about the vulnerability
- DO include steps to reproduce if possible
- Description of the vulnerability
- Affected versions
- Steps to reproduce
- Potential impact
- Suggested fix (if available)
- Your contact information
- Initial Response: Within 48 hours
- Vulnerability Assessment: Within 1 week
- Fix Development: Depends on severity
- Public Disclosure: After fix is released
- Security updates are released as patch versions
- Critical vulnerabilities are addressed immediately
- Security advisories are published on GitHub
- Users are notified through release notes
- Main README - Library overview and features
- Compatibility Guide - Compatibility information
- Examples - Code examples for all features
- Go Security Checker - Static analysis
- Go Vulnerability Database - Known vulnerabilities
- OWASP Go Security - Security guidelines
- OWASP Top 10 - Common security risks
- CWE Top 25 - Most dangerous software weaknesses
- Go Security Best Practices - Official Go security guide
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
If you find this project useful, please consider giving it a star! ⭐
Made with ❤️ by the CyberGoDev team