fix(v9/core): Ensure logs past MAX_LOG_BUFFER_SIZE are not swallowed (#18213)
Conversation
```diff
       // Add one more to trigger flush
       _INTERNAL_captureLog({ level: 'info', message: 'trigger flush' }, client, undefined);

-      expect(_INTERNAL_getLogBuffer(client)).toEqual([]);
+      // After flushing the 100 logs, the new log starts a fresh buffer with 1 item
+      const buffer = _INTERNAL_getLogBuffer(client);
+      expect(buffer).toHaveLength(1);
+      expect(buffer?.[0]?.body).toBe('trigger flush');
     });

     it('does not flush logs buffer when it is empty', () => {
```
Bug: New logs are silently discarded when the log buffer reaches its maximum size and triggers a flush.
Severity: CRITICAL | Confidence: 1.00
🔍 Detailed Analysis
When the log buffer reaches `MAX_LOG_BUFFER_SIZE` and a new log is added, the system incorrectly flushes the old buffer before the new log is fully processed. Specifically, `_INTERNAL_flushLogsBuffer` is called with the `logBuffer` (100 items) before the `serializedLog` is properly accounted for. Subsequently, `_getBufferMap().set(client, [])` clears the buffer, overwriting the newly updated buffer that contained the 101st log. This results in the most recent log being silently discarded.
💡 Suggested Fix
Modify the logic to either check the new buffer size before flushing or ensure the new log is preserved after the buffer is cleared by the flush operation.
Location: packages/core/test/lib/logs/exports.test.ts#L245-L254
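For context, here is a minimal TypeScript sketch of the ordering problem and of the kind of fix suggested above. It borrows names from this thread (`MAX_LOG_BUFFER_SIZE`, a per-client buffer map, a flush helper), but the types and function bodies are illustrative assumptions, not the actual `@sentry/core` source.

```ts
type SerializedLog = { body: string };

// Illustrative constants/state; the real SDK keeps these internally.
const MAX_LOG_BUFFER_SIZE = 100;
const bufferMap = new Map<object, SerializedLog[]>();

// Stand-in for _INTERNAL_flushLogsBuffer: send the logs, then reset the buffer.
function flushLogs(client: object, logs: SerializedLog[]): void {
  // ...send `logs` as an envelope here...
  bufferMap.set(client, []); // this reset is what clobbers the 101st log below
}

// Buggy ordering (as described above): the new log is written into the map,
// but the flush's reset then overwrites that entry, silently dropping it.
export function captureLogBuggy(client: object, log: SerializedLog): void {
  const logBuffer = bufferMap.get(client) ?? [];
  bufferMap.set(client, [...logBuffer, log]);
  if (logBuffer.length >= MAX_LOG_BUFFER_SIZE) {
    flushLogs(client, logBuffer); // resets the map, losing `log`
  }
}

// Fixed ordering: flush the 100 buffered logs first, then start a fresh
// buffer containing only the log that triggered the flush.
export function captureLogFixed(client: object, log: SerializedLog): void {
  const logBuffer = bufferMap.get(client) ?? [];
  if (logBuffer.length >= MAX_LOG_BUFFER_SIZE) {
    flushLogs(client, logBuffer);
    bufferMap.set(client, [log]);
  } else {
    bufferMap.set(client, [...logBuffer, log]);
  }
}
```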
Indeed, that's what this PR fixes with ba24997.
JPeer264 left a comment:
LGTM. Nice catch
Force-pushed from ba24997 to 45d7680
Looks like we swallowed the log that triggers a flush when `MAX_LOG_BUFFER_SIZE` is surpassed.
Test demonstrating the issue: e3a8e2f
Fix: ba24997
v10 equivalent: #18207
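To make the fix's contract concrete, here is a small self-contained TypeScript sketch (using `node:assert`, not the SDK's actual test utilities or internals) of the behavior the PR guarantees: the log that arrives at capacity triggers a flush of the buffered logs and then survives as the first entry of the fresh buffer, mirroring the `'trigger flush'` assertion in the test above.

```ts
import assert from 'node:assert';

// Hypothetical stand-in for the per-client log buffer; the real logic lives
// inside @sentry/core and differs in detail.
class BoundedLogBuffer<T> {
  private items: T[] = [];

  constructor(
    private readonly maxSize: number,
    private readonly onFlush: (items: T[]) => void,
  ) {}

  add(item: T): void {
    if (this.items.length >= this.maxSize) {
      // Flush the full buffer first, then seed a fresh buffer with the item
      // that triggered the flush so it is never swallowed.
      this.onFlush(this.items);
      this.items = [item];
    } else {
      this.items.push(item);
    }
  }

  snapshot(): readonly T[] {
    return this.items;
  }
}

const flushed: string[][] = [];
const buffer = new BoundedLogBuffer<string>(100, logs => flushed.push(logs));

for (let i = 0; i < 100; i++) {
  buffer.add(`log ${i}`);
}
buffer.add('trigger flush'); // 101st log: flushes the first 100

assert.strictEqual(flushed.length, 1);
assert.strictEqual(flushed[0]?.length, 100);
// The triggering log starts the new buffer instead of disappearing.
assert.deepStrictEqual(buffer.snapshot(), ['trigger flush']);
```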