Fix memory corruption and leaks on clisp #196

Open
vibs29 wants to merge 9 commits into cl-plus-ssl:master from vibs29:clisp

Conversation

@vibs29 (Contributor) commented Nov 26, 2025


Files
ffi-buffer.lisp
ffi-buffer-clisp.lisp
streams.lisp
random.lisp
x509.lisp


Memory corruption bug on clisp

s/b-replace and b/s-replace had bugs that could cause them to miscalculate
the buffer's end as being beyond its boundary, or to miscalculate the
number of bytes to copy if the buffer's end was specified but the sequence
was smaller.
All callers of s/b-replace happened to pass arguments that didn't trigger
its bugs.
But one caller of b/s-replace (namely stream-write-sequence) could
legitimately call it in a way that did trigger one of its bugs.
E.g. if the buffer was smaller than the sequence, it would corrupt memory
by writing beyond the buffer's bounds.

I have fixed all the bugs, which were in s/b-replace and b/s-replace.


Performance

For clisp:

b/s-replace also copies less.
The old version always called subseq, which copies.
The new version copies only if the source seq is not a vector.

s/b-replace is not expected to allocate memory proportional to its
input arrays.
But due to its call to memory-as, it did.
Now it doesn't. It allocates O(1) memory, regardless of input array sizes.

b/s-replace also allocates O(1) memory now.

For all lisps:

stream-read-sequence
I have made this clearer.

stream-write-sequence
I have rewritten this to be clearer.
And faster. The old version could flush a non-full stream.
As a pathological case,
writing 1 byte, then 2048 times writing 2049 bytes, would cause 4096
flushes.
Now that will cause only 2049 flushes.
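The flush arithmetic above can be checked directly, assuming the default 2048-byte buffer: the total written is 1 + 2048·2049 bytes, which fills the buffer exactly 2049 times with one byte left over.

```lisp
;; Sanity check of the flush counts above, assuming the default
;; 2048-byte buffer (*default-buffer-size*).
(floor (+ 1 (* 2048 2049)) 2048)
;; => 2049 full-buffer flushes, remainder 1 byte still buffered
```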


Memory leak on clisp

There was also a memory leak because foreign buffers were allocated but
never freed. I have fixed this by extending the buffer API and having
all callers of make-buffer also release the buffer when finished with it.
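The make-buffer/release-buffer pairing can be sketched as an unwind-protect wrapper. The with-buffer macro below is purely illustrative (the PR's callers pair the calls directly, and its tests use an equivalent call-with-buffer helper); make-buffer and release-buffer are the cl+ssl internals named above.

```lisp
;; Hypothetical convenience macro, not necessarily part of the patch:
;; every make-buffer is matched by a release-buffer, even on non-local
;; exit, so foreign buffers are no longer leaked.
(defmacro with-buffer ((var size) &body body)
  `(let ((,var (make-buffer ,size)))
     (unwind-protect
         (progn ,@body)
       (release-buffer ,var))))
```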

@avodonosov (Member)

re #120

Member:

What motivates the changes in this file, and what is the nature of the changes? Note that decode-certificate and decode-certificate-from-file are public functions.

Contributor Author:

Oh, you're right. Sorry, please do not merge. I mistakenly assumed decode-certificate was private. But it's not and therefore I mustn't change its signature.

The motivation (irrelevant now) was that cffi's with-pointer-to-vector-data (wptvd) is a bad API and cl+ssl's adaptation of it is better; I wanted to move x509.lisp to the new one as an example of how the new API was to be used, including releasing memory, and so that all of cl+ssl used only the new API rather than parts using cffi's and parts using cl+ssl's. Changing x509 was not necessary for any other reason. It had no bugs or leaks. (Other than that all current uses of cffi's experimental and bad wptvd API have a potential future leak if cffi ever adds an implementation for a Lisp that requires explicitly freeing the buffer.)

Since decode-certificate is public, x509.lisp must not be changed. Very sorry about this.

I have now checked all other functions whose signatures I changed or bodies I removed everywhere to make sure that none of them were public. They aren't. Other files besides x509.lisp are fit to merge.

Ideally, a future version of decode-certificate would take as input a vec parameter specified to be of ordinary Lisp type vector. Not specified to be a cffi shareable-byte-vector or a cl+ssl ffi-buffer. Those are implementation details of cl+ssl. Internally, it could create whatever it wanted from that vec. That would reduce the burden on users by not requiring them to know of this foreign data business. And it would let decode-certificate be written more robustly, i.e. to free the foreign data it had itself created. And the data is small (the size of a certificate file) so it's utterly unimportant to try to avoid making one copy of it from Lisp space to C space. If there's a deprecation process for cl+ssl functions, then that can be used to provide a good alternative function and deprecate the old one. If there isn't, then there's nothing one can do. In any case, x509.lisp is not central to this pull request, so should probably now be ignored and not distract from what is central.

Would you like to have the rest of the files? If so, what's the best administrative method? I can create a fresh pull request with a fresh branch that omits the x509.lisp change. (I suppose I could also look into making a second commit on this branch that restores the original x509.lisp and then git squashing or something, but I'm not a squashing expert and will only look into doing that if you greatly prefer that to my opening a whole new pull request.)

@avodonosov
Copy link
Member

How to avoid unnecessary changes in unmodified files is a secondary question. We will solve that after we agree on the final version.

But first I need to understand all the changes. I haven't digested your branch yet.

The comments in src/ffi-buffer-clisp.lisp say you got a significant speedup and attribute that to "copying via a single foreign call to MEMORY-AS instead of one foreign call per element via %MEM-REF". Could you point to where this mem-ref per element happens in the old code?

What would be good to have are self-contained test cases that demonstrate the bug in the old code, pass with the new code, and also cover all branches of the copying code changed and introduced. Do you see a sufficiently easy and practical way to implement such tests?

Are you open to having a call to help me understand the pull request?

@vibs29 (Contributor, Author) commented Nov 26, 2025

A call will be great. Sent you email.

This is my test file that reliably demonstrates cl+ssl corrupting memory on clisp.
The bug is fixed by my patch.
Edit the file's *url* to some https URL before running it.

; load drakma and base libraries only if not previously loaded
(eval-when (:load-toplevel :compile-toplevel :execute)
  (unless (find-package "DRAKMA")
    (asdf:operate 'asdf:load-op "drakma")))

(use-package "DRAKMA")

; load latest edits of cl+ssl on every load of the file
(asdf:operate 'asdf:load-op "cl+ssl")

(defparameter *url* "https://www.example.com/")

(defun test (nbytes)
  (let ((content
          (make-array nbytes :element-type 'character :initial-element #\.)))
    (http-request *url*
                  :method :post
                  :content #'(lambda (stream) (princ content stream))
                  ;:content content
                  )))

; simpler test. causes exit.
(defun test2 ()
  (let ((cl+ssl:*default-buffer-size* 32))
    (drakma:http-request *url*)))

; these fail but shouldn't
; (test2)
; (test 2049)        ; when content is a lambda
; (test (+ 2048 128)); when content is a string

As for testing the new code, this is a test for the random function,

(defun test-random ()
  (cl+ssl:random-bytes 8))

And this is a test to verify that the various code paths through the new clisp code work correctly.

;;;; tests cl+ssl's ffi-buffer-clisp.lisp's s/b-replace and b/s-replace.
;;;; a successful run is when (test-ffi-buffer-clisp) raises no errors.

(in-package "CL+SSL")
(export '(TEST-FFI-BUFFER-CLISP))

(defun create-test-buffer (length &optional data)
  (let ((result (make-buffer length)))
    (dotimes (i (buffer-length result) result)
      (setf (buffer-elt result i) (if data (aref data i) 0)))))

(defun buffer-equal (expected-vec buf)
  (dotimes (i (length expected-vec) t)
    (unless (equal (aref expected-vec i) (buffer-elt buf i))
      (return nil))))

(defun test-b/s-replace ()
  (mapc #'(lambda (vec expected-buf)
            (mapc #'(lambda (seq)
                      (mapc #'(lambda (*mem-max*)
                                (let ((buf (create-test-buffer 4)))
                                  (unwind-protect
                                    (let ((end (min (buffer-length buf)
                                                    (length seq))))
                                      (b/s-replace buf seq :start1 0 :end1 end
                                                           :start2 0 :end2 end)
                                      (assert (buffer-equal expected-buf buf)))
                                    (release-buffer buf))))
                            (list *mem-max* 2)))
                  (list vec (map 'list #'identity vec))))
        (list #(0 1 2)   #(0 1 2 3) #(0 1 2 3 4))
        (list #(0 1 2 0) #(0 1 2 3) #(0 1 2 3)))
  (values))

(defun test-s/b-replace ()
  (mapc #'(lambda (vec-len buf-data expected-vec)
            (mapc #'(lambda (*mem-max*)
                      (let ((buf (create-test-buffer (length buf-data)
                                                     buf-data))
                            (vec (make-array vec-len
                                             :element-type '(unsigned-byte 8)
                                             :initial-element 0)))
                        (unwind-protect
                          (let ((end (min (buffer-length buf) (length vec))))
                            (s/b-replace vec buf :start1 0 :end1 end
                                                 :start2 0 :end2 end)
                            (assert (equalp expected-vec vec)))
                          (release-buffer buf))))
                  (list *mem-max* 2)))
        (list 2          4          6)
        (list #(0 1 2 3) #(0 1 2 3) #(0 1 2 3))
        (list #(0 1)     #(0 1 2 3) #(0 1 2 3 0 0)))
  (values))

(defun test-ffi-buffer-clisp ()
  (test-b/s-replace)
  (test-s/b-replace))

Sorry these tests aren't more orderly and aren't in line with cl+ssl's testing standard. I did briefly look at whether I could make them so, then decided it was more effort than I was willing to make. So here they are as is, since you asked, for whatever they are worth.

Performance:

I didn't comment that I had achieved a huge speedup. The commenter above me said they had achieved a huge speedup. I was merely hypothesizing that the reason for that speedup couldn't have been lack of copying as he'd claimed because his code also copied. So the reason must have been the style of copying (memory-as instead of %mem-ref). The code before his (i.e. that he improved on) is in cffi's cffi-clisp.lisp (from let's say last week). My present performance improvements over the previous commenter's are not as great as his performance improvements over what came before.

(The reason I say to not look at the very latest cffi code is that I recently submitted a patch so that that too uses memory-as instead of %mem-ref for a large speedup, for the benefit of users who aren't cl+ssl.)

The reason I started working on this was that cl+ssl would crash the process on clisp because ffi-buffer-clisp.lisp had miscalculations about buffer boundaries and was writing into C memory well past array boundaries. I improved performance and fixed the memory leak as side effects of working on that primary bug.

@avodonosov (Member)

I wonder if the problems addressed in this pull request also caused #163

@vibs29 (Contributor, Author) commented Nov 26, 2025

I'm unfamiliar with the bio code. I've taken a quick look at the write-puts test. I don't think what I've done will directly fix that. However, remembering that stream-write-sequence would on clisp illegally overwrite arbitrary parts of the C memory, if any of the tests called stream-write-sequence then all subsequent behaviour of that process is unpredictable. It'll certainly be worth running the test suite of #163 to see if its problem has disappeared with this patch. And it's not worth reasoning about any misbehaviour on clisp prior to applying this patch. cl+ssl on clisp is dangerous and should never be used without this patch.

stream-listen
stream-read-byte
are written to potentially have horrible performance.

The reason is that cffi's experimental Shareable Byte Vector interface
is bad. It indicates that although with-pointer-to-vector-data (wptvd)
with an empty body may be constant time (due to the vector's being shared
between Lisp and C), it may also be O(len(vector)) on implementations
that require copying in/out. Given that cffi is a portability layer,
the name make-shareable-byte-vector is misleading if the result isn't
necessarily shared between C and Lisp but may require copying. Anyway,
bad cffi names aside, callers must therefore treat wptvd as having
O(len(vector)) overhead since that is what its documentation indicates.

In that light, stream-listen and stream-read-byte, which are written to
potentially cause the copying of the entire underlying buffer when they
only want to operate on one byte, are badly written. They are written
with a mistaken assumption, that wptvd has O(1) overhead, but it is
really specified by cffi to have O(len(vector)) overhead.

One solution is for the stream to own another one byte input buffer for
these two functions to use. The input buffer is only used as a temporary
variable within a function. It doesn't retain meaningful state between
function calls. So this solution is easy to implement.

As it happens, with the clisp-specific ffi-buffer-clisp.lisp,
the clisp wptvd in cl+ssl is O(1) so there happens not to be a
problem presently. All current wptvd's in use by cl+ssl are O(1). But
stream-listen and stream-read-byte should still not be written to assume
that the general wptvd has O(1) overhead. A new wptvd may be added
to cffi or an old one rewritten to have the allowable O(len(vector))
overhead. Thus stream-listen and stream-read-byte should really be
corrected to reflect that wptvd does not guarantee that implementations
will have less than O(len(vector)) overhead. As written, they are just
lucky they happen to perform well because of the implementation details
of the current wptvd's in existence.

This commit implements the solution of having a separate small buffer
for stream-listen and stream-read-byte to use.

Performance will not improve today. But this design will ensure that
cl+ssl doesn't in the future suddenly mysteriously develop horrible
performance due to relying on implementation details of wptvd adding O(1)
overhead when it is really specified to add O(len(vector)) overhead.
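As a sketch of what this commit describes (the helper name below is an assumption, not necessarily the actual patch): stream-read-byte can go through a dedicated length-1 buffer, so even a copying wptvd implementation only ever copies one byte.

```lisp
;; Illustrative only. ssl-stream-input-buffer-small is the dedicated
;; one-byte buffer from this commit; ssl-read-into-buffer is a
;; hypothetical stand-in for the underlying SSL_read call. Any
;; copy-in/out cost of wptvd is bounded by the buffer length, which is 1.
(defun read-byte-via-small-buffer (stream)
  (let ((buf (ssl-stream-input-buffer-small stream)))
    (if (plusp (ssl-read-into-buffer stream buf 1))
        (buffer-elt buf 0)
        :eof)))
```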
@vibs29 (Contributor, Author) commented Nov 27, 2025

This is how I validate the last commit 45c439f "Stabilise performance for byte-sized operations". *url* needs editing first. Expect output of about one dot, lots of dashes, then a string form of the http response.

; insert the following traces into cl+ssl's streams.lisp to ensure it is called.
; stream-listen
;   (princ #\.) (finish-output)
; stream-read-byte
;   (princ #\-) (finish-output)

(defparameter *url* "https://...")

(defun test-byte ()
  "Exercises stream-listen and stream-read-byte"
  (multiple-value-bind (stream code headers uri stream2 must-close reason)
                       (drakma:http-request *url* :want-stream t)
    (unwind-protect
      (let ((str (chunga:chunked-stream-stream
                   (flexi-streams:flexi-stream-stream stream))))
        (print (list 'str-type (type-of str)))
        (do () ((listen str)))
        (princ
          (with-output-to-string (o)
            (handler-case
                (do ((b (read-byte str nil :eof)
                        (read-byte str nil :eof)))
                    ((eq b :eof))
                  (write-char (code-char b) o)) ;assume character
              (cl+ssl::ssl-error-ssl (e)
                (print e)
                ;SSL_get_error specifies not to call SSL_shutdown
                ;(but now your test leaks C buffer memory.)
                ;really, the error should be trapped by stream-read-byte
                (setf must-close nil))))))
      (when must-close
        (close stream))))
  (values))

      (setf (ssl-stream-peeked-byte stream) nil))
    (handler-case
-       (let ((buf (ssl-stream-input-buffer stream))
+       (let ((buf (ssl-stream-input-buffer-small stream))
Member:

Why is reading into a single-byte buffer better than reading one byte into the big buffer in stream-listen and stream-read-byte?

Contributor Author:

Hi Anton. I've described this over several paragraphs in the commit message. I can also explain over the call. This isn't as important as the prior commits, so it's fruitless discussing it before the prior commits have been understood. I can't explain better in writing than I have in the commit message.

Member:

Sorry I missed the commit message (I looked in the pull request conversation and in the code comments before asking though).

I see now that you address that question in the commit message.

@vibs29 (Contributor, Author) commented Nov 27, 2025

Eventual Design Ideal

I see that cffi and cl+ssl are both MIT licensed. Ideally, cffi would deprecate its own wptvd API and copy cl+ssl's improved one from ffi-buffer.lisp and ffi-buffer-clisp.lisp into its own codebase, but would rename the functions etc. to have a suffix of 2, to differentiate it from its old API. It could then tell its users,

  • We are portable across existing Lisps, but make no promises to ever support future Lisps because they may not enable the below
  • wptvd2 (therefore) can guarantee to add O(1) overhead
  • make-shareable-byte-vector2 (better called make-buffer) returns an object that is to be treated as opaque, i.e. an abstract piece of data. It may only be accessed through its operations in ffi-buffer, i.e. s/b-replace, b/s-replace, buffer-elt, buffer-length. Its elements are of type (unsigned-byte 8). It is not necessarily a (subtype of) Lisp array. Thus, it is not to be operated on with length or replace.
  • It may allocate foreign memory so must be released with release-buffer.
  • b/s-replace and s/b-replace require a working set of memory that is O(1) as their names suggest. They return their first arguments, just as Lisp's replace does.
  • Within the wptvd2 macro body, the C pointer and Lisp variable both refer to the same object in memory and you can read/change its elements via either. (This is not true of the old API with its allowable copy-in/out semantics.)

cl+ssl could then delete its own copy of ffi-buffer.lisp and ffi-buffer-clisp.lisp and use cffi's new API instead. Other users of cffi besides cl+ssl could upgrade to cffi's new API to benefit from the fast (and now correct, faster and O(1) memory) ffi-buffer-clisp.lisp that cl+ssl has long had for itself.

The reason the API is backward incompatible is that (a) make-buffer doesn't necessarily return a subtype of Lisp array and that (b) users who don't call release-buffer may leak memory.

The original versions of s/b-replace and b/s-replace were one liners so
declared to be inlined, but not all versions are one-liners now so not
all versions should be inlined.
  (+ buf-start
     (- (or seq-end (length seq))
        seq-start)))
  (defparameter *mem-max* 1024 "so *-REPLACE require the expected O(1) memory")
@avodonosov (Member) commented Nov 28, 2025

The *mem-max* is intended to limit the allocations done by memory-as, right? Question: if we copy 2048 bytes, calling memory-as twice for 1024 bytes each, is it really better than a single call for 2048 bytes? The total amount of allocated memory is the same. If it is garbage collected, collecting two objects may be more work for the GC than one object. Is it really beneficial?

Also, in context of cl+ssl, the maximum size of arrays copied with s/b-replace, b/s-replace is limited by the buffer size, which defaults to 2048 bytes. Not a big difference from the 1024 *mem-max* here.

Are you thinking of the *-replace functions as more general-purpose utilities that must be prepared for any size of collection being copied?

Member:

Aha, *mem-max* also limits the intermediate array when b/s-replace is called with a list as the sequence to be copied to the buffer. The same questions apply for that case.

@vibs29 (Contributor, Author) commented Nov 28, 2025

*mem-max* ensures the working set size that s/b-replace and b/s-replace require is O(1) and not proportional to the size of input arrays. It may be defined as any number, but it must be a number. Otherwise that won't be true any more.

The *-replace functions seem inspired by the Lisp replace function, which is assumed to require constant memory, so *-replace should also be made constant memory otherwise their users will be surprised. It can't use constant memory, but it can use constant active (working) memory. Yes, that means the garbage collector must get involved but we can't do better.

If you make the number 2048, future programmers might not understand this and may think it needs to be kept in sync with the other 2048. Also, yes, the buffer code is at a lower level than code that uses it and should be independent of such code (e.g. streams.lisp). It really belongs in cffi. (And may someday get there: cffi/cffi#421 but that's a separate matter.) *-replace should not require memory proportional to their input arrays, which arrays you can imagine someone changing in the future to 1 MiB or 16 MiB, not expecting that every time they call *-replace they are using another 16 MiB of memory (which they may not even have and thus crash), making the replace word very misleading.

Whether the garbage collector collects one piece of data or two is immaterial. What's material is that the two numbers be kept separate, even into the future. Make it 2048 and future programmers are less likely to understand this. But it's fine to make it 2048 if future programmers can be made to understand this some other way. The simplest way I could manage was what I did. What matters is to keep the two numbers separate and not think they must be varied together. Anything that forever keeps *-replace at O(1) working memory is correct.

update: *default-buffer-size* is public so users may make it 128 GiB for some of their streams. *-replace certainly shouldn't require an additional active 128 GiB per call in that case. So *mem-max* is even more important than I'd thought. And it must have a constant value not settable by the user.
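The O(1)-working-memory behaviour discussed above amounts to copying in *mem-max*-sized chunks, so the largest temporary is bounded no matter how big the inputs are. A conceptual sketch only, not the actual patch; copy-chunk is a hypothetical stand-in for the memory-as based transfer of one chunk.

```lisp
;; Conceptual sketch: iterate the copy in *mem-max*-sized pieces so the
;; per-call temporary never grows with the total length being copied.
(defun chunked-copy (total-len copy-chunk)
  (loop for start from 0 below total-len by *mem-max*
        do (funcall copy-chunk start (min total-len (+ start *mem-max*)))))
```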

@avodonosov (Member) commented Dec 1, 2025

My comments about the Eventual Design Ideal:

The cffi:with-pointer-to-vector-data is not a bad API, it's just intended for a different use case. Namely, for applications which want to call C functions like read or SSL_read directly, and have those functions directly populate a lisp sequence, with zero intermediate copies. On lisp implementations that provide a zero-copy cffi:with-pointer-to-vector-data it is an ideal API for that case: give a C function a native foreign pointer to the memory underlying a lisp sequence, and then work with the sequence in Lisp using standard sequence functions.

In cl+ssl we have a different arrangement: we read into a buffer and then copy from the buffer to another sequence. So we may want better support in CFFI for this use case without deprecating cffi:with-pointer-to-vector-data.

Next, I think there is no need for CFFI to introduce an opaque buffer abstraction.

Applications can just use foreign-alloc and foreign-free for buffer allocation/release, and mem-ref and mem-aref for element-wise access.

The only thing missing is bulk copy between a foreign memory array and a lisp sequence (b/s-replace, s/b-replace in terms of cl+ssl). If CFFI introduced functions for that, it would be feature-complete for the use case we have in cl+ssl.

A big design question is how generic these bulk copy functions should be with respect to lisp sequence type: support only simple vectors, any vectors, or both vectors and lists. This depends on the possibilities we can reasonably expect most lisp implementations can efficiently provide.

@avodonosov (Member) commented Dec 1, 2025

Moreover, cffi:with-pointer-to-vector-data, if implemented for a particular lisp as a zero-copy operation, is perfectly suitable for the cl+ssl use case.

The only problem is that for some lisps cffi:with-pointer-to-vector-data performs two redundant copies of the vector data (in the worst case even iterating over its elements one by one).

So the question is: can cffi provide more primitive bulk-copying operations that are supported on more lisps than currently support the zero-copy cffi:with-pointer-to-vector-data?

A relevant doc: https://github.com/cffi/cffi/blob/master/doc/mem-vector.txt

@avodonosov (Member)

I understood what may be the motivation for an opaque buffer abstraction in cffi: if efficient bulk-copying operations are not available on all lisps, then some lisps may implement the buffer using shareable-vector/wptvd and other lisps with foreign-alloc / bulk copy.

s/b-replace had O(n^2) running time for an input seq of length n. Now
it is O(n).

Also, s/b-replace and b/s-replace now correctly raise errors when
bounding indices are out of bounds, whereas previously they would
sometimes effectively shrink illegal arguments to become legal, which
was unintentionally different from how replace behaves. Their behaviour
is modelled on replace's.

b/s-replace also wouldn't work for zero-length buffer/sequence. Now it
does. This bug caused no harm because it is never called with zero-length
buffer/sequence.
@avodonosov (Member)

@vibs29, please help me see the O(n^2) running time of s/b-replace prior to commit 61444d (the commit comment says it's one of its fixes). Is it because, by calling replace repeatedly for sequences of type list and increasing :start1 every time, we make replace iterate the list from the start again and again?

PS: I started work on this PR from integrating unit tests based on your examples, but these days I am somewhat busy with other things, so the progress is slow. I will continue in the coming days.

@vibs29 (Contributor, Author) commented Dec 4, 2025

Hi Anton. Yes, exactly right!
Super, thanks for taking a look at all this. A robust cl+ssl for clisp will be great.


I felt a bit guilty about adding commits to the same branch, but didn't know what else to feasibly do. I didn't dare submit my additional tests for the last commit in the comment above. But I think I'll paste them here now just so everything is out there and not sitting privately on my computer where nobody can see it at all. They don't use the official testing framework, which I still haven't taken the time to get the hang of. But they were still useful to me to verify that what I'd finally written did expose (latent) problems in the code before the last commit, problems that the last commit solved. It's totally fine to ignore this completely.

(in-package "CL+SSL")

(defun call-with-buffer (size fn)
  (let ((buf (make-buffer size)))
    (unwind-protect
      (funcall fn buf)
      (release-buffer buf))))

(defun expect-error (fn message)
  (assert (not (ignore-errors
                 (funcall fn)
                 t))
          ()
          message))

(defun test-s/b-replace-error-on-large-end1 ()
  (call-with-buffer
    8
    #'(lambda (buf)
        (expect-error #'(lambda () (s/b-replace (list 0 1 2 3) buf :end1 8))
                      "s/b-replace error expected on bad bounding index"))))

;(replace nil nil) should work but I didn't support it earlier.
(defun test-b/s-replace-none ()
  (mapc #'(lambda (buflen)
            (let ((buf (make-buffer buflen)))
              (unwind-protect
                (mapc #'(lambda (empty)
                          (b/s-replace buf empty
                                       :start1 0 :end1 0 :start2 0 :end2 0))
                      (list '() #()))
                (release-buffer buf))))
        '(4 0)))

;tests b/s-replace when seq end2 is specified to be beyond boundary.
;it should raise an error.
(defun test-b/s-replace-error-on-large-end2 ()
  (call-with-buffer
    2
    #'(lambda (buf)
        (mapc #'(lambda (seq)
                  (expect-error
                    #'(lambda () (b/s-replace buf seq :end2 2))
                    "b/s-replace error expected on bad bounding index"))
              (list '(0) #(0))))))

(defun test-all ()
  (test-s/b-replace-error-on-large-end1)
  (test-b/s-replace-none)
  (test-b/s-replace-error-on-large-end2))

avodonosov added a commit that referenced this pull request Dec 10, 2025
…o that failure or exception report are more readable; fix buffer-equal; little more test cases. re #196
avodonosov added a commit that referenced this pull request Dec 11, 2025
… to b/s-replace, more test cases for b/s-replace. re #196
avodonosov added a commit that referenced this pull request Dec 12, 2025
avodonosov added a commit that referenced this pull request Dec 13, 2025
…e b/s-replace tests us - generate a separate test name for every case; extend the test cases for sequences of type list. re #196
avodonosov added a commit that referenced this pull request Dec 13, 2025
avodonosov pushed a commit that referenced this pull request Dec 13, 2025
…lace, including foreign buffer memory corruption by b/s-replace; also guarantee O(1) working memory usage by s/b-replace and b/s-replace, even if the buffer and sequence sizes are huge. re #196
avodonosov added a commit that referenced this pull request Dec 14, 2025
avodonosov pushed a commit that referenced this pull request Dec 14, 2025
…lace, including foreign buffer memory corruption by b/s-replace; also guarantee O(1) working memory usage by s/b-replace and b/s-replace, even if the buffer and sequence sizes are huge. re #196
avodonosov added a commit that referenced this pull request Dec 14, 2025
…t readtable case (instead of just interning lower case names). re #196
avodonosov added a commit that referenced this pull request Dec 14, 2025
…r version of the pull request by vibs29 (instead of the current constant), so the tests can modify it and it can be controlled dynamically if needed. re #196
avodonosov pushed a commit that referenced this pull request Dec 20, 2025
…ogical case in the old version: writing 1 byte, then 2048 times writing 2049 bytes, would cause 4096 flushes; now that will cause only 2049 flushes). re #196