
LssMaster accumulates data with external LSS master #577


Open
sveinse opened this issue Apr 27, 2025 · 2 comments

@sveinse
Collaborator

sveinse commented Apr 27, 2025

Since #574 was closed, here is a specific issue with LssMaster in the current implementation:

If this library is used together with another LSS master present on the physical bus, the input queue of LssMaster fills up with CAN messages, eventually leading to memory exhaustion. The queue is only emptied when the LssMaster is actively used to send LSS commands. This is because Network creates an LssMaster instance and activates the subscription to the LSS messages:

self.lss = LssMaster()
self.lss.network = self
self.subscribe(self.lss.LSS_RX_COBID, self.lss.on_message_received)

canopen/canopen/lss.py, lines 396 to 397 in f1a71da:

def on_message_received(self, can_id, data, timestamp):
    self.responses.put(bytes(data))
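To make the symptom concrete, here is a minimal, hedged sketch (the bus channel and interface settings are placeholders): with another LSS master active on the bus, the queue behind network.lss only grows, because nothing ever consumes it.

import canopen

network = canopen.Network()
network.connect(channel="can0", interface="socketcan")  # placeholder bus settings

# While another LSS master and its slaves exchange LSS frames on the bus,
# every frame received on LSS_RX_COBID lands in this queue and stays there:
print(network.lss.responses.qsize())  # grows over time, never emptied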

I think the simplest solution would be to not activate the subscription callback until LssMaster.__send_command() is called. That method already drains the queue at this point (a rough sketch follows the snippet below):

canopen/canopen/lss.py, lines 377 to 379 in f1a71da:

if not self.responses.empty():
    logger.info("There were unexpected messages in the queue")
    self.responses = queue.Queue()
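A rough, untested sketch of that idea, condensed (the _subscribed flag is hypothetical and not in the current lss.py; the COB-ID constants and queue handling mirror the existing code):

import logging
import queue

logger = logging.getLogger(__name__)

class LssMaster:
    LSS_TX_COBID = 0x7E5
    LSS_RX_COBID = 0x7E4

    def __init__(self):
        self.network = None
        self.responses = queue.Queue()
        self._subscribed = False  # hypothetical flag, not in current lss.py

    def on_message_received(self, can_id, data, timestamp):
        self.responses.put(bytes(data))

    def __send_command(self, message):
        # Activate the subscription only when the master is actually used
        if not self._subscribed:
            self.network.subscribe(self.LSS_RX_COBID, self.on_message_received)
            self._subscribed = True
        # Drain anything that piled up from unrelated LSS traffic
        if not self.responses.empty():
            logger.info("There were unexpected messages in the queue")
            self.responses = queue.Queue()
        self.network.send_message(self.LSS_TX_COBID, message)
        # ... wait for the response as in the existing implementation

Network.__init__() would then no longer call self.subscribe(self.lss.LSS_RX_COBID, ...) itself.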

@acolomb
Member

acolomb commented Apr 28, 2025

Some thoughts on this:

  1. LSS messages are usually very rare on a CANopen bus.
  2. The library might be used as part of commissioning or test equipment, where LSS is one of the central elements needed. Even if it is just listening in such a scenario, the Queue would grow perpetually, as you discovered. That's not good.
  3. Activating the subscription only after the first __send_command() does not fix the problem, it merely postpones the effect. After using one LSS command as master, we would be in exactly the same situation, filling up with every unrelated LSS frame from then on.
  4. A proper fix would be to record that a command has been sent and only act upon a response while a command is active. That could be done by adding a locally scoped closure as the subscriber in __send_command(), but it would be hard to guarantee that it gets unsubscribed again.
  5. Maybe a context manager could help with a temporary subscription? (See the sketch after this list.)
  6. Other LSS frames might still be of interest, for example to get logged. For that case a permanent subscription in the LssMaster is sensible, but then the callback would need to handle each message right away instead of simply piling them up.
  7. Overall this code is not very well designed. A better pattern for managing the notification is needed, hopefully something that is also compatible with using this method in an asyncio context.
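Regarding items 4 and 5, a hedged sketch of what such a temporary subscription could look like as a context manager (temporary_subscription is an illustrative name, not an existing canopen API):

from contextlib import contextmanager

@contextmanager
def temporary_subscription(network, can_id, callback):
    # Subscribe only for the duration of the with-block and always
    # unsubscribe again, even if the command times out or raises.
    network.subscribe(can_id, callback)
    try:
        yield
    finally:
        network.unsubscribe(can_id, callback)

# Possible use inside LssMaster.__send_command():
#
# with temporary_subscription(self.network, self.LSS_RX_COBID, self.on_message_received):
#     self.network.send_message(self.LSS_TX_COBID, message)
#     response = self.responses.get(timeout=...)

The try/finally guarantees the unsubscribe even when the command times out, which addresses the concern in item 4.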

@sveinse
Collaborator Author

sveinse commented Apr 28, 2025

I'm happy to postpone this issue until we have a structure for writing unified implementations that are able to work with either regular blocking or asyncio. Then we can refactor the design to something "proper".

I'll come back later to clarify what I mean by "unified implementations"; I'm working on a general discussion about it. For this specific issue, the practical implication is that we don't fix this until after asyncio is in place.

I like the context manager idea for subscriptions, btw. That could be an elegant way to handle other protocols as well.
