Add `open_url`/`close_url` notebook APIs #10602
Conversation
Web viewer built successfully. If applicable, you should also test it.
Note: This comment is updated whenever you push a commit.
Can you add the workflow to the PR description? I see it now.
If we do:

```python
url_a = <>
url_b = <>

viewer = Viewer(<>)
viewer.open_url(url_a)
# wait to show a
time.sleep(<>)
viewer.open_url(url_b)
# wait to show b
viewer.open_url(url_a)
```

does this do the right thing by showing a, then b, then a? I don't know the next level deeper well enough to know how opening the same URL twice behaves. If we can have a bunch of things open, my first attempt at updating my code would be to open ALL my URLs, then call open a second time to bring a preloaded URL to the main view.
There are no uniqueness constraints on data sources, so this will open A, then B, then A again, resulting in three open recordings. It will fetch A's data twice.
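Since the API itself enforces no uniqueness, a caller who wants at-most-once opening can track open URLs on their side. A minimal sketch of such a wrapper — `DedupViewer` is a hypothetical helper, not part of the PR's API:

```python
class DedupViewer:
    """Hypothetical wrapper that skips re-opening URLs that are already open."""

    def __init__(self, viewer):
        self._viewer = viewer
        self._open = set()

    def open_url(self, url):
        if url in self._open:
            return  # already open: avoid fetching the same data twice
        self._viewer.open_url(url)
        self._open.add(url)

    def close_url(self, url):
        if url in self._open:
            self._viewer.close_url(url)
            self._open.discard(url)
```

With this, calling `open_url(url_a)` a second time is a no-op instead of creating a third recording.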
We don't have a way to activate a recording by its URL. If you want the exact same behavior as before, you'd do:

```python
viewer = Viewer()
current_url = None
# ...
current_url = dataset.partition_url(...)
viewer.open_url(current_url)
# ... later
viewer.close_url(current_url)
current_url = dataset.partition_url(...)
viewer.open_url(current_url)
```

But there should be no harm in leaving the older partitions open, no? So you'd have this instead:

```python
viewer = Viewer()
viewer.open_url(dataset.partition_url(...))
# later, open another one
viewer.open_url(dataset.partition_url(...))
```
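If someone does want the old single-active-partition semantics back, the close-then-open bookkeeping can be folded into a small helper. `SinglePartitionViewer` below is a hypothetical sketch built on the `open_url`/`close_url` calls from this PR, not an existing API:

```python
class SinglePartitionViewer:
    """Hypothetical helper mimicking the old set_active_partition_url behavior."""

    def __init__(self, viewer):
        self._viewer = viewer
        self._current_url = None

    def set_partition_url(self, url):
        # Close the previously active partition, then open the new one.
        if self._current_url is not None:
            self._viewer.close_url(self._current_url)
        self._viewer.open_url(url)
        self._current_url = url
```

Each call then shows exactly one partition, as `set_active_partition_url` did.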
cc @ntjohnson1, as this replaces `set_active_partition_url` - do we have to update some notebooks?

Also reworked the `_data_queue` + `_table_queue` into a single `_event_queue`, which all calls to `send` now go through. This means not only rrd/table data but all events will be buffered in the same way, and in the right order relative to each other.
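The ordering property of a single queue can be illustrated with a minimal Python sketch. This is not the PR's actual implementation, just the idea: every event kind goes through one FIFO, so events are delivered in the exact order they were sent, even across a buffering period.

```python
from collections import deque


class EventQueue:
    """Minimal sketch: one FIFO buffers every event kind in send order."""

    def __init__(self):
        self._queue = deque()
        self._connected = False
        self.delivered = []  # stand-in for actually sending to the viewer

    def send(self, event):
        # All event types (rrd data, table data, other events) share this
        # queue, so their relative order is preserved.
        self._queue.append(event)
        if self._connected:
            self._flush()

    def connect(self):
        self._connected = True
        self._flush()

    def _flush(self):
        while self._queue:
            self.delivered.append(self._queue.popleft())
```

With two separate queues, an rrd event could overtake a table event sent before it; with one queue, that interleaving bug cannot happen by construction.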