LMDeploy Distserve #3304
Conversation
Force-pushed 636fb5b to 94eee2b: …pus to ray.init for run in dlc
lmdeploy/cli/serve.py (Outdated)

```diff
@@ -1,5 +1,5 @@
 # Copyright (c) OpenMMLab. All rights reserved.

+from lmdeploy.disagg.messages import EngineRole, MigrationBackend, MigrationTransportProtocol
```
We can put this import after line 307 to avoid unnecessary import time.
1. [PD Connection more efficiently][High Priority] In the DSV3 DP + EP condition, we need to concurrently construct prefill_dp_size (for example 32) * decode_dp_size (for example 144) links. We add a function `pd_consolidation_multi_thread` to do this. However, we need to check whether the construction operation is thread safe.
2. [Combine with proxy] Maybe we should save conn_config to avoid repeated reconnection of the PD link.
3. [PD Control Plane][High Priority] For DP + EP, we need to reconstruct DisaggEngineConfig to record more information (e.g. dp_idx, tp_idx ...).
4. [Combine with router][Important] How to perform PD load balancing in disaggregated LLM serving.
5. [PD Data Plane] Adapt to open-source KVCache managers like Mooncake, infiniStore or NiXL, and support more transport media.
…rve-micro-batch
lmdeploy/pytorch/engine/engine.py (Outdated)

```diff
@@ -353,6 +363,9 @@ def __init__(self,
         self._start_loop()
         self._loop_main = None

+        # for migration loop management
+        self.migration_event = asyncio.Event()
```
The engine is lazily started, since we might not have an event loop yet when creating the engine. I don't know if it is safe to initialize an asyncio.Event here.
```diff
 if resp.type == ResponseType.SUCCESS:
-    token_ids = resp.data['token_ids'].tolist()
+    token_ids = resp.data['token_ids']
```
EngineInstance would output an ndarray instead of list[int]. Is that acceptable, @lvhan028?
No, it's not
What lmdeploy-distserve includes:
State of lmdeploy-distserve:
Next Steps
Initialization
The PD consolidation process outlines the sequential steps for establishing peer-to-peer (P2P) connections between system components. The process begins with the Router, which acts as the central orchestrator. First, the Router initiates the connection setup by sending a p2p_initialize message to both the Prefill Server and the Decode Server. This ensures that all necessary components are prepared for the subsequent connection phase.

Once the initialization phase is complete for both the Prefill Server and the Decode Server, the Router proceeds to establish the actual P2P connections. It sends a p2p_connect message to the Prefill Server to finalize the connection, followed by another p2p_connect message to the Decode Server. This systematic approach ensures that all components are properly initialized before any connections are established, forming a cohesive network during the system's startup phase.
Control Plane
The diagram illustrates the workflow and interactions between various components involved in the system's prefill and decode processes. This process is designed to manage tasks efficiently, ensuring smooth operation and scalability.
Prefill Process:

- The Prefill Server initiates the prefill process by sending a Prefill Message to the Prefill Engine.
- The Prefill Engine processes the request and generates an `EngineOutput`, which includes details such as `FirstToken` and `CacheBlockIds`.
- The Prefill Scheduler receives the output from the Prefill Engine and manages task scheduling. Tasks are placed into a Waiting Queue with a status of `Status.WAITING`.
- Once ready, the tasks are forwarded to the Forward Executor, which processes them with a status of `Status.RUNNING`. The status is then converted to `Status.ToBeMigrated`, and the cache blocks are freed once the decode engine's migration is done.

Decode Process:
- The Decode Server sends requests to the Decode Engine, which processes the input and generates an `EngineOutput`. This output may include details like `GenToken`.
- The Decode Scheduler manages the decode tasks and places them into a Migration Queue with a status of `Status.WaitingMigration`.
- The Migration Executor processes these tasks, transitioning their status to `Status.Running`. Completed tasks are then sent back to the Forward Executor for further processing (Prefill Engine `cache_free`).

Key Features
This structured approach enables seamless coordination between components, facilitating efficient task execution and system control within the Control Plane.
Data Plane
The diagram illustrates the workflow and interactions between key components responsible for managing cache operations, migration, and load balancing. This process is designed to optimize data handling and ensure efficient resource utilization.
Prefill CacheEngine:

- The Prefill CacheEngine handles caching operations for prefill tasks. It interacts with `MigrationBackend.Store` to store cached data, which can be migrated or loaded as needed.

Decode CacheEngine:

- The Decode CacheEngine manages caching operations for decode tasks. It interacts with `MigrationBackend.Load` to retrieve cached data when required.

Optional Store Component:

- An optional Store component is included, which can be utilized for additional storage needs. This component may interact with `MigrationBackend.Store` to manage persistent storage or backup mechanisms.

Migration Operations:

- Both the Prefill CacheEngine and Decode CacheEngine use the `MigrationBackend.Migrate` functionality to migrate cached data as necessary. This ensures that cached data can be moved efficiently between different storage locations or systems, maintaining data consistency and availability.

Key Features
This structured approach enables seamless coordination between components, facilitating efficient data handling and system control within the Data Plane.
How to build
```shell
pip install dlslime==0.0.1.post2
pip install -v -e .
```
How to Run
Step 1. Start Prefill Engine
Step 2. Start Decode Engine
Step 3. Start Router
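A hypothetical end-to-end launch might look like the following (flag names such as `--role`, the ports, and `$MODEL_PATH` are assumptions for illustration and may differ from the final PR):

```shell
# Step 1: start the prefill engine (hypothetical --role flag)
lmdeploy serve api_server $MODEL_PATH --server-port 23333 --role Prefill

# Step 2: start the decode engine
lmdeploy serve api_server $MODEL_PATH --server-port 23334 --role Decode

# Step 3: start the router/proxy that consolidates the two engines
lmdeploy serve proxy --server-port 8000
```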