internal/dag: extract DAG processors from the main builder #2847
Conversation
jpeach
left a comment
I think that this is a useful step towards breaking down the monolith. The main improvement I'd like is to remove the SetBuilder method.
> Also, it'd be nice for the Processors to have a read-only view on the KubernetesCache.

Agreed; this is closely related to #2683, where I'd like to be able to shuffle the controller-runtime cache in as the Source.

> look at possibly moving the Processors into their own package(s)

I'd support this so that we can enforce clean interface usage.

> decide if we want to have Processors operate directly on a DAG rather than using a Builder/BuildContext

I'm pretty sure it's possible to just have processors make passes over the DAG and not have a separate build context. That would be my preferred solution.
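To illustrate the read-only-cache idea, here is one possible shape for such a view; the interface and its method names are hypothetical, not the actual KubernetesCache API:

```go
package dag

import v1 "k8s.io/api/core/v1"

// CacheReader is a hypothetical read-only view of the KubernetesCache.
// A processor that depends only on this interface cannot mutate the cache,
// so data flows one way: cache -> processors -> DAG.
type CacheReader interface {
	// These lookup methods are illustrative names only.
	LookupSecret(namespace, name string) (*v1.Secret, bool)
	LookupService(namespace, name string) (*v1.Service, bool)
}
```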
I think there are some nice things about keeping the Builder/BuildContext and the final DAG separate - you can have a different API for builders of the DAG vs. consumers of the DAG, and also you know that a DAG object is always a "final" representation rather than in the process of being built.
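To make that separation concrete, a rough sketch of the two API surfaces it buys; all names here are hypothetical, not the actual internal/dag types:

```go
package dag

// Vertex is a placeholder for a node in the graph.
type Vertex interface{}

// builderView sketches the mutating API that only build-time code
// (the processors) would see while the graph is under construction.
type builderView interface {
	AddVirtualHost(name string)
	AddSecureVirtualHost(name string)
}

// dagView sketches the read-only API handed to consumers once the graph
// is final; nothing can be mutated through it.
type dagView interface {
	Visit(fn func(Vertex))
}
```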
stevesloka
left a comment
/lgtm
jpeach
left a comment
Looks great; a few optional suggestions.
    // if there are virtual hosts and secure virtual hosts already
    // defined in the builder.
    func (p *ListenerProcessor) Run(builder *Builder) {
        p.builder = builder

To be safe, add:

    defer func() {p.builder = nil}()

    // Run translates Ingresses into DAG objects and
    // adds them to the DAG builder.
    func (p *IngressProcessor) Run(builder *Builder) {
        p.builder = builder

To be safe, add:

    defer func() {p.builder = nil}()

    // adds them to the DAG builder.
    func (p *HTTPProxyProcessor) Run(builder *Builder) {
        p.builder = builder
        p.orphaned = make(map[types.NamespacedName]bool, len(p.orphaned))

To be safe, add:

    defer func() {
        p.builder = nil
        p.orphaned = nil
    }()
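Put together, the suggested cleanup would look roughly like the sketch below. The Builder and processor types are simplified stand-ins for the real internal/dag types, shown only to illustrate the point: no stale Builder reference or orphaned map survives between runs.

```go
package main

import "fmt"

// Builder and HTTPProxyProcessor are simplified stand-ins for the real
// types in internal/dag.
type Builder struct{}

type HTTPProxyProcessor struct {
	builder  *Builder
	orphaned map[string]bool // the real field keys by types.NamespacedName
}

// Run wires the processor to the builder for the duration of one build and
// then clears its references, so no stale state survives between runs.
func (p *HTTPProxyProcessor) Run(builder *Builder) {
	p.builder = builder
	p.orphaned = make(map[string]bool)
	defer func() {
		p.builder = nil
		p.orphaned = nil
	}()

	// ... translate HTTPProxy objects into DAG entries here ...
	fmt.Println("httpproxy pass complete")
}

func main() {
	p := &HTTPProxyProcessor{}
	p.Run(&Builder{})
}
```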
    func (p *pluggableProcessor) Run(builder *Builder) {
        p.runFunc(builder)
    }

I've generally found it's pretty useful to add a function adaptor at the API layer, e.g.

    type BuildProcessorFunc func(*Builder)

    func (b BuildProcessorFunc) Build(builder *Builder) {
        if b != nil {
            b(builder)
        }
    }

Maybe add this in a subsequent PR?
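For context, the adaptor works like net/http's HandlerFunc: it lets a plain function satisfy the processor interface without declaring a new type. A minimal, self-contained sketch follows; the Builder and Processor types are stand-ins, not the real ones:

```go
package main

import "fmt"

// Builder stands in for the real dag.Builder; Processor is a hypothetical
// stand-in for whatever interface the builder ends up invoking.
type Builder struct{}

type Processor interface {
	Build(*Builder)
}

// BuildProcessorFunc is the adaptor suggested above: it lets a plain
// function satisfy the Processor interface, in the same spirit as
// net/http's HandlerFunc.
type BuildProcessorFunc func(*Builder)

func (b BuildProcessorFunc) Build(builder *Builder) {
	if b != nil {
		b(builder)
	}
}

func main() {
	// An ad-hoc processor can now be supplied without declaring a new type.
	var p Processor = BuildProcessorFunc(func(b *Builder) {
		fmt.Println("ad-hoc pass ran")
	})
	p.Build(&Builder{})
}
```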
I think that some go into the DAG and some get subsumed by the resource cache.
I think that the DAG-only approach allows builders to be more independent. You can drop in a builder that manipulates the DAG without having to make corresponding changes to a build context. I suspect that in the medium term, this makes the API more robust.
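A small sketch of what such an independent pass over a finished DAG might look like; the graph types here are simplified stand-ins, not the real internal/dag implementation:

```go
package main

import "fmt"

// Vertex, Listener, and DAG are simplified stand-ins for the graph types
// in internal/dag.
type Vertex interface{}

type Listener struct{ Port int }

type DAG struct {
	roots []Vertex
}

// Visit calls fn for every root vertex in the graph.
func (d *DAG) Visit(fn func(Vertex)) {
	for _, v := range d.roots {
		fn(v)
	}
}

// countListeners is a hypothetical pass over a finished DAG. Because it
// needs only the graph, it can be dropped in or removed without touching
// any Builder/BuildContext types, which is the independence argued above.
func countListeners(d *DAG) int {
	n := 0
	d.Visit(func(v Vertex) {
		if _, ok := v.(*Listener); ok {
			n++
		}
	})
	return n
}

func main() {
	d := &DAG{roots: []Vertex{&Listener{Port: 80}, &Listener{Port: 443}}}
	fmt.Println(countListeners(d)) // 2
}
```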
youngnick
left a comment
LGTM too.
Codecov Report
@@            Coverage Diff             @@
##             main    #2847      +/-   ##
==========================================
+ Coverage   76.44%   76.46%   +0.01%
==========================================
  Files          74       77       +3
  Lines        5804     5833      +29
==========================================
+ Hits         4437     4460      +23
- Misses       1272     1278       +6
  Partials       95       95
Extracts the logic specific to processing HTTPProxy, Ingress
APIs into their own processors that are invoked by the DAG
builder. Also extracts a listener processor to add HTTP/HTTPS
listeners after the other processors have run.
Signed-off-by: Steve Kriss [email protected]
updates #2226
Heavily inspired by all the work @stevesloka has done to date, and previous conversations between the maintainers.
I think there's still some more refactoring that can be done here, so this isn't necessarily the final state, but it does hit the goal of being able to separate out processors for different APIs into their own types and turn them on/off at runtime. I don't want this PR to get too large.
A few areas I have in mind for followup:
- the relationships between the Builder, the Processors, and the KubernetesCache. Specifically, I'd like the Processors not to depend on a full Builder, but a subset of it - something like a BuildContext (or we could move to using the DAG here). Also, it'd be nice for the Processors to have a read-only view on the KubernetesCache.
- look at possibly moving the Processors into their own package(s)
- decide if we want to have Processors operate directly on a DAG rather than using a Builder/BuildContext
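For readers skimming the diff, a minimal sketch of the overall shape this PR aims for: a builder that invokes a configurable list of processors, which is what makes it possible to turn individual API processors on or off at runtime. All types below are simplified stand-ins rather than the real internal/dag code:

```go
package main

import "fmt"

// Builder and Processor are simplified stand-ins for the types this PR
// introduces in internal/dag; the real signatures may differ.
type Builder struct {
	// Processors are run in order; composing this slice at startup is how
	// individual API processors can be enabled or disabled.
	Processors []Processor
}

type Processor interface {
	Run(*Builder)
}

// Build invokes each configured processor in turn.
func (b *Builder) Build() {
	for _, p := range b.Processors {
		p.Run(b)
	}
}

// ingressProcessor and httpProxyProcessor are illustrative only.
type ingressProcessor struct{}

func (ingressProcessor) Run(*Builder) { fmt.Println("ingress pass") }

type httpProxyProcessor struct{}

func (httpProxyProcessor) Run(*Builder) { fmt.Println("httpproxy pass") }

func main() {
	enableIngress := true // e.g. driven by a command-line flag

	b := &Builder{Processors: []Processor{httpProxyProcessor{}}}
	if enableIngress {
		b.Processors = append(b.Processors, ingressProcessor{})
	}
	b.Build()
}
```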