
Tags: dapr/dapr


v1.16.7-rc.1


Verified

This commit was created on GitHub.com and signed with GitHub’s verified signature.
[1.16] Fix HTTPEndpoint ClientTLS parsing of only root CA (#9296)

Problem

Attempting to use an HTTPEndpoint with only a root CA defined in the client TLS block would result in a certificate parse error.

Impact

Applications using the HTTPEndpoint component with a root CA defined in the ClientTLS block would fail to start without also defining a valid client TLS certificate and key.

Root Cause

In Kubernetes mode, the HTTPEndpoint client TLS block was incorrectly defaulting the certificate and key fields to empty strings, causing the certificate parsing to fail when only a root CA was defined.

Solution

The HTTPEndpoint component has been updated to correctly handle cases where only a root CA is defined in the ClientTLS block, allowing successful initialization without requiring client certificate and key.
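The corrected behavior can be sketched as follows. This is a hypothetical reconstruction for illustration (the function and its signature are not Dapr's actual code): a root CA alone yields a valid client TLS config, and certificate/key parsing runs only when both values are actually set.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
)

// buildClientTLS sketches the corrected logic: a client TLS config is valid
// with only a root CA, and a certificate/key pair is parsed only when both
// fields are set (previously, empty-string defaults forced the parse, which
// then failed).
func buildClientTLS(rootCA, cert, key []byte) (*tls.Config, error) {
	cfg := &tls.Config{MinVersion: tls.VersionTLS12}

	if len(rootCA) > 0 {
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(rootCA) {
			return nil, errors.New("failed to parse root CA PEM")
		}
		cfg.RootCAs = pool
	}

	switch {
	case len(cert) > 0 && len(key) > 0:
		pair, err := tls.X509KeyPair(cert, key)
		if err != nil {
			return nil, fmt.Errorf("parsing client certificate: %w", err)
		}
		cfg.Certificates = []tls.Certificate{pair}
	case len(cert) > 0 || len(key) > 0:
		return nil, errors.New("client certificate and key must both be set")
	}
	// A root CA alone (or no TLS material at all) falls through successfully.
	return cfg, nil
}

func main() {
	cfg, err := buildClientTLS(nil, nil, nil)
	fmt.Println(cfg != nil, err == nil) // true true
}
```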

Signed-off-by: joshvanl <[email protected]>

v1.17.0-rc.1

Sync release 1.16 branch to master (#9273)

* Blocked Placement dissemination with high Scheduler dataset (#9129)

Problem

Disseminations would hang for long periods of time when the Scheduler dataset was large.

Impact

Dissemination could take hours to complete, leaving reminders undelivered for long periods.

Root Cause

The migration of reminders from the state store to the Scheduler performs a full decoded scan of the Scheduler database, which takes a long time when there are many entries. Dissemination was blocked for the duration of that scan.

Solution

Limit the maximum time spent on the migration to 3 seconds.
Expose a new `global.reminders.skipMigration="true"` Helm chart value that skips the migration entirely.
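Operators who want to opt out of the migration can set the new chart value; a minimal values-file sketch, assuming the standard nested layout of the Dapr Helm chart's `global` values:

```yaml
# values.yaml fragment: skip the state store -> Scheduler reminder migration.
global:
  reminders:
    skipMigration: "true"
```

Equivalently, pass `--set global.reminders.skipMigration="true"` on the `helm upgrade` command line.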

Signed-off-by: joshvanl <[email protected]>

* release notes

Signed-off-by: Cassandra Coyle <[email protected]>

* feat: update components-contrib. go-avro codec fix (#9141)

Signed-off-by: nelson.parente <[email protected]>

* basic fix and tests

Signed-off-by: Cassandra Coyle <[email protected]>

* fix CI

Signed-off-by: Cassandra Coyle <[email protected]>

* release notes

Signed-off-by: Cassandra Coyle <[email protected]>

* read APP_API_TOKEN once, plumb thru both gRPC & HTTP, remove per-request env reads

Signed-off-by: Cassandra Coyle <[email protected]>

* Updated components contrib, and added release notes (#9153)

Signed-off-by: Albert Callarisa <[email protected]>

* [1.16] Prevent infinite loop when workflow state is corrupted or destroyed (#9158)

* Prevent infinite loop when workflow state is corrupted or destroyed

Problem

Dapr workflows could enter an infinite reminder loop when the workflow state in the actor state store is corrupted or destroyed.

Impact

Dapr workflows would enter an infinite loop of reminder calls.

Root Cause

When a workflow reminder is triggered, the workflow state is loaded from the actor state store. If the state is corrupted or destroyed, the workflow would not be able to progress and would keep re-triggering the same reminder indefinitely.

Solution

Do not retry the reminder if the workflow state cannot be loaded, and instead log an error and exit the workflow execution.

Signed-off-by: joshvanl <[email protected]>

* lint

Signed-off-by: joshvanl <[email protected]>

* Add eventually for metrics

Signed-off-by: joshvanl <[email protected]>

---------

Signed-off-by: joshvanl <[email protected]>

* add root cause and fix link to section (#9206)

Signed-off-by: Cassandra Coyle <[email protected]>

* [1.16] Perf Charts (#9209)

* charts

Signed-off-by: Cassandra Coyle <[email protected]>

* add readmes

Signed-off-by: Cassandra Coyle <[email protected]>

* add charts.go

Signed-off-by: Cassandra Coyle <[email protected]>

* add go.mod

Signed-off-by: Cassandra Coyle <[email protected]>

* update charts with 1.16 perf numbers

Signed-off-by: Cassandra Coyle <[email protected]>

* tweak y axis for data volume charts

Signed-off-by: Cassandra Coyle <[email protected]>

* fix bullet

Signed-off-by: Cassandra Coyle <[email protected]>

* add service invocation, will rm from docs bc this is now the src of truth

Signed-off-by: Cassandra Coyle <[email protected]>

* add actors perf results

Signed-off-by: Cassandra Coyle <[email protected]>

---------

Signed-off-by: Cassandra Coyle <[email protected]>
Co-authored-by: Josh van Leeuwen <[email protected]>

* [1.16] Workflow Charts readme (#9229)

* one readme with all charts

Signed-off-by: Cassandra Coyle <[email protected]>

* add chart titles to markdown for clarity

Signed-off-by: Cassandra Coyle <[email protected]>

---------

Signed-off-by: Cassandra Coyle <[email protected]>

* [1.16] Deleted Jobs in all prefix matching deleted Namespaces (#9232)

* Deleted Jobs in all prefix matching deleted Namespaces

Problem

Deleting a namespace in Kubernetes deletes all the jobs associated with that namespace.
However, if any other namespace's name had the deleted namespace's name as a prefix, the jobs in those namespaces were also deleted (e.g. deleting namespace "test" would also delete jobs in namespaces "test-1" and "test-abc").

Impact

Deleting a namespace would also delete jobs in any other namespace whose name started with the deleted namespace's name.

Root Cause

The prefix-matching logic did not terminate the prefix with a separator to force an exact namespace match, so deleting a namespace matched, and deleted, jobs in every namespace whose name started with the deleted namespace's name.

Solution

The prefix logic has been updated to ensure that only jobs in the exact deleted namespace are deleted.
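The bug and the fix can be illustrated with a small sketch. The `"<namespace>/<name>"` key layout is an assumption for illustration, not necessarily go-etcd-cron's actual schema: an unterminated prefix match on the bare namespace name also matches "test-1/…", while closing the prefix with the key separator restricts it to the exact namespace.

```go
package main

import (
	"fmt"
	"strings"
)

// jobsToDelete returns the job keys matched when deleting a namespace.
// With terminated=false it reproduces the bug; with terminated=true it
// applies the fix of closing the prefix with the key separator.
func jobsToDelete(keys []string, namespace string, terminated bool) []string {
	prefix := namespace
	if terminated {
		prefix += "/" // the fix: terminate the prefix at the separator
	}
	var out []string
	for _, k := range keys {
		if strings.HasPrefix(k, prefix) {
			out = append(out, k)
		}
	}
	return out
}

func main() {
	keys := []string{"test/job1", "test-1/job2", "test-abc/job3"}
	fmt.Println(jobsToDelete(keys, "test", false)) // buggy: all three match
	fmt.Println(jobsToDelete(keys, "test", true))  // fixed: only test/job1
}
```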

Signed-off-by: joshvanl <[email protected]>

* Lint

Signed-off-by: joshvanl <[email protected]>

* Fix unit tests

Signed-off-by: joshvanl <[email protected]>

* Point go-etcd-cron to joshvanl fork diagridio/go-etcd-cron#104

Signed-off-by: joshvanl <[email protected]>

* Update github.com/diagridio/go-etcd-cron to v0.9.2

Signed-off-by: joshvanl <[email protected]>

---------

Signed-off-by: joshvanl <[email protected]>

* rm skip migration bc it should not be present in master and was already removed

Signed-off-by: Cassandra Coyle <[email protected]>

* make modtidy-all

Signed-off-by: Cassandra Coyle <[email protected]>

---------

Signed-off-by: joshvanl <[email protected]>
Signed-off-by: Cassandra Coyle <[email protected]>
Signed-off-by: nelson.parente <[email protected]>
Signed-off-by: Albert Callarisa <[email protected]>
Co-authored-by: Josh van Leeuwen <[email protected]>
Co-authored-by: Nelson Parente <[email protected]>
Co-authored-by: Albert Callarisa <[email protected]>
Co-authored-by: Dapr Bot <[email protected]>

v1.16.6

[1.16] Update UID check to check only root (#9275)

* Update UID check to check only root

Problem

By default when running in Kubernetes, Dapr checked that the process was running with UID and GID 65532, and exited with an error otherwise.

Impact

This caused issues for users running in OpenShift/OKD environments, where the platform assigns random UIDs to pods for security reasons.

Root Cause

The check was stricter than necessary: only the root user (UID 0) needs to be blocked, while all other UIDs should be allowed.

Solution

Only check for root (UID 0) and allow all other UIDs to run Dapr.

Fixes #9132
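The relaxed check reduces to a single comparison; a hypothetical sketch (`checkUID` is an illustrative name, not Dapr's actual code):

```go
package main

import "fmt"

// checkUID sketches the relaxed check: only UID 0 (root) is rejected, so
// platforms such as OpenShift/OKD that assign arbitrary non-root UIDs can
// run Dapr. Previously, any UID other than 65532 was rejected.
func checkUID(uid int) error {
	if uid == 0 {
		return fmt.Errorf("running as root (UID 0) is not allowed")
	}
	return nil // any non-root UID is accepted, not just 65532
}

func main() {
	fmt.Println(checkUID(65532) == nil)      // true: the usual Dapr UID
	fmt.Println(checkUID(1000840000) == nil) // true: OpenShift-style random UID
	fmt.Println(checkUID(0) == nil)          // false: root is still blocked
}
```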

Signed-off-by: joshvanl <[email protected]>

* lint

Signed-off-by: joshvanl <[email protected]>

* Fix panic

Signed-off-by: joshvanl <[email protected]>

---------

Signed-off-by: joshvanl <[email protected]>

v1.16.5

[1.16] Propagation of traceparent in grpc pubsub (#9237)

* chore: Add traceparent to grpc output metadata

Signed-off-by: Javier Aliaga <[email protected]>

* chore: Update component contrib 1.16.5

Signed-off-by: Javier Aliaga <[email protected]>

---------

Signed-off-by: Javier Aliaga <[email protected]>

v1.16.4

[1.16] Deleted Jobs in all prefix matching deleted Namespaces (#9232)

* Deleted Jobs in all prefix matching deleted Namespaces

Problem

Deleting a namespace in Kubernetes deletes all the jobs associated with that namespace.
However, if any other namespace's name had the deleted namespace's name as a prefix, the jobs in those namespaces were also deleted (e.g. deleting namespace "test" would also delete jobs in namespaces "test-1" and "test-abc").

Impact

Deleting a namespace would also delete jobs in any other namespace whose name started with the deleted namespace's name.

Root Cause

The prefix-matching logic did not terminate the prefix with a separator to force an exact namespace match, so deleting a namespace matched, and deleted, jobs in every namespace whose name started with the deleted namespace's name.

Solution

The prefix logic has been updated to ensure that only jobs in the exact deleted namespace are deleted.

Signed-off-by: joshvanl <[email protected]>

* Lint

Signed-off-by: joshvanl <[email protected]>

* Fix unit tests

Signed-off-by: joshvanl <[email protected]>

* Point go-etcd-cron to joshvanl fork diagridio/go-etcd-cron#104

Signed-off-by: joshvanl <[email protected]>

* Update github.com/diagridio/go-etcd-cron to v0.9.2

Signed-off-by: joshvanl <[email protected]>

---------

Signed-off-by: joshvanl <[email protected]>

v1.16.3

chore: Update component-contrib dep (#9204)

Signed-off-by: Javier Aliaga <[email protected]>
Co-authored-by: Cassie Coyle <[email protected]>

v1.16.2

[1.16] Prevent infinite loop when workflow state is corrupted or destroyed (#9158)

* Prevent infinite loop when workflow state is corrupted or destroyed

Problem

Dapr workflows could enter an infinite reminder loop when the workflow state in the actor state store is corrupted or destroyed.

Impact

Dapr workflows would enter an infinite loop of reminder calls.

Root Cause

When a workflow reminder is triggered, the workflow state is loaded from the actor state store. If the state is corrupted or destroyed, the workflow would not be able to progress and would keep re-triggering the same reminder indefinitely.

Solution

Do not retry the reminder if the workflow state cannot be loaded, and instead log an error and exit the workflow execution.

Signed-off-by: joshvanl <[email protected]>

* lint

Signed-off-by: joshvanl <[email protected]>

* Add eventually for metrics

Signed-off-by: joshvanl <[email protected]>

---------

Signed-off-by: joshvanl <[email protected]>

v1.15.13

Updated components contrib, and added release notes (#9152)

Signed-off-by: Albert Callarisa <[email protected]>

v1.16.2-rc.2

Blocked Placement dissemination with high Scheduler dataset (#9129)

Problem

Disseminations would hang for long periods of time when the Scheduler dataset was large.

Impact

Dissemination could take hours to complete, leaving reminders undelivered for long periods.

Root Cause

The migration of reminders from the state store to the Scheduler performs a full decoded scan of the Scheduler database, which takes a long time when there are many entries. Dissemination was blocked for the duration of that scan.

Solution

Limit the maximum time spent on the migration to 3 seconds.
Expose a new `global.reminders.skipMigration="true"` Helm chart value that skips the migration entirely.

Signed-off-by: joshvanl <[email protected]>

v1.16.2-rc.1

Merge pull request #9123 from JoshVanL/fix-placement-double-lock-1.16

[1.16] Fix placement client double lock