
Conversation

@slarse
Contributor

@slarse slarse commented Dec 16, 2025

🧒 Changes

Prevents the publish script from re-tagging.

If there's a desire for even more control over the release build tags, we could skip tagging for release builds. But I personally think that it's better to just let this work as it does and bump the version again if a "bad build" is found.
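
For illustration only, a minimal sketch of the behaviour this change aims for; the step name, tag naming scheme, and variables are assumptions rather than the actual workflow. The point is that a plain `git tag` (no `--force`) refuses to move an existing tag, so a "bad build" means bumping the version instead of re-pointing the tag:

```yaml
# Hypothetical sketch, not the real publish script: create the tag once, never move it.
- name: Tag release
  shell: bash
  run: |
    tag="release/${{ env.version }}"  # assumed tag naming scheme
    git tag "$tag"                    # refuses to create the tag if it already exists
    git push origin "refs/tags/$tag"  # the remote likewise rejects re-pointing an existing tag
```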

☕️ Reasoning

Re-tagging is naughty. For an elaborate explanation of why, see the "On Re-Tagging" section of the git-tag man page.

Or just go here: https://git-scm.com/docs/git-tag#_on_re_tagging
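
In short, a sketch of the failure mode with stock Git (the tag name is made up): once a clone has a tag, a plain `git fetch` will not move it, so a re-pointed tag leaves different machines building different commits under the same name.

```bash
git fetch origin                  # picks up tag release/0.5.0, pointing at commit A
git rev-parse release/0.5.0^{}    # resolves to A

# ... CI later force-moves release/0.5.0 to commit B on the remote ...

git fetch origin                  # by default an existing local tag is NOT updated
git rev-parse release/0.5.0^{}    # still A: this clone now builds something different
                                  # from what CI published under the same name
git fetch --tags --force origin   # only an explicit forced fetch moves the local tag to B
```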

🎫 Affected issues

Fixes: #11549

@vercel

vercel bot commented Dec 16, 2025

@slarse is attempting to deploy a commit to the GitButler Team on Vercel.

A member of the Team first needs to authorize it.

Collaborator

@Byron Byron left a comment


Thanks a lot for fixing this!

To me this seems like the right approach, in principle. What I am a bit anxious about is what it means to fail at the very end of the release process.


For all I know, this will already have overwritten the actual artefacts that some packagers might try to download, but which they knew under a different name.
And while this fixes source builds as we don't touch Git tags anymore, this doesn't seem like a general solution.

Maybe build-sveltekit, as the first step, could additionally start by checking that the designated tag doesn't exist yet, and fail if it does? Maybe there is a better way to achieve the same?

On another note: we also have the problem that release builds aren't released when this workflow finishes, but when a switch is flipped. This allows last-minute changes of mind under the same version number. I think it is fine to skip versions though, to keep the tagging workflow simple. If a tag doesn't end up being used, we can also delete it by hand, as this is the exceptional case.

CC @krlvi for thoughts and opinions - after all you'd be the one working with this most of the time.
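
A minimal sketch of the pre-check suggested above, assuming the tag name can be derived from `env.version` at that point and that the job checks out the repository first; job and step names are illustrative, not the actual workflow:

```yaml
# Hypothetical sketch: make the very first job fail fast if the designated tag
# already exists on the remote, before any artefacts are built or uploaded.
build-sveltekit:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Fail if the release tag already exists
      shell: bash
      run: |
        tag="release/${{ env.version }}"  # assumed tag naming scheme
        if git ls-remote --exit-code --tags origin "refs/tags/$tag" >/dev/null; then
          echo "Tag $tag already exists on origin; refusing to overwrite a published release." >&2
          exit 1
        fi
```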

@slarse
Contributor Author

slarse commented Dec 17, 2025

For all I know, this will already have overwritten the actual artefacts that some packagers might try to download, but which they knew under a different name.

Oh, you're absolutely right. It actually wouldn't affect packagers, but the links on the website will be updated, so there will be a mismatch between the tagged release and what's available there.

The only real problem I see is the Notify GitButler API of new release step running before the tagging. My understanding is that that's the job that causes the GitButler web backend to pick up on the new artifacts in S3. But... I'm thinking we can just move it to the tag job, as it doesn't actually seem to provide any platform-specific data in the POST? It's just the same POST over and over regardless of platform, so I'm guessing that POST causes the backend to simply list the currently available S3 artifacts under that release directory or something?

Someone would need to confirm exactly how that works, but just based on what I'm seeing here, it seems like it'd be fine to run that POST once per workflow rather than once per platform. If it actually needs to run once per platform because of something I'm missing, it'd be a simple thing to put it after the tag job and just include the platform matrix there.

The artifacts being uploaded to S3 isn't an issue IMO; if someone is monitoring that bucket and just grabbing stuff willy-nilly, that's their problem. Unless that someone is GitButler's web backend, in which case it might be an issue :D

If the tagging fails, we don't want the build to be published to the
website.
Comment on lines +413 to +422
```yaml
# tell our server to update with the version number
- name: Notify GitButler API of new release
  shell: bash
  run: |
    curl 'https://app.gitbutler.com/api/releases' \
      --fail \
      --request POST \
      --header 'Content-Type: application/json' \
      --header 'X-Auth-Token: ${{ secrets.BOT_AUTH_TOKEN }}' \
      --data '{"channel":"${{ needs.build-tauri.outputs.channel }}","version":"${{ env.version }}-${{ github.run_number }}","sha":"${{ github.sha }}"}'
```
Contributor Author

@slarse slarse Dec 17, 2025


Moving this here seems like a good idea, assuming this is the thing that causes the website to pick up on new artifacts in S3, and that it's sufficient to run it only once per workflow run. Which seems like it should be the case: there is no platform-specific data in the POST, and the backend has to be looking into the S3 bucket to list items, otherwise I can't figure out how it finds the artifacts.

I think this should probably be moved here also to ensure that we don't get "partial releases". Right now, it seems to me like succeeding with e.g. the macOS build but failing with the Windows build would leave us with a release that simply lacks the Windows build.
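
A sketch of what that move could look like, reusing the job references from the excerpt above and assuming `tag` is the name of the tagging job (the new job name and runner are likewise assumptions). The notification runs once per workflow, and only after the tagging and every platform build it depends on have succeeded, which also rules out announcing a partial release:

```yaml
# Hypothetical sketch: notify the backend once per workflow run, only after the
# tagging job (and therefore every platform build it depends on) has succeeded.
notify-backend:
  needs: [build-tauri, tag]  # `tag` is an assumed job name; a matrix build-tauri must fully succeed
  runs-on: ubuntu-latest
  steps:
    # tell our server to update with the version number
    - name: Notify GitButler API of new release
      shell: bash
      run: |
        curl 'https://app.gitbutler.com/api/releases' \
          --fail \
          --request POST \
          --header 'Content-Type: application/json' \
          --header 'X-Auth-Token: ${{ secrets.BOT_AUTH_TOKEN }}' \
          --data '{"channel":"${{ needs.build-tauri.outputs.channel }}","version":"${{ env.version }}-${{ github.run_number }}","sha":"${{ github.sha }}"}'
```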

@slarse
Contributor Author

slarse commented Dec 17, 2025

@Byron Feel free to just make a hostile takeover of this PR or just replace it with another one if you want to get stuff done during the day. I won't be available due to day job.

For future reference, that goes for anything I do where it's semi urgent and I can't help myself but do a rushed thing in the middle of the night. I wouldn't be offended :)

@Byron
Collaborator

Byron commented Dec 17, 2025

@Byron Feel free to just make a hostile takeover of this PR or just replace it with another one if you want to get stuff done during the day. I won't be available due to day job.

Is there something like a friendly takeover instead 😅?

Besides, I think you brought up enough very valid questions that make me think this can't continue without divine intervention of someone who can see behind the curtain of the web-backend to see what it does when it's pinged :).
We can of course still make it so that CI won't move tags anymore, which would work without that knowledge, but it would have to happen before anything else.
Doing nothing is also fine; then we'd have to be sure not to somehow recreate the same patch, something that still seems impossible to me when looking at the way the job is triggered, unless it's a race, of course.
So I really think a hint from @krlvi is needed here to maybe get some improvement merged.

@slarse
Contributor Author

slarse commented Dec 17, 2025

Is there something like a friendly takeover instead 😅?

I don't think so. I'll have to check with HR.

Besides, I think you brought up enough very valid questions that make me think this can't continue without divine intervention of someone who can see behind the curtain of the web-backend to see what it does when it's pinged :)

Bah. What's the worst that could happen?

Oh, right... 😅
