Ability to run internal web server via unix socket #270

@MrRubberDucky

Description

Describe your idea for an enhancement:

I've moved away from Pocket ID and I greatly miss the ability to expose the internal web app via a Unix domain socket (wikipedia). This bypasses the TCP networking stack entirely by going straight through the kernel, which gives a speed benefit and also allows the container to run without any network attached to it. That in turn improves security a bit, since the container can't reach anything externally or internally. It also has extra benefits for users running rootless Podman or Docker, as it completely skips the usually slow rootless networking layer and allows more advanced usage such as socket activation (eriksjolund's github repo), though I think socket activation needs to be coded into the project to be reliable; see the sketch below.
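
Just to illustrate what I mean by socket-activation support, here is a minimal sketch assuming a Node/http-style server; the LISTEN_FDS check and the fd 3 convention come from systemd/podman socket activation, and the trivial handler is a placeholder, not anything VoidAuth actually has:

```typescript
import http from "node:http";

// Placeholder handler standing in for the real app (hypothetical).
const server = http.createServer((_req, res) => res.end("ok"));

if (process.env.LISTEN_FDS) {
  // systemd/podman socket activation: the pre-opened listening socket is
  // handed to the process as file descriptor 3, so listen on that fd
  // instead of opening our own TCP port.
  server.listen({ fd: 3 });
} else {
  server.listen(Number(process.env.PORT ?? 3000));
}
```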

The way Pocket ID currently does it, for example, is to let you specify a custom path to the unix socket via a UNIX_SOCKET env variable, which in turn completely disables the TCP portion of the app (not 100% sure on this one, but the documentation itself says "When set, the server will use a Unix socket instead of TCP"), and to let you set permissions on the socket via a UNIX_SOCKET_MODE env variable, e.g. 0755 (rwxr-xr-x).
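
For reference, a rough sketch of how an option like that could be wired up in a Node-based server; UNIX_SOCKET and UNIX_SOCKET_MODE here are just the Pocket ID names reused for illustration, not anything VoidAuth already exposes:

```typescript
import http from "node:http";
import fs from "node:fs";

const server = http.createServer((_req, res) => res.end("ok")); // placeholder handler

const socketPath = process.env.UNIX_SOCKET;
if (socketPath) {
  // Remove a stale socket file left over from a previous run, then bind to it.
  if (fs.existsSync(socketPath)) fs.unlinkSync(socketPath);
  server.listen(socketPath, () => {
    // Apply the requested permissions, e.g. UNIX_SOCKET_MODE=0755.
    const mode = process.env.UNIX_SOCKET_MODE;
    if (mode) fs.chmodSync(socketPath, parseInt(mode, 8));
  });
} else {
  // No socket path configured: fall back to plain TCP.
  server.listen(Number(process.env.PORT ?? 3000));
}
```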

If this is already possible, then my sincere apologies for being blind, but I wasn't able to find it mentioned anywhere in the documentation.

Describe alternatives you've considered:

I'm currently just running the main container under MACVLAN to expose it to my other MACVLAN containers, with an extra internal bridge network for communication between VoidAuth and Postgres. Nothing problematic, it's just me segmenting two things with two separate networks when maybe it could be one, or even none. (The DB_HOST env would need Postgres socket support too, I suppose, if "none" is the goal; see the sketch below.)
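
On the DB side, connecting over the Postgres unix socket is already supported by common clients. A minimal sketch assuming a node-postgres-style client, where passing a directory path as the host selects the socket in that directory instead of TCP (the paths, database name, and user are placeholders):

```typescript
import { Client } from "pg";

// When `host` is a filesystem path, node-postgres connects via the Unix
// socket inside that directory (e.g. /var/run/postgresql/.s.PGSQL.5432)
// rather than opening a TCP connection.
const client = new Client({
  host: process.env.DB_HOST ?? "/var/run/postgresql", // socket directory, not a hostname
  database: "voidauth",
  user: "voidauth",
});

await client.connect();
const { rows } = await client.query("SELECT 1 AS ok");
console.log(rows[0].ok);
await client.end();
```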

Additional context:

I'm not knowledgeable enough about how complex something like this would be to implement, or about how unix domain sockets work under the hood, to go into much more detail, so if it's not feasible then that's fine too.

Thanks for creating and maintaining this project!

Unrelated question: is there a reason the image runs as user 0:0 by default? It doesn't seem to have any issues running as any other user, as long as the configuration directory is mounted. I run mine as user 1100:1100 with DropCapability=all and NoNewPrivileges=true, and it likes it here. I can see it's for some 'backwards compatibility', but that doesn't really say much. Is it perchance for older images that used to run as root? 🤔
