SSL problems with cache-refreshes on a Symfony proxy-client #283
i mostly work with varnish, which is HTTP only anyway. HTTPS termination happens on a load balancer, and we usually have direct access to the varnish server. can you make your server accept HTTP traffic from localhost? if not, i am indeed not sure how this can safely be done. otherwise maybe you could set curl options to not validate the certificate - if the routing that your web server is using is spoofed, you have a big problem anyway. and you only send invalidation requests, which at worst could reveal non-public URLs to an attacker - but that attacker could likely do much worse things if they manage to break your routing. @lolautruche afaik you use the symfony cache more often, do you have input on this question? regarding documentation, we could add some notes about HTTPS communication to http://foshttpcache.readthedocs.org/en/latest/proxy-clients.html once we figure out here what the right way to do this is.
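For illustration, this is what disabling certificate validation would look like with plain PHP curl (the guzzle client would need the equivalent options; the URL is a placeholder):

```php
<?php
// Sketch: skip certificate validation entirely for an invalidation
// request, with the security trade-off discussed above.
$ch = curl_init('https://127.0.0.1/path/to/invalidate');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PURGE');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // don't verify the certificate chain
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);     // don't check the certificate's host name
curl_exec($ch);
curl_close($ch);
```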
@dbu Thanks, we implemented your suggestion to make the HTTPS server accept HTTP over localhost. This brings us one step further, as the '51: Unable to communicate securely ...' error is gone, and a wget on 127.0.0.1 now shows results. Unfortunately, the cache still does not get refreshed properly. An HTTP test server, with exactly the same setup, refreshes just fine. The 'proxy_client.symfony.base_url' config setting seems like the obvious culprit. Maybe this is for a new thread, but as it is related to SSL it may be better to keep it here.
looking at https://github.com/FriendsOfSymfony/FOSHttpCache/blob/master/src/SymfonyCache/PurgeSubscriber.php#L103 i think you are indeed correct. $request->getUri() does include the scheme, and the default store does take the scheme into account. @lolautruche @joelwurtz would one of you have an idea what we should do here? should we allow some extra header to specify https? or should we always do a second call to purge the other scheme variant as well?
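To illustrate the point (the domain is a placeholder), getUri() on Symfony's Request keeps the scheme, so the default store sees two distinct cache entries for the same path:

```php
<?php

use Symfony\Component\HttpFoundation\Request;

// The same path requested over http and over https yields two different
// URIs, and therefore two different cache keys in the default store.
$http  = Request::create('http://www.example.com/some/page');
$https = Request::create('https://www.example.com/some/page');

echo $http->getUri();  // http://www.example.com/some/page
echo $https->getUri(); // https://www.example.com/some/page
```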
@wimklerkx until we figure this out, your workaround would be to extend the PurgeSubscriber and override handlePurge with a version that replaces http by https for doing the purge, and then use that subscriber instead of the default one when creating your symfony cache kernel.
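A rough sketch of that workaround, with the access checks of the real handlePurge omitted for brevity; apart from the class and event names, the purge logic shown here is an assumption, not the library's code:

```php
<?php

use FOS\HttpCache\SymfonyCache\CacheEvent;
use FOS\HttpCache\SymfonyCache\PurgeSubscriber;
use Symfony\Component\HttpFoundation\Response;

class HttpsPurgeSubscriber extends PurgeSubscriber
{
    public function handlePurge(CacheEvent $event)
    {
        $request = $event->getRequest();

        // ... method and client access checks as in the parent class ...

        // Purge the https variant of the URI instead of the http one.
        $uri = preg_replace('#^http://#', 'https://', $request->getUri());
        $purged = $event->getKernel()->getStore()->purge($uri);

        $event->setResponse(new Response('', $purged ? 200 : 404));
    }
}
```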
About SSL, the first bug may come from SNI validation. AFAIK when doing an ssl request there is a clear (not encrypted) header which indicates the host; it's used to know which certificate to use (useful when dealing with multiple certificates). However if you do a request against 127.0.0.1 and not a domain name, there is no information about the requested host, so it will fail as no certificate can be found for 127.0.0.1. There are multiple workarounds for that: indicate the real domain name (and force ip matching in /etc/hosts if it cannot resolve), or indicate the peer_name option in the curl context of guzzle (don't really know if that's possible, maybe @dbu knows more about it). On the other subject, IMO the cache key must not be written with the scheme of the request as it's irrelevant; in the meantime i think clearing both schemes should be OK (but the final fix should be registering the cache entry without the scheme).
@joelwurtz with invalidation requests, using a domain name only works if there is only one cache server. the idea of using the IP and a Host header is that we send the request to each server. the configuration is like this: we send a request to an IP, but send the Host based on the base_url. i assume the problem is that the curl client validates the certificate based on the IP that was used for the request rather than the Host header we sent along. is that correct @wimklerkx? it looks like a way around this would be [to force the IP on curl](http://serverfault.com/questions/443949/how-to-test-a-https-url-with-a-given-ip-address), which translates to CURLOPT_RESOLVE in php. however, with the upcoming version 2, we abstract from guzzle and hence would need to find a way to do that with Httplug... i created php-http/plugins#60 to discuss that. anyways, i think for now: if you have only one host, do not use the IP but the DNS name instead, so that you get the correct certificate. otherwise, i would configure the guzzle client to ignore certificate validation and use https for the requests until we find a solution. i wonder if we should change the purge call to purge both http and https, wdyt @ddeboer? it feels like an odd idea to only invalidate one but not the other. with varnish, this distinction can not exist, as ssl termination happens before and varnish only looks at domain and path but not the protocol for the cache entries.
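For illustration, a minimal sketch of the CURLOPT_RESOLVE approach with plain PHP curl; domain, path, and IP are placeholders:

```php
<?php
// Force curl to resolve the domain to a specific cache server's IP. The
// request then goes to that server, while certificate validation still
// happens against the real domain name.
$ch = curl_init('https://www.example.com/path/to/invalidate');
curl_setopt_array($ch, [
    CURLOPT_RESOLVE        => ['www.example.com:443:127.0.0.1'],
    CURLOPT_CUSTOMREQUEST  => 'PURGE',
    CURLOPT_RETURNTRANSFER => true,
]);
curl_exec($ch);
curl_close($ch);
```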
No need for CURLOPT_RESOLVE, only setting the peer_name option (with the domain name as the value) in the ssl context options (http://php.net/manual/fr/context.ssl.php) should be enough.
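A sketch of the same idea with a PHP stream context; again, domain and IP are placeholders:

```php
<?php
// Connect to the cache server's IP, but validate the certificate against
// the real domain name via 'peer_name', and send a matching Host header.
$context = stream_context_create([
    'http' => [
        'method' => 'PURGE',
        'header' => "Host: www.example.com\r\n",
    ],
    'ssl' => [
        'peer_name' => 'www.example.com',
    ],
]);

file_get_contents('https://127.0.0.1/path/to/invalidate', false, $context);
```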
@joelwurtz If IP here means the value set in 'proxy_client.symfony.servers', then yes, that is correct.
ok, then at least we understand the problem ;-) do you have multiple web servers @wimklerkx or only one? could you use the domain name instead of the IP in the servers setting?
@dbu
if you expect significant load on your system, i would recommend considering varnish or another reverse proxy. the symfony reverse proxy is kind of a "poor man's reverse proxy" for when you can't install additional software on shared servers. if you have several servers, you probably can also install custom things on them. on cache hits, varnish would be much more efficient and a lot faster, allowing you to scale better. that said, the idea joel and i put up was using CURLOPT_RESOLVE or peer_name in the ssl context for each of the target servers. then you would be using the domain name, but send to the correct ip directly. you might need to extend some of the FOSHttpCache classes to get this to work, however.
@lolautruche as you care about the symfony cache implementation as well: what do you think about the issue that the cache kernel differentiates cache entries between http and https? invalidating only invalidates the protocol the invalidation is sent in. should it invalidate both? i see no use case for not invalidating both...
AFAICT I don't see any use case for not invalidating both either, so I think we can safely clear cache ignoring the scheme. However, it is critical to keep the host name, since eZ can have multiple hosts configured with the same URI, but different design variations. WDYT @bdunogier @andrerom?
yes, we definitely want to keep respecting the host name. the tricky bit is that the cache entries use the full uri including http(s) for the hash. so i think we need to trigger invalidation twice in the invalidation listener, probably with a string replace on the uri. should we add an option for this or just hardcode it? if i know i only ever get one of the two, it's a waste of time looking for a cache entry that does not exist.
yes, the scheme is not important on our side (for matching), so i don't think there should be any problem for us besides checking the parts that override/enhance this on our side.
@wimklerkx do you want to do a pull request to change the cache invalidation listener to invalidate both http and https? i think we should add an option like "strict_scheme" to not try to invalidate both, but default to invalidating both.
Well, it's technically possible to define a siteaccess matcher using the scheme... It's not present by default, but this should definitely be documented if we go this way.
As it would be possible to use the scheme to match a siteaccess in eZ, I suggest using an option then. By default, we would ignore the scheme when purging.
@dbu Thanks for the suggestion. Such a solution is the ultimate goal indeed. We are using the Symfony Cache for now to iron out any trouble, like this https problem. Then, when all works fine, we will migrate to Varnish.
Tried some approaches to refresh both http and https cache entries, but no solution as of yet: using FOS\HttpCacheBundle\CacheManager::refreshPath() directly with explicit http and https paths does not refresh the https path. Adding the scheme to the Guzzle request in FOS\HttpCache\ProxyClient\AbstractProxyClient::sendRequests() also does not help. It'll be a few days before there's time to dig deeper; to be continued.
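For reference, a sketch of that first attempt, assuming controller code with access to the service container; the paths are placeholders:

```php
<?php
// Refresh both scheme variants explicitly via the bundle's cache manager.
// This was reported above as refreshing the http entry but not the https one.
$cacheManager = $container->get('fos_http_cache.cache_manager');

$cacheManager->refreshPath('http://www.example.com/some/route');
$cacheManager->refreshPath('https://www.example.com/some/route');

// Invalidation requests are queued and only sent on flush.
$cacheManager->flush();
```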
Eventually we followed @dbu's advice and switched to using Varnish as proxy instead of SymfonyCache. HTTP cache refreshes were implemented following the FOSHttpCacheBundle docs, and a tutorial helped us have SSL terminate in front of Varnish, using nginx. So the issue with using SymfonyCache with SSL still stands, but our problem was solved.
i think instead of sending two requests, we should make https://github.com/FriendsOfSymfony/FOSHttpCache/blob/master/src/SymfonyCache/PurgeSubscriber.php#L103 do the purge with both http and https unless disabled with a new option. that will put less load on the system and is simple to implement. we should then document that people force http for talking to the cache if they use the IP instead of a named host, to avoid ssl certificate mismatches.
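A sketch of the scheme handling such a change would need; the option gating it is hypothetical and not part of the library:

```php
<?php

// Hypothetical helper: given the purged URI, return both scheme variants
// so the store can purge each of them.
function schemeVariants($uri)
{
    if (0 === strpos($uri, 'https://')) {
        return [$uri, 'http://'.substr($uri, 8)];
    }

    return [$uri, 'https://'.substr($uri, 7)];
}

// Usage inside handlePurge (sketch, assuming a hypothetical
// 'purge_both_schemes' option that defaults to true):
// foreach (schemeVariants($request->getUri()) as $variant) {
//     $purged = $store->purge($variant) || $purged;
// }
```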
@wimklerkx would you want to work on that? i can understand if not, if you use varnish now...
actually should be fixed in symfony itself: symfony/symfony#21582
i also created a workaround on our end that we can put into 2.0 if the fix is not quickly accepted in symfony core.
Locally, and on an http server, the bundle works fine refreshing cached routes on a Symfony proxy client.
When porting the project to an https server, though, cache refreshes run into the following error:
Request to caching proxy at 127.0.0.1 failed with message "[curl] 51: Unable to communicate securely with peer: requested domain name does not match the server's certificate."
Fiddling with the 'proxy_client.symfony.servers' config has not resulted in a working solution.
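For context, a sketch of the configuration in question; the values are placeholders, the keys are the settings named in this issue:

```yaml
fos_http_cache:
    proxy_client:
        symfony:
            servers: ['127.0.0.1']      # where invalidation requests are sent
            base_url: www.example.com   # used to build the Host header
```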
Solutions for similar problems (with wget, curl, guzzle) found around the internet are all pretty cumbersome, often compromising security.
As this is a common use case, I suspect there is a straightforward solution, but I just can't find any mention of it in the documentation.