Add the AMPGO algorithm #466
Conversation
@reneeotten Thanks!!! I'll have to look into this in more detail, but it looks like very nice work. I think I agree with you on the `fmin` question.
@newville I'll wait a bit to see if you (and hopefully others!) have more comments and then I'll go over it once more and try to finish it up. We can reach out to the author of the Python implementation, but it is implemented as described in the paper. I agree that it's not ideal to have no convergence criterion other than going through a fixed number of iterations/function evaluations, but I don't think there is much one can do here, since there is no way to determine whether you have reached a global minimum. In fact, that's also how `basinhopping` works: by default it simply runs a fixed number of iterations (`niter`).
@reneeotten I've read the paper more carefully and the code too, and it seems that the fixed number of iterations (that is, a stopping rule that does not depend on knowing `fmin`) is the right approach here.
- add AMPGO code from Andrea Gavana
- remove main() function
- add module docstring
- remove OPENOPT solvers
- add/update function docstrings
- update "disp" statements (now boolean)
- add comments to the code
Do not pass "bounds" to the underlying minimizer anymore; use the "lmfit" way of making sure that parameters stay within bounds.
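For context, a minimal sketch of what the "lmfit way" looks like from the user side (the model and parameter names are made up for illustration): bounds are set on the `Parameters` themselves, not passed to the solver.

```python
import numpy as np
import lmfit

def residual(params, x, data):
    # residual array for a simple exponential-decay model
    amp = params['amp'].value
    tau = params['tau'].value
    return data - amp * np.exp(-x / tau)

params = lmfit.Parameters()
params.add('amp', value=5.0, min=0.0, max=20.0)   # bounds live on the Parameter,
params.add('tau', value=1.0, min=0.01, max=10.0)  # not on the solver call

x = np.linspace(0, 5, 101)
data = 7.5 * np.exp(-x / 2.5) + np.random.normal(scale=0.1, size=x.size)

result = lmfit.minimize(residual, params, args=(x, data), method='ampgo')
print(lmfit.fit_report(result))
```

lmfit enforces these bounds through its internal parameter transformation, which is why the underlying minimizer no longer needs a "bounds" argument.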
When fitting a function (i.e., in the test) AMPGO returns an array instead of a scalar for the function value.
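This matters because AMPGO is a scalar minimizer and expects a single cost value per evaluation. A minimal sketch of the reduction involved (toy functions, not the actual lmfit internals):

```python
import numpy as np

def residual(params):
    # toy residual array; a real one would compare a model to data
    return np.array([params[0] - 1.0, params[1] - 2.0])

def scalar_cost(params):
    r = residual(params)
    return (r * r).sum()   # reduce the array to one number for AMPGO

print(scalar_cost(np.array([0.0, 0.0])))  # -> 5.0
```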
…uations. The optimization will stop after the specified number of iterations ("totaliter").
@newville okay, I have updated this PR so that it will merge cleanly. It now uses the … It might be good to get a release candidate out with these new options and solicit some real-world testing to resolve some issues before the actual release. Also, I am pretty sure that with some of my last PRs not all the documentation/docstrings were updated perfectly, so I would like to fix that as well.
slightly relaxed the …
@reneeotten Great! Will merge now. Unless there are objections, I'll tag this (perhaps not until tomorrow) as 0.9.10rc1, aiming for 0.9.10 by the end of May. I think we may be very close to ready for 1.0.0, and can start to talk about what is left for that...
This PR adds the AMPGO algorithm to lmfit as discussed in Issue #440. It follows the original Python implementation by Andrea Gavana - I have only updated/simplified the code here and there, without changing the actual outcome. It's not ready yet, but I would appreciate some comments/suggestions.
I am still not completely sure about the benchmark comparisons shown on the website regarding the number of function evaluations, convergence, and such. For example, the ampgo function parameter `maxfunevals` is used as `maxiter` in the local solver, which isn't the same as function evaluations. Also, the only convergence criterion in the ampgo code is `best_f < fmin + glbtol` (see the sketch after the questions below); otherwise the code will just continue until it reaches `totaliter` or `maxfunevals`. From the code it seems he might have used the known `fmin` value in the benchmarks to determine when to stop, which (I think) isn't a completely fair comparison to `differential_evolution` and `basinhopping`, as such a convergence criterion doesn't exist there (i.e., they'll probably need more function evaluations to figure out that there is no improvement anymore and decide to stop). But I don't know how big the difference would be... it might not matter much for the overall picture.

Anyhow, I did run the scipy global benchmarks with the default settings for the three global optimizers (without using the `fmin` value, for a fairer comparison). Provided that I did this correctly, and looking only at the 100%-success cases for all three optimizers, AMPGO definitely finds the solution in fewer iterations. However, there are also many cases where it doesn't find the correct solution, and this might have to do with the fact that if you don't specify `maxfunevals` it will use `max(100, 10*npar)`, which means for most test functions a maximum of 100 iterations (and typically a few more function evaluations). From the graph on the above-mentioned website it seems that increasing this number a bit more will improve the situation quite a bit, so I am playing around with it for the scipy benchmarks to see if there is a sensible default value.

Sorry for the amount of text... but in short, a few questions:

- should we remove the `fmin` parameter and the associated convergence criterion from the code (in almost all situations you'll not know the function value at the optimal parameters)?
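For concreteness, a sketch of the two behaviors described above, using the variable names quoted from the ampgo code; this is an illustration, not the actual source, and the `glbtol` default shown is an assumption:

```python
def converged(best_f, fmin, glbtol=1e-5):
    # the only convergence test in the ampgo code: requires knowing fmin a priori
    return best_f < fmin + glbtol

def default_maxfunevals(npar):
    # budget used when maxfunevals is not specified
    return max(100, 10 * npar)

print(converged(best_f=0.2, fmin=0.0))   # False -> keep iterating
print(default_maxfunevals(npar=3))       # 100
```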