
Conversation

@reneeotten
Copy link
Contributor

This PR adds the AMPGO algorithm to lmfit as discussed in Issue #440. It follows the original Python implementation by Andrea Gavana - I have only updated/simplified the code here and there, without changing the actual outcome. It's not ready yet, but I would appreciate some comments/suggestions.

I am still not completely sure about the benchmark comparisons shown on the website regarding the number of function evaluations, convergence and such. For example, the ampgo function parameter maxfunevals is used as maxiter in the local solver, which isn't the same as function evaluations. Also, the only convergence criterion in the ampgo code is best_f < fmin + glbtol; otherwise the code just continues until it reaches totaliter or maxfunevals. From the code it seems he might have used the known fmin value in the benchmarks to determine when to stop, which (I think) isn't a completely fair comparison to differential_evolution and basinhopping, since such a convergence criterion doesn't exist there (i.e., they'll probably need more function evaluations to figure out that there is no further improvement and decide to stop). But I don't know how big the difference would be... it might not matter much for the overall picture.
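To make the stopping behavior described above concrete, here is a minimal sketch of that logic (names follow the discussion; this is an illustration, not the lmfit code): the run terminates early only when the best value comes within `glbtol` of a known target `fmin`, otherwise it simply exhausts the iteration/evaluation budgets.

```python
def should_stop(best_f, fmin, glbtol, n_iter, totaliter, n_fev, maxfunevals):
    """Sketch of the AMPGO-style termination test discussed above."""
    # The only true convergence criterion: close enough to the known optimum.
    if best_f < fmin + glbtol:
        return True
    # Otherwise the loop just runs until a budget is spent.
    return n_iter >= totaliter or n_fev >= maxfunevals
```

This makes the fairness concern visible: without knowing `fmin`, the first branch never fires and the run always uses its full budget.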

Anyhow, I did run the scipy global benchmarks with the default settings for the three global optimizers (without using the fmin value, for a fairer comparison). Provided that I did this correctly, and looking only at the cases where all three optimizers succeed 100% of the time, AMPGO definitely finds the solution in fewer iterations. However, there are also many cases where it doesn't find the correct solution, and this might have to do with the fact that if you don't specify maxfunevals it will use max(100, 10*npar), which for most test functions means a maximum of 100 iterations (and typically a few more function evaluations). From the graph on the above-mentioned website it seems that increasing this number will improve the results quite a bit, so I am experimenting with this in the scipy benchmarks to see if there is a sensible default value.
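For reference, the default budget mentioned above can be sketched as follows (a hypothetical helper for illustration, mirroring the described behavior, not lmfit's API):

```python
def default_maxfunevals(npar, maxfunevals=None):
    """Sketch of AMPGO's reported default evaluation budget.

    If maxfunevals is not given, use max(100, 10 * npar), where npar is
    the number of fit parameters; benchmark functions with npar <= 10
    are therefore all capped at 100.
    """
    if maxfunevals is None:
        return max(100, 10 * npar)
    return maxfunevals
```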

Sorry for the amount of text... but, in short, a few questions:

  1. any comments/suggestions on the current implementation?
  2. should I remove the fmin parameter and the associated convergence criterion from the code? (In almost all situations you won't know the function value at the optimal parameters.)

@reneeotten
Copy link
Contributor Author

The test_itercb.py test now fails with the newest scipy (as discussed in #465). That should be fixed, but it's not related to the current changes in this PR.

@newville
Copy link
Member

newville commented Apr 7, 2018

@reneeotten Thanks!!! I'll have to look into this in more detail, but it looks like very nice work.

I think I agree with you on fmin: no one will know this value, so it seems best to remove it. That makes it look like the fit would only stop after a fixed number of iterations... which seems less than ideal. Maybe we should reach out to the original author?

@reneeotten
Copy link
Contributor Author

@newville I'll wait a bit to see if you (and hopefully others!) have more comments, and then I'll go over it once more and try to finish it up. We can reach out to the author of the Python implementation, but it is implemented as described in this paper.

I agree that it's not ideal to have no convergence criterion other than going through a fixed number of iterations/function evaluations. But I don't think there is much one can do here, since there is no way to determine whether you have reached a global minimum. In fact, that's also how basinhopping is implemented, where the default number of iterations is set to 100. There is an additional parameter there called niter_success (defaulting to None, though) that can be used to "Stop the run if the global minimum candidate remains the same for this number of iterations". We could think about adding something similar; or we just leave the implementation as is and set some sensible defaults for maxfunevals and totaliter; or we leave only the option to set the maximum number of iterations, just as for basinhopping.
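For comparison, the basinhopping behavior described above looks like this in practice (the objective function is the standard 1-D example from the scipy docs; seed and parameter values here are just for illustration):

```python
import numpy as np
from scipy.optimize import basinhopping

def f(x):
    # 1-D multi-minimum test function from the scipy basinhopping docs.
    return np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0]

# niter defaults to 100; niter_success stops the run early once the best
# candidate has not changed for that many consecutive iterations.
res = basinhopping(f, x0=[1.0], niter=100, niter_success=10, seed=42)
print(res.x, res.fun, res.nit)
```

Something analogous for AMPGO would let the run end once the tunneling phase stops producing better minima, instead of always exhausting totaliter.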

@newville
Copy link
Member

@reneeotten I've read the paper more carefully and the code too, and it seems that the fixed number of iterations (that is, not depending on fmin) does make some sense, as this is really for the "tunneling" part of the code, essentially to find starting points for the finishing refinement that are far apart. That is to say, I'm OK with this...

- add AMPGO code from Andrea Gavana
- remove main() function
- add module docstring
- remove OPENOPT solvers
- add/update function docstrings
- update "disp" statements (now boolean)
- add comments to the code
Do not pass "bounds" to the underlying minimizer anymore; use the
"lmfit" way of making sure that parameters stay within bounds.
When fitting a function (i.e., in the test) AMPGO returns an array
instead of a scalar for the function value.
…uations.

The optimization will stop after the specified number of iterations
("totaliter").
@reneeotten
Copy link
Contributor Author

@newville okay, I have updated this PR so that it will merge cleanly. It now uses apply_bounds_transformation to make sure that the parameters stay within bounds for all local solvers. In addition, the default maxfunevals=None sets the maximum number of function evaluations to np.inf, so that the minimization will stop after the specified number of iterations (totaliter). I think that is the cleanest way of doing it right now; we might get a better idea of a reasonable/optimal value for the function evaluations at some later point (that will require a bit of testing/benchmarking for which I don't have time right now). I think it should all work as intended, but I would appreciate it if you could look it over once more before merging.
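As background on the bounds handling mentioned above: lmfit documents a MINUIT-style sine transformation for doubly-bounded parameters, so the local solver works on an unconstrained internal variable that always maps back into [vmin, vmax]. A minimal sketch of that transform (function names here are illustrative, not lmfit's internal API):

```python
import numpy as np

def internal_to_external(x_int, vmin, vmax):
    """Map an unbounded internal value into the bounded interval."""
    return vmin + (np.sin(x_int) + 1.0) * (vmax - vmin) / 2.0

def external_to_internal(x_ext, vmin, vmax):
    """Inverse map, used to seed the solver from a bounded start value."""
    return np.arcsin(2.0 * (x_ext - vmin) / (vmax - vmin) - 1.0)
```

Because sin() is bounded, any internal value the solver proposes stays within the parameter bounds, which is why "bounds" no longer needs to be passed to the underlying minimizer.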

It might be good to get some release candidate out with these new options and solicit some real-world testing to resolve some issues before the actual release. Also, I am pretty sure with some of my last PRs not all the documentation/docstrings were updated perfectly, so I would like to fix that as well.

@reneeotten
Copy link
Contributor Author

slightly relaxed the assert_allclose test

@newville
Copy link
Member

newville commented May 1, 2018

@reneeotten Great! Will merge now. Unless there are objections, I'll tag this (perhaps not until tomorrow) as 0.9.10rc1, aiming for 0.9.10 by the end of May.

I think we may be very close to ready for 1.0.0, and can start to talk about what is left for that...

@newville newville merged commit b7d87b9 into lmfit:master May 1, 2018
@reneeotten reneeotten deleted the ampgo branch May 1, 2018 15:37