Useful Optimizer is a dedicated collection of optimization algorithms for numeric problems. It aims to provide a comprehensive set of optimization techniques that are easy to use and to integrate into any project.
The current version of Useful Optimizer is 0.1.0.
- A wide range of optimization algorithms.
- Easy to use and integrate.
- Suitable for various numeric problems.
- Fun to experiment and play with the algorithms
To install Useful Optimizer, you can use pip:
```bash
pip install git+https://github.com/Anselmoo/useful-optimizer
```
Here's a basic example of how to use Useful Optimizer:
```python
from opt.benchmark import CrossEntropyMethod
from opt.benchmark.functions import shifted_ackley

optimizer = CrossEntropyMethod(
    func=shifted_ackley,
    dim=2,
    lower_bound=-12.768,
    upper_bound=+12.768,
    population_size=100,
    max_iter=1000,
)
best_solution, best_fitness = optimizer.search()

print(f"Best solution: {best_solution}")
print(f"Best fitness: {best_fitness}")
```
You can also use the new gradient-based optimizers:
```python
from opt.adamw import AdamW
from opt.benchmark.functions import shifted_ackley
from opt.bfgs import BFGS
from opt.stochastic_gradient_descent import SGD

# Gradient-based optimization
sgd = SGD(func=shifted_ackley, lower_bound=-12.768, upper_bound=12.768, dim=2, learning_rate=0.01)
best_solution, best_fitness = sgd.search()

# Adam variant with decoupled weight decay
adamw = AdamW(func=shifted_ackley, lower_bound=-12.768, upper_bound=12.768, dim=2, weight_decay=0.01)
best_solution, best_fitness = adamw.search()

# Quasi-Newton method
bfgs = BFGS(func=shifted_ackley, lower_bound=-12.768, upper_bound=12.768, dim=2, num_restarts=10)
best_solution, best_fitness = bfgs.search()
```
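The optimizers in the examples above all expose the same `search()` interface, so you can run several of them on the same benchmark and compare the results. A minimal sketch, reusing only the classes and arguments already shown:

```python
from opt.adamw import AdamW
from opt.benchmark.functions import shifted_ackley
from opt.bfgs import BFGS
from opt.stochastic_gradient_descent import SGD

optimizers = {
    "SGD": SGD(func=shifted_ackley, lower_bound=-12.768, upper_bound=12.768, dim=2, learning_rate=0.01),
    "AdamW": AdamW(func=shifted_ackley, lower_bound=-12.768, upper_bound=12.768, dim=2, weight_decay=0.01),
    "BFGS": BFGS(func=shifted_ackley, lower_bound=-12.768, upper_bound=12.768, dim=2, num_restarts=10),
}

for name, optimizer in optimizers.items():
    best_solution, best_fitness = optimizer.search()
    print(f"{name}: best fitness {best_fitness:.6f} at {best_solution}")
```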
The current version of Useful Optimizer includes 54 optimization algorithms, each implemented as a separate module. Each optimizer is linked to its corresponding source code for easy reference and study.
🧠 Gradient-Based Optimizers
These optimizers use gradient information to guide the search process and are commonly used in machine learning and deep learning applications. A minimal sketch of the Adam update rule appears after this list.
- Adadelta - An adaptive learning rate method that uses only first-order information
- Adagrad - Adapts the learning rate to the parameters, performing smaller updates for frequently occurring features
- Adaptive Moment Estimation (Adam) - Combines advantages of AdaGrad and RMSProp with bias correction
- AdaMax - Adam variant using infinity norm for second moment estimation
- AdamW - Adam with decoupled weight decay for better regularization
- AMSGrad - Adam variant with non-decreasing second moment estimates
- BFGS - Quasi-Newton method approximating the inverse Hessian matrix
- Conjugate Gradient - Iterative method using conjugate search directions, originally for linear systems and widely applied to nonlinear optimization
- L-BFGS - Limited-memory version of BFGS for large-scale optimization
- Nadam - Nesterov-accelerated Adam combining Adam with Nesterov momentum
- Nesterov Accelerated Gradient - Accelerated gradient method with lookahead momentum
- RMSprop - Adaptive learning rate using moving average of squared gradients
- SGD with Momentum - SGD enhanced with momentum for faster convergence
- Stochastic Gradient Descent - Fundamental gradient-based optimization algorithm
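To make the gradient-based family concrete, here is a minimal sketch of the Adam update rule in plain NumPy. It is independent of this package's Adam implementation; the toy objective, gradient, and hyperparameters are illustrative only.

```python
import numpy as np

def grad(x):
    # Gradient of the toy objective f(x) = sum(x**2); swap in your own function.
    return 2.0 * x

x = np.array([3.0, -2.0])
m = np.zeros_like(x)  # first-moment (mean) estimate
v = np.zeros_like(x)  # second-moment (uncentered variance) estimate
alpha, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 501):
    g = grad(x)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat = m / (1 - beta1**t)  # bias-corrected first moment
    v_hat = v / (1 - beta2**t)  # bias-corrected second moment
    x = x - alpha * m_hat / (np.sqrt(v_hat) + eps)

print(x)  # approaches the minimizer at the origin
```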
🦋 Nature-Inspired Metaheuristics
These algorithms are inspired by natural phenomena and biological behaviors to solve optimization problems. A bare-bones particle swarm sketch appears after this list.
- Ant Colony Optimization - Mimics ant behavior for finding optimal paths
- Artificial Fish Swarm Algorithm - Simulates fish behavior for global optimization
- Bat Algorithm - Inspired by echolocation behavior of microbats
- Bee Algorithm - Based on honey bee food foraging behavior
- Cat Swarm Optimization - Models cat behavior with seeking and tracing modes
- Cuckoo Search - Based on obligate brood parasitism of cuckoo species
- Eagle Strategy - Inspired by hunting behavior of eagles
- Firefly Algorithm - Based on flashing behavior of fireflies
- Glowworm Swarm Optimization - Inspired by glowworm behavior
- Grey Wolf Optimizer - Mimics leadership hierarchy and hunting of grey wolves
- Particle Swarm Optimization - Simulates social behavior of bird flocking or fish schooling
- Shuffled Frog Leaping Algorithm - Inspired by memetic evolution of frogs searching for food
- Squirrel Search Algorithm - Based on caching behavior of squirrels
- Whale Optimization Algorithm - Simulates social behavior of humpback whales
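As a concrete example from this family, below is a bare-bones Particle Swarm Optimization loop in plain NumPy. The sphere objective, swarm size, and coefficients are illustrative assumptions, not the package's defaults.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

rng = np.random.default_rng(42)
dim, n_particles, iters = 2, 30, 200
lo, hi = -5.0, 5.0
w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social weights

pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                              # per-particle best positions
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()      # swarm-wide best position

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(gbest, sphere(gbest))
```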
🧬 Evolutionary and Population-Based Algorithms
These algorithms use principles of evolution and population dynamics to find optimal solutions. A compact differential evolution sketch appears after this list.
- CMA-ES - Covariance Matrix Adaptation Evolution Strategy for continuous optimization
- Cultural Algorithm - Evolutionary algorithm based on cultural evolution
- Differential Evolution - Population-based algorithm using biological evolution mechanisms
- Estimation of Distribution Algorithm - Uses probabilistic model of candidate solutions
- Genetic Algorithm - Inspired by Charles Darwin's theory of natural evolution
- Imperialist Competitive Algorithm - Based on imperialistic competition
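For illustration, here is a compact sketch of the classic DE/rand/1/bin differential evolution scheme in plain NumPy. The toy objective and the control parameters F (differential weight) and CR (crossover rate) are illustrative assumptions.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

rng = np.random.default_rng(0)
dim, pop_size, iters = 2, 20, 300
lo, hi = -5.0, 5.0
F, CR = 0.8, 0.9

pop = rng.uniform(lo, hi, (pop_size, dim))
fitness = np.array([sphere(ind) for ind in pop])

for _ in range(iters):
    for i in range(pop_size):
        # Mutation: combine three distinct individuals, none equal to i.
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), lo, hi)
        # Binomial crossover between the mutant and the current individual.
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True  # keep at least one mutant component
        trial = np.where(cross, mutant, pop[i])
        # Greedy selection.
        f_trial = sphere(trial)
        if f_trial < fitness[i]:
            pop[i], fitness[i] = trial, f_trial

best = pop[np.argmin(fitness)]
print(best, fitness.min())
```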
🎯 Local Search and Classical Methods
Traditional optimization methods, including local search techniques and classical mathematical approaches. A short simulated annealing sketch appears after this list.
- Hill Climbing - Local search algorithm that continuously moves toward increasing value
- Nelder-Mead - Derivative-free simplex method for optimization
- Powell's Method - Derivative-free optimization using conjugate directions
- Simulated Annealing - Probabilistic technique mimicking the annealing process in metallurgy
- Tabu Search - Metaheuristic using memory structures to avoid cycles
- Variable Depth Search - Local search that adaptively varies the depth of the move sequences it explores
- Variable Neighbourhood Search - Metaheuristic that systematically switches between neighborhood structures during the search
- Very Large Scale Neighborhood Search - Explores very large neighborhoods efficiently
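As a short example from this family, below is a simulated annealing sketch with a geometric cooling schedule in plain NumPy. The objective, neighborhood step size, and cooling rate are illustrative assumptions.

```python
import numpy as np

def objective(x):
    return float(np.sum(x**2))

rng = np.random.default_rng(1)
x = rng.uniform(-5.0, 5.0, 2)        # current solution
best, best_val = x.copy(), objective(x)
temp, cooling = 1.0, 0.995

for _ in range(5000):
    candidate = np.clip(x + rng.normal(0.0, 0.5, x.shape), -5.0, 5.0)
    delta = objective(candidate) - objective(x)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        x = candidate
        if objective(x) < best_val:
            best, best_val = x.copy(), objective(x)
    temp *= cooling                  # geometric cooling schedule

print(best, best_val)
```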
🔬 Physics and Mathematical-Inspired Algorithms
Algorithms inspired by physical phenomena and mathematical concepts. A Sine Cosine Algorithm sketch appears after this list.
- Colliding Bodies Optimization - Physics-inspired method based on collision and explosion
- Harmony Search - Music-inspired metaheuristic optimization
- Sine Cosine Algorithm - Based on mathematical sine and cosine functions
- Stochastic Diffusion Search - Population-based search inspired by diffusion processes
- Stochastic Fractal Search - Inspired by fractal shapes and Brownian motion
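For a concrete example, here is a minimal sketch of the Sine Cosine Algorithm's position update in plain NumPy, following the commonly cited formulation (r1 shrinks linearly, r2–r4 are random per dimension). All settings are illustrative and independent of this package.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

rng = np.random.default_rng(7)
dim, n_agents, iters = 2, 30, 300
lo, hi = -5.0, 5.0
a = 2.0  # controls the linearly decreasing amplitude r1

pos = rng.uniform(lo, hi, (n_agents, dim))
best = min(pos, key=sphere).copy()

for t in range(iters):
    r1 = a - t * (a / iters)
    for i in range(n_agents):
        r2 = rng.uniform(0.0, 2.0 * np.pi, dim)
        r3 = rng.uniform(0.0, 2.0, dim)
        r4 = rng.random(dim)
        step = np.where(
            r4 < 0.5,
            r1 * np.sin(r2) * np.abs(r3 * best - pos[i]),
            r1 * np.cos(r2) * np.abs(r3 * best - pos[i]),
        )
        pos[i] = np.clip(pos[i] + step, lo, hi)
        if sphere(pos[i]) < sphere(best):
            best = pos[i].copy()

print(best, sphere(best))
```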
📊 Statistical and Probabilistic Methods
Methods based on statistical inference and probabilistic approaches. A Cross Entropy Method sketch appears after this list.
- Cross Entropy Method - Monte Carlo method for importance sampling and optimization
- Particle Filter - Statistical filter for nonlinear state estimation
- Parzen Tree Estimator - Non-parametric density estimation method
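To illustrate the idea behind the Cross Entropy Method used in the quick-start example, here is a minimal NumPy sketch of its core loop: sample from a Gaussian, keep the elite fraction, and refit the sampling distribution. It is independent of the package's `CrossEntropyMethod` class, and the sample sizes are illustrative.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

rng = np.random.default_rng(3)
dim, n_samples, n_elite, iters = 2, 100, 10, 100
mean = rng.uniform(-5.0, 5.0, dim)   # initial sampling distribution
std = np.full(dim, 5.0)

for _ in range(iters):
    samples = rng.normal(mean, std, (n_samples, dim))
    scores = np.array([sphere(s) for s in samples])
    elite = samples[np.argsort(scores)[:n_elite]]              # best-scoring samples
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-12  # refit the distribution

print(mean, sphere(mean))
```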
🔧 Specialized and Constrained Optimization
Specialized algorithms for particular types of optimization problems. An augmented Lagrangian sketch appears after this list.
- Augmented Lagrangian Method - Method for solving constrained optimization problems
- Linear Discriminant Analysis - Statistical method for dimensionality reduction and classification
- Successive Linear Programming - Method for nonlinear optimization using linear approximations
- Trust Region - Robust optimization method using trusted model regions
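As an illustration of constrained optimization, below is a small sketch of the augmented Lagrangian method on a hypothetical toy problem (minimize x0² + x1² subject to x0 + x1 = 1). The problem, step size, and penalty weight are illustrative assumptions and not part of this package.

```python
import numpy as np

def f_grad(x):
    return 2.0 * x               # gradient of f(x) = x0**2 + x1**2

def g(x):
    return x[0] + x[1] - 1.0     # equality constraint g(x) = 0

g_grad = np.array([1.0, 1.0])    # gradient of the (linear) constraint

x = np.zeros(2)
lam, mu = 0.0, 10.0              # multiplier estimate and penalty weight

for _ in range(50):              # outer loop: update the multiplier
    for _ in range(200):         # inner loop: gradient descent on the augmented Lagrangian
        grad = f_grad(x) + (lam + mu * g(x)) * g_grad
        x = x - 0.05 * grad
    lam += mu * g(x)             # shift the remaining violation into the multiplier
    if abs(g(x)) < 1e-8:
        break

print(x)                         # approaches the constrained optimum (0.5, 0.5)
```

Each outer step moves the remaining constraint violation into the multiplier estimate, which is what lets the method converge without driving the penalty weight to infinity.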
Note
Not all of these algorithms are suitable for every type of optimization problem. Some are better suited to continuous problems, some to discrete problems, and others to specific problem classes such as quadratic programming or linear discriminant analysis.
Contributions to Useful Optimizer are welcome! Please read the contributing guidelines before getting started.
Warning
This project was generated with GitHub Copilot and may not be completely verified. Please use with caution and feel free to report any issues you encounter. Thank you!
Warning
Some parts still contain the legacy `np.random.rand` call. See also: https://docs.astral.sh/ruff/rules/numpy-legacy-random/