
optfunc

optfunc provides differentiable PyTorch optimization test functions ("optfuncs") and pytest-oriented helpers for evaluating first-order and second-order optimizers.

The package is published as optfuncs and imported as optfunc.

Install

For CPU-only use:

```shell
pip install "optfuncs[torch-cpu]"
```

With uv in this repository:

```shell
uv sync --extra torch-cpu
```

Supported Torch extras:

| Extra | Backend | Typical command |
| --- | --- | --- |
| torch-cpu | CPU, Linux/macOS/Windows | `uv sync --extra torch-cpu` |
| torch-cu118 | CUDA 11.8, Linux/Windows, PyTorch 2.7.x | `uv sync --extra torch-cu118` |
| torch-cu126 | CUDA 12.6, Linux/Windows | `uv sync --extra torch-cu126` |
| torch-cu128 | CUDA 12.8, Linux/Windows | `uv sync --extra torch-cu128` |
| torch-cu130 | CUDA 13.0, Linux/Windows | `uv sync --extra torch-cu130` |
| torch-rocm | ROCm 7.1, Linux | `uv sync --extra torch-rocm` |
| torch-xpu | Intel XPU, Linux/Windows | `uv sync --extra torch-xpu` |

Choose only one Torch extra at a time; the extras are configured as mutually exclusive in pyproject.toml.

Helper scripts:

```shell
# bash/zsh on Linux or macOS, and Git Bash/MSYS/Cygwin on Windows
./scripts/sync_torch_variant.sh cpu
./scripts/sync_torch_variant.sh cu130
./scripts/sync_torch_variant.sh rocm
./scripts/sync_torch_variant.sh xpu
```

```shell
# Windows PowerShell or PowerShell 7 on Windows/Linux/macOS
.\scripts\sync_torch_variant.ps1 cpu
.\scripts\sync_torch_variant.ps1 cu130
.\scripts\sync_torch_variant.ps1 rocm
.\scripts\sync_torch_variant.ps1 xpu
```

If Windows blocks local PowerShell scripts, run:

```shell
powershell -ExecutionPolicy Bypass -File .\scripts\sync_torch_variant.ps1 cu130
```

Platform notes:

  • CPU works on Linux, macOS, and Windows.
  • CUDA extras work on Linux and Windows.
  • ROCm works on Linux.
  • XPU works on Linux and Windows.
  • macOS should use torch-cpu in this project configuration.

The backend indexes follow the official uv PyTorch guide and the PyTorch install selector.

Optfunc Usage

The repository provides these built-in optfuncs:

| Registry name | Class | Known minimizer |
| --- | --- | --- |
| ackley | Ackley | all zeros |
| dixonprice | DixonPrice | recursive Dixon-Price optimum |
| griewank | Griewank | all zeros |
| levy | Levy | all ones |
| rastrigin | Rastrigin | all zeros |
| rosenbrock | Rosenbrock | all ones |
| rotatedhyperellipsoid | RotatedHyperEllipsoid | all zeros |
| schwefel | Schwefel | near 420.968746 in every coordinate |
| sphere | Sphere | all zeros |
| styblinskitang | StyblinskiTang | near -2.903534 in every coordinate |
| sumsquares | SumSquares | all zeros |
| trid | Trid | x_i = i * (d + 1 - i) with 1-based indexing |
| zakharov | Zakharov | all zeros |
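As a quick sanity check on the "known minimizer" column, the Trid formula can be verified with a standalone plain-Python evaluation of the standard Trid function (a sketch for illustration, not the library's implementation):

```python
# Standalone sketch (not optfunc's implementation): verify the Trid
# minimizer formula x_i = i * (d + 1 - i) for a small dimension.

def trid(x):
    """Trid function: sum((x_i - 1)^2) - sum(x_i * x_{i-1})."""
    d = len(x)
    square_term = sum((xi - 1.0) ** 2 for xi in x)
    cross_term = sum(x[i] * x[i - 1] for i in range(1, d))
    return square_term - cross_term

d = 4
x_star = [i * (d + 1 - i) for i in range(1, d + 1)]  # [4, 6, 6, 4]
f_star = trid(x_star)

# Matches the known closed form of the Trid minimum: -d * (d + 4) * (d - 1) / 6
print(x_star, f_star)  # [4, 6, 6, 4] -16.0
```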

Use a class directly when you know which function you want:

```python
import torch

from optfunc import Sphere

opt_func = Sphere(dim=8, dtype=torch.float64)
x = torch.zeros(8, dtype=torch.float64)

value = opt_func(x)
grad = opt_func.grad(x)
hessian = opt_func.hessian(x)
hvp = opt_func.hvp(x, torch.ones_like(x))

x_star = opt_func.global_minimizer()
distance = opt_func.distance_to_optimum(x)
```

Use OptFuncRegistry when a test should select an optfunc by name:

```python
from optfunc import OptFuncRegistry

opt_func = OptFuncRegistry.create("rosenbrock", dim=4)
print(OptFuncRegistry.available())
```

Each optfunc uses the same conventions:

  • input x is a 1-D PyTorch tensor with shape (dim,);
  • output is a scalar tensor;
  • grad, hessian, and hvp use PyTorch autograd unless a subclass provides a better implementation;
  • global_minimizer() returns a known theoretical minimizer when available;
  • distance_to_optimum(x) defaults to Euclidean distance to global_minimizer();
  • project_to_bounds(x) clamps a point into the optfunc's documented search box.
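The autograd convention can be sketched without optfunc at all: for a hand-written sphere objective, torch.autograd reproduces the analytic gradient 2x and the Hessian-vector product 2v. This is a minimal illustration of the technique, not the package's internals:

```python
import torch

def sphere(x):
    # Scalar objective following the package conventions: 1-D input, scalar output.
    return (x * x).sum()

x = torch.arange(4, dtype=torch.float64, requires_grad=True)

# Gradient via autograd; create_graph=True keeps the graph for the HVP below.
(grad,) = torch.autograd.grad(sphere(x), x, create_graph=True)

# Hessian-vector product: differentiate grad . v with respect to x.
v = torch.ones_like(x)
(hvp,) = torch.autograd.grad((grad * v).sum(), x)

print(grad)  # equals 2 * x
print(hvp)   # equals 2 * v
```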

Pytest Optimizer Evaluation

The optimizer evaluation API lives in optfunc.testing. A standard test file has three parts:

  1. Define or import an optimizer function.
  2. Configure one or more OptimizerCase objects.
  3. Assign make_optimizer_tests(...) to a pytest-visible name such as test_my_optimizer.

Run the file with:

```shell
uv run pytest tests/test_my_optimizer.py -q --tb=short --optfunc-report
```

Each OptimizerCase becomes an independent pytest item. If one case fails or raises an exception, pytest continues running the remaining cases. --optfunc-report prints a final serial summary with function gap, distance to the theoretical optimum, gradient norm, Hessian information, step count, and error messages. Do not combine --optfunc-report with pytest-xdist -n in v1.

OptimizerBudget

OptimizerBudget describes the budget passed to the optimizer.

```python
from optfunc.testing import OptimizerBudget

budget = OptimizerBudget(max_steps=500, lr=0.05)
```

Fields:

  • max_steps: positive integer iteration limit.
  • lr: positive learning-rate-like scalar. The test harness does not enforce how the optimizer uses it; your optimizer reads it from problem.budget.lr.

For torch.optim.Adam, a typical use is:

```python
optimizer = torch.optim.Adam([x], lr=problem.budget.lr)
for step in range(problem.budget.max_steps):
    ...
```

ConvergenceTolerances

ConvergenceTolerances decides whether a finished optimizer run passes.

```python
from optfunc.testing import ConvergenceTolerances

tolerances = ConvergenceTolerances(
    value_gap=1e-8,
    x_distance=1e-4,
    grad_norm=1e-4,
    hessian_min_eig=0.0,
)
```

Fields:

  • value_gap: maximum allowed absolute gap between final value and the known theoretical optimum value. Set to None to skip this check.
  • x_distance: maximum allowed distance from final x to the known theoretical minimizer. Set to None to skip this check.
  • grad_norm: maximum allowed Euclidean norm of the final gradient. Set to None to skip this check.
  • hessian_min_eig: optional lower bound on the final Hessian's smallest eigenvalue. Set to None to skip this check.

The report still computes available metrics even when a tolerance is None; None only disables that pass/fail check.
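The None semantics can be illustrated with a small standalone checker. This is a hypothetical sketch of the pass/fail logic described above, not the harness's actual code, and the `passes` helper name is invented for illustration:

```python
# Hypothetical sketch of the described pass/fail logic; the real harness may differ.

def passes(metrics, tolerances):
    """Return True when every non-None tolerance is satisfied."""
    checks = [
        ("value_gap", lambda m, t: m <= t),
        ("x_distance", lambda m, t: m <= t),
        ("grad_norm", lambda m, t: m <= t),
        ("hessian_min_eig", lambda m, t: m >= t),  # lower bound, not upper
    ]
    for name, ok in checks:
        tol = tolerances.get(name)
        if tol is None:
            continue  # None disables the check, but the metric is still reported
        if not ok(metrics[name], tol):
            return False
    return True

metrics = {"value_gap": 1e-9, "x_distance": 2e-5, "grad_norm": 5e-5, "hessian_min_eig": 0.1}

# x_distance check disabled; all remaining checks pass.
print(passes(metrics, {"value_gap": 1e-8, "x_distance": None,
                       "grad_norm": 1e-4, "hessian_min_eig": 0.0}))   # True

# Tighter value_gap fails even though the other checks are disabled.
print(passes(metrics, {"value_gap": 1e-10, "x_distance": None,
                       "grad_norm": None, "hessian_min_eig": None}))  # False
```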

OptimizerCase

OptimizerCase describes one pytest item.

```python
from optfunc.testing import OptimizerCase
from optfunc.testing import OptimizerBudget, ConvergenceTolerances

case = OptimizerCase(
    opt_func="sphere",
    dim=8,
    budget=OptimizerBudget(max_steps=350, lr=0.05),
    tolerances=ConvergenceTolerances(value_gap=1e-8, x_distance=1e-4, grad_norm=1e-4),
    start="near_minimizer",
    start_radius=0.5,
    seed=0,
    hessian_max_dim=32,
)
```

Important fields:

  • opt_func: registry name such as "sphere" or an already-created TorchOptFunction instance.
  • dim: required when opt_func is a string.
  • budget: OptimizerBudget passed to the optimizer through problem.budget.
  • tolerances: ConvergenceTolerances used after the optimizer returns.
  • case_id: optional pytest id; by default this is like sphere[8].
  • x0: optional explicit initial point. If omitted, the harness builds one from start.
  • start: "near_minimizer", "random", or "zeros".
  • start_radius: offset size used by "near_minimizer".
  • seed: random seed used by "random".
  • device: optional torch device string, for example "cuda" or "cpu".
  • dtype: torch dtype, default torch.float64.
  • hessian_max_dim: largest dimension for dense Hessian report metrics. Larger cases skip dense Hessian metrics to avoid slow tests.
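One plausible way to build a "near_minimizer" start point is to offset the known minimizer by a seeded random direction scaled to start_radius. This is an assumption about the behavior for illustration only; the harness's exact construction may differ, and `near_minimizer_start` is an invented helper name:

```python
import math
import random

def near_minimizer_start(minimizer, start_radius, seed):
    """Hypothetical sketch: minimizer plus a random offset of length start_radius."""
    rng = random.Random(seed)
    direction = [rng.gauss(0.0, 1.0) for _ in minimizer]
    norm = math.sqrt(sum(d * d for d in direction))
    return [m + start_radius * d / norm for m, d in zip(minimizer, direction)]

x0 = near_minimizer_start([0.0] * 8, start_radius=0.5, seed=0)
offset = math.sqrt(sum(v * v for v in x0))
print(round(offset, 6))  # 0.5
```

Seeding with the case's seed keeps the start point reproducible across pytest runs.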

make_optimizer_tests

make_optimizer_tests converts an optimizer plus cases into a pytest test function.

```python
from optfunc.testing import make_optimizer_tests

test_my_optimizer = make_optimizer_tests(
    optimizer=my_optimizer,
    cases=[case1, case2],
    name="test_my_optimizer",
)
```

Rules:

  • Assign the returned function to a module-level variable whose name starts with test_, otherwise pytest will not collect it.
  • optimizer must accept one OptimizationProblem argument.
  • The optimizer may return either a final torch.Tensor or an OptimizerResult.
  • Every case becomes an independent parametrized pytest item.

Standard Adam Test Example

This example shows the recommended shape for a user-owned optimizer wrapper. The test harness does not hide torch.optim.Adam; the user function decides how to initialize Adam, how to use the budget, and what history to expose.

Create tests/test_torch_adam.py:

```python
import torch

from optfunc.testing import (
    ConvergenceTolerances,
    OptimizerBudget,
    OptimizerCase,
    OptimizerResult,
    make_optimizer_tests,
    optimizer_adapter,
)


def scalar_item(value):
    return float(value.detach().cpu().item())


@optimizer_adapter(name="torch_adam")
def torch_adam(problem):
    x = problem.x0.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=problem.budget.lr)
    history = []
    for step in range(1, problem.budget.max_steps + 1):
        optimizer.zero_grad(set_to_none=True)
        loss = problem.value(x)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.copy_(problem.project_to_bounds(x.detach()))
        if step % 25 == 0 or step == problem.budget.max_steps:
            x_now = x.detach()
            grad = problem.grad(x_now)
            history.append(
                {
                    "step": step,
                    "value": scalar_item(problem.value(x_now)),
                    "grad_norm": scalar_item(torch.linalg.vector_norm(grad)),
                    "x_norm": scalar_item(torch.linalg.vector_norm(x_now)),
                }
            )
    return OptimizerResult(
        final_x=x.detach().clone(),
        steps=problem.budget.max_steps,
        history=history,
    )


test_torch_adam = make_optimizer_tests(
    optimizer=torch_adam,
    cases=[
        OptimizerCase(
            opt_func="sphere",
            dim=8,
            budget=OptimizerBudget(max_steps=350, lr=0.05),
            tolerances=ConvergenceTolerances(
                value_gap=1e-8,
                x_distance=1e-4,
                grad_norm=1e-4,
            ),
            start="near_minimizer",
            start_radius=0.5,
        ),
        OptimizerCase(
            opt_func="rosenbrock",
            dim=4,
            budget=OptimizerBudget(max_steps=1200, lr=0.02),
            tolerances=ConvergenceTolerances(
                value_gap=5e-4,
                x_distance=5e-2,
                grad_norm=1e-2,
            ),
            start="near_minimizer",
            start_radius=0.25,
        ),
    ],
)
```

Run:

```shell
uv run pytest tests/test_torch_adam.py -q --tb=short --optfunc-report
```

The OptimizationProblem object passed into torch_adam exposes:

  • problem.opt_func: the selected TorchOptFunction;
  • problem.x0: the initial point for this case;
  • problem.budget: the OptimizerBudget;
  • problem.value(x): scalar objective value;
  • problem.grad(x): gradient;
  • problem.value_and_grad(x): objective and gradient;
  • problem.hessian(x): dense Hessian;
  • problem.hvp(x, v): Hessian-vector product;
  • problem.project_to_bounds(x): clamp into the optfunc's bounds.

For second-order methods, call problem.hessian(x) or problem.hvp(x, v) inside the same optimizer wrapper and return the same OptimizerResult shape.
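As a sketch of the second-order idea, a Newton step solves H p = -g and lands on the minimizer of a quadratic in a single step. The example below is standalone and uses torch.autograd directly rather than the problem API, so everything in it is illustrative:

```python
import torch

def quadratic(x):
    # Strictly convex quadratic with minimizer at (1, 2).
    target = torch.tensor([1.0, 2.0], dtype=torch.float64)
    return ((x - target) ** 2).sum()

x = torch.zeros(2, dtype=torch.float64)

# One Newton step: solve H p = -g, then x <- x + p.
g = torch.autograd.functional.jacobian(quadratic, x)
H = torch.autograd.functional.hessian(quadratic, x)
p = torch.linalg.solve(H, -g)
x = x + p

print(x)  # lands on the minimizer (1, 2) in a single step
```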

Built-In Adam Helper

For smoke tests or examples, optfunc.testing.make_torch_adam() provides the same Adam loop as a convenience:

```python
from optfunc.testing import make_torch_adam, make_optimizer_tests

test_adam = make_optimizer_tests(
    optimizer=make_torch_adam(),
    cases=[...],
)
```

For production optimizer tests, prefer writing the small wrapper yourself so the learning rate, projection, stopping rule, and history format are explicit in your test file.

Local Development

```shell
uv sync --extra torch-cpu
uv run pytest -q --optfunc-report
```

On this repository's CUDA development path:

```shell
uv sync --extra torch-cu130
uv run pytest -q --optfunc-report
```

Release Flow

  1. Update the package version in pyproject.toml, or run uv version <version> --frozen.
  2. Create and push a Git tag named v<version>.
  3. CNB will publish that tag to PyPI through the tag_push pipeline.

```shell
uv version X.Y.Z --frozen
rm -rf dist build src/*.egg-info
uv lock
git tag -a vX.Y.Z -m "Release vX.Y.Z"
git push origin vX.Y.Z
```

Optfunc definitions are adapted from SFU's optimization benchmark collection:

https://www.sfu.ca/~ssurjano/optimization.html
