#include <itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h>
This class implements a gradient descent optimizer with adaptive gain.
If $C(x)$ is a cost function that has to be minimized, the following iterative algorithm is used to find the optimal parameters $x$:

$$ x_{k+1} = x_k - a(t_k) \, P \, \frac{dC}{dx} $$

The gain $a(t_k)$ at each iteration $k$ is defined by:

$$ a(t_k) = \frac{a}{(A + t_k + 1)^{\alpha}} $$

The time $t_k$ is updated according to:

$$ t_{k+1} = \max\bigl[\, 0, \; t_k + \mathrm{sigmoid}\bigl( -g_k^{T} P \, g_{k-1} \bigr) \,\bigr] $$

where $g_k$ equals $dC/dx$ at iteration $k$. For $t_0$ the InitialTime is used, which is defined in the superclass (StochasticPreconditionedGradientDescentOptimizer). Whereas in the superclass this parameter is superfluous, in this class it makes sense.
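The recursion above can be summarized in a short self-contained C++ sketch. This is an illustration only, not the ITK implementation: the function name AdaptiveDescentSketch, the caller-supplied gradient and applyP callbacks, the gain constants a, A, and alpha, and the logistic form of the sigmoid are all assumptions made for demonstration.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <functional>
    #include <vector>

    // Sketch of x(k+1) = x(k) - a(t_k) P dC/dx with the adaptive time update
    // t(k+1) = max[ 0, t(k) + sigmoid(-g_k^T P g_{k-1}) ]. Not the ITK class;
    // the gain constants and the logistic sigmoid are illustrative choices.
    std::vector<double> AdaptiveDescentSketch(
      const std::function<std::vector<double>(const std::vector<double> &)> & gradient, // dC/dx
      const std::function<std::vector<double>(const std::vector<double> &)> & applyP,   // v -> P v
      std::vector<double> x, unsigned int numberOfIterations)
    {
      const double a = 1.0, A = 20.0, alpha = 1.0;               // gain constants (assumed)
      const double sigMax = 1.0, sigMin = -0.8, sigScale = 1e-8; // documented defaults
      double t = 0.0;                                            // InitialTime
      std::vector<double> previousSearchDirection;               // P g_{k-1}

      for (unsigned int k = 0; k < numberOfIterations; ++k)
      {
        const std::vector<double> g = gradient(x); // g_k
        const std::vector<double> d = applyP(g);   // search direction P g_k

        if (!previousSearchDirection.empty())
        {
          double inner = 0.0; // g_k^T P g_{k-1}
          for (std::size_t i = 0; i < g.size(); ++i)
            inner += g[i] * previousSearchDirection[i];
          // Scaled logistic sigmoid in [sigMin, sigMax], evaluated at -inner.
          const double s = sigMin + (sigMax - sigMin) / (1.0 + std::exp(inner / sigScale));
          t = std::max(0.0, t + s);
        }

        const double gain = a / std::pow(A + t + 1.0, alpha); // a(t_k)
        for (std::size_t i = 0; i < x.size(); ++i)
          x[i] -= gain * d[i];
        previousSearchDirection = d;
      }
      return x;
    }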
This method is described in the following references:
[1] Y. Qiao, B.P.F. Lelieveldt, and M. Staring, "An efficient preconditioner for stochastic gradient descent optimization of image registration," IEEE Transactions on Medical Imaging, 2019. https://doi.org/10.1109/TMI.2019.2897943
[2] P. Cruz, "Almost sure convergence and asymptotical normality of a generalization of Kesten's stochastic approximation algorithm for multidimensional case," Technical Report, 2005. http://hdl.handle.net/2052/74
[3] S. Klein, J.P.W. Pluim, M. Staring, and M.A. Viergever, "Adaptive stochastic gradient descent optimisation for image registration," International Journal of Computer Vision, vol. 81, no. 3, pp. 227-239, 2009. http://dx.doi.org/10.1007/s11263-008-0168-y
It is very suitable to be used in combination with a stochastic estimate of the gradient $g$. For example, in image registration problems it is often advantageous to compute the metric derivative ($dC/dx$) on a new set of randomly selected image samples in each iteration. You may set the parameter NewSamplesEveryIteration to "true" to achieve this effect. For more information on this strategy, see reference [3] above and the parameter-file sketch below.
Definition at line 78 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.
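In an elastix parameter file, this corresponds to a single setting. A minimal illustrative fragment follows; the optimizer name is taken from the elastix wrapper elastix::PreconditionedGradientDescent mentioned further down this page, and the fragment is not a complete parameter file:

    // Illustrative elastix parameter file fragment.
    (Optimizer "PreconditionedGradientDescent")
    // Draw a new random subset of image samples in every iteration.
    (NewSamplesEveryIteration "true")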
Static Public Member Functions | |
static Pointer | New () |
Static Public Member Functions inherited from itk::StochasticPreconditionedGradientDescentOptimizer |
static Pointer | New () |
Static Public Member Functions inherited from itk::PreconditionedGradientDescentOptimizer |
static Pointer | New () |
Static Public Member Functions inherited from itk::ScaledSingleValuedNonLinearOptimizer |
static Pointer | New () |
Protected Member Functions | |
AdaptiveStochasticPreconditionedGradientDescentOptimizer () | |
virtual void | UpdateCurrentTime () |
virtual | ~AdaptiveStochasticPreconditionedGradientDescentOptimizer () |
Protected Member Functions inherited from itk::StochasticPreconditionedGradientDescentOptimizer |
virtual double | Compute_a (double k) const |
StochasticPreconditionedGradientDescentOptimizer () | |
virtual void | UpdateCurrentTime () |
virtual | ~StochasticPreconditionedGradientDescentOptimizer () |
Protected Member Functions inherited from itk::PreconditionedGradientDescentOptimizer |
virtual void | CholmodSolve (const DerivativeType &gradient, DerivativeType &searchDirection, int solveType=CHOLMOD_A) |
PreconditionedGradientDescentOptimizer () | |
void | PrintSelf (std::ostream &os, Indent indent) const |
virtual | ~PreconditionedGradientDescentOptimizer () |
Protected Member Functions inherited from itk::ScaledSingleValuedNonLinearOptimizer |
virtual void | GetScaledDerivative (const ParametersType &parameters, DerivativeType &derivative) const |
virtual MeasureType | GetScaledValue (const ParametersType &parameters) const |
virtual void | GetScaledValueAndDerivative (const ParametersType &parameters, MeasureType &value, DerivativeType &derivative) const |
void | PrintSelf (std::ostream &os, Indent indent) const override |
ScaledSingleValuedNonLinearOptimizer () | |
void | SetCurrentPosition (const ParametersType &param) override |
virtual void | SetScaledCurrentPosition (const ParametersType &parameters) |
~ScaledSingleValuedNonLinearOptimizer () override=default | |
Protected Attributes | |
DerivativeType | m_PreviousSearchDirection |
Protected Attributes inherited from itk::StochasticPreconditionedGradientDescentOptimizer |
double | m_CurrentTime { 0.0 } |
Protected Attributes inherited from itk::PreconditionedGradientDescentOptimizer |
cholmod_common * | m_CholmodCommon |
cholmod_factor * | m_CholmodFactor { nullptr } |
cholmod_sparse * | m_CholmodGradient { nullptr } |
double | m_ConditionNumber { 1.0 } |
DerivativeType | m_Gradient |
double | m_LargestEigenValue { 1.0 } |
double | m_LearningRate { 1.0 } |
DerivativeType | m_SearchDirection |
double | m_Sparsity { 1.0 } |
StopConditionType | m_StopCondition { MaximumNumberOfIterations } |
Protected Attributes inherited from itk::ScaledSingleValuedNonLinearOptimizer |
ScaledCostFunctionPointer | m_ScaledCostFunction |
ParametersType | m_ScaledCurrentPosition |
Private Attributes | |
double | m_SigmoidMax { 1.0 } |
double | m_SigmoidMin { -0.8 } |
double | m_SigmoidScale { 1e-8 } |
bool | m_UseAdaptiveStepSizes { true } |
Additional Inherited Members | |
Protected Types inherited from itk::PreconditionedGradientDescentOptimizer |
using | CInt = int |
using itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::ConstPointer = SmartPointer<const Self> |
Definition at line 88 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.
using itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::Pointer = SmartPointer<Self> |
Definition at line 87 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.
using itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::Self = AdaptiveStochasticPreconditionedGradientDescentOptimizer |
Standard ITK.
Definition at line 84 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.
using itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::Superclass = StochasticPreconditionedGradientDescentOptimizer |
Definition at line 85 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.
itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::AdaptiveStochasticPreconditionedGradientDescentOptimizer () | protected |
virtual itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::~AdaptiveStochasticPreconditionedGradientDescentOptimizer () | inline protected virtual |
Definition at line 133 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.
virtual const char * itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::GetNameOfClass () const | virtual |
Run-time type information (and related methods).
Reimplemented from itk::StochasticPreconditionedGradientDescentOptimizer.
Reimplemented in elastix::PreconditionedGradientDescent< TElastix >.
virtual double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::GetSigmoidMax () const | virtual |
virtual double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::GetSigmoidMin () const | virtual |
virtual double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::GetSigmoidScale () const | virtual |
virtual bool itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::GetUseAdaptiveStepSizes () const | virtual |
itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::ITK_DISALLOW_COPY_AND_MOVE ( AdaptiveStochasticPreconditionedGradientDescentOptimizer ) |
static Pointer itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::New () | static |
Method for creation through the object factory.
virtual void itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::SetSigmoidMax (double _arg) | virtual |
Set/Get the maximum of the sigmoid. Should be >0. Default: 1.0
virtual void itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::SetSigmoidMin (double _arg) | virtual |
Set/Get the minimum of the sigmoid. Should be <0. Default: -0.8
virtual void itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::SetSigmoidScale (double _arg) | virtual |
Set/Get the scaling of the sigmoid width. Larger values make the sigmoid wider. Should be >0. Default: 1e-8.
virtual void itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::SetUseAdaptiveStepSizes (bool _arg) | virtual |
Set/Get whether the adaptive step size mechanism is desired. Default: true
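A brief usage sketch of these accessors, using only the Set members documented above; the values shown are the documented defaults, and the wrapping function is illustrative:

    #include "itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h"

    void ConfigureAdaptiveStepSizes()
    {
      using OptimizerType = itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer;
      auto optimizer = OptimizerType::New();
      optimizer->SetUseAdaptiveStepSizes(true); // enable the adaptive mechanism
      optimizer->SetSigmoidMax(1.0);            // should be > 0
      optimizer->SetSigmoidMin(-0.8);           // should be < 0
      optimizer->SetSigmoidScale(1e-8);         // should be > 0
    }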
virtual void itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::UpdateCurrentTime () | protected virtual |
Function to update the current time. If UseAdaptiveStepSizes is false, this function just increments the CurrentTime by $E_0 = (SigmoidMax + SigmoidMin)/2$. Else, the CurrentTime is updated according to:
time = max[ 0, time + sigmoid( -gradient * previoussearchdirection ) ]
In that case, the m_PreviousSearchDirection is also updated.
Reimplemented from itk::StochasticPreconditionedGradientDescentOptimizer.
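Read as code, the update amounts to the following small sketch; the scaled logistic form of the sigmoid is an assumption (not taken from the ITK source), chosen so that its range is [SigmoidMin, SigmoidMax] with width controlled by SigmoidScale:

    #include <algorithm>
    #include <cmath>

    // Sketch of the adaptive time update described above. The logistic form
    // of the sigmoid is an assumed choice, not the ITK implementation.
    double UpdatedTime(double time, double gradientDotPreviousSearchDirection,
                       double sigmoidMin, double sigmoidMax, double sigmoidScale)
    {
      const double x = -gradientDotPreviousSearchDirection;
      const double s = sigmoidMin + (sigmoidMax - sigmoidMin) /
                       (1.0 + std::exp(-x / sigmoidScale));
      return std::max(0.0, time + s); // time = max[ 0, time + sigmoid(...) ]
    }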
DerivativeType itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::m_PreviousSearchDirection | protected |
The previous search direction $P g$, necessary for the Cruz acceleration (reference [2]).
Definition at line 146 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.
double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::m_SigmoidMax { 1.0 } | private |
Definition at line 151 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.
double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::m_SigmoidMin { -0.8 } | private |
Definition at line 152 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.
double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::m_SigmoidScale { 1e-8 } | private |
Definition at line 153 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.
bool itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::m_UseAdaptiveStepSizes { true } | private |
Settings
Definition at line 150 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.