Hi all,
As mentioned last week, a student worked for me during the last few months
on Orekit. His name is Andrea Antolino and his task was to implement
Taylor differential algebra for Orekit propagators.
This work is almost complete by now. We are fixing some loose ends. It will
be committed to the main git repository soon (don't ask me when ...).
Andrea presented it on November 1st during the Stardust conference at ESTEC,
in the Orbit and Uncertainty Propagation track.
The approach we followed was to create generic Field-based
propagators, so users
can do things like:
FieldPropagator<SomeField> propagator = new FieldNumericalPropagator<SomeField>(...);
(or some other supported analytical propagator if desired) and then use it
just like a regular double-based propagator.
In order to use Taylor differential algebra, users just have to use
DerivativeStructure
as the field for propagated elements. It is up to them to decide upon
the differentiation
order and the number of parameters. The most classical use for this is
to select an
order of about 3 and a number of parameters of 6, each parameter
corresponding to one of the components of the initial state. Of course, any
type of orbit
representation can
be used (Cartesian, Keplerian, Circular, Equinoctial). As the field
elements (here
DerivativeStructure instances) are propagated, you have at all times, both
during propagation (in step handlers and event handlers) and at the final
time, the full set of partial derivatives up to the configured order. This
means you
propagate uncertainties
to a higher order than classical state transition matrices (or
covariance matrices) which
are inherently first order only. So if, for example, you choose to
retrieve the output state
as Cartesian coordinates, you still see the uncertainty cloud bend to
follow the orbit
shape, you do not get only an ellipsoid mainly tangent to the orbit.
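As a toy illustration of why higher orders matter (plain Java, not Orekit code; the mapping x = cos(theta) is just a stand-in for one Cartesian coordinate depending on an along-track parameter): the first-order expansion is a straight tangent line, while the second-order term captures the curvature that bends the cloud:

```java
public class TaylorBending {
    public static void main(String[] args) {
        double theta0 = 0.0;  // expansion point
        double dTheta = 0.3;  // offset in the parameter
        double exact  = Math.cos(theta0 + dTheta);

        // first-order (state-transition-matrix-like) expansion
        double order1 = Math.cos(theta0) - Math.sin(theta0) * dTheta;

        // second-order expansion adds the curvature term -cos(theta0)/2 * dTheta^2
        double order2 = order1 - 0.5 * Math.cos(theta0) * dTheta * dTheta;

        System.out.printf("exact=%.6f  order1 error=%.2e  order2 error=%.2e%n",
                          exact, Math.abs(order1 - exact), Math.abs(order2 - exact));
    }
}
```

The first-order error here is about 4.5e-2 while the second-order error is about 3.4e-4: the quadratic term is what makes the cloud follow the curved shape instead of staying on the tangent.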
Another very important
use is Monte-Carlo simulations. You do the same as before, propagate
once, and then on the
resulting DerivativeStructure instances, you can call the taylor(dp1,
dp2, ...) method to
evaluate the Taylor expansion to small offsets in the parameters. The
offsets dp1, dp2 ...
are drawn randomly to correspond to your initial distribution and you
get the corresponding
output without redoing the propagation: you are just evaluating low
degree polynomials.
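Here is the principle in plain Java (a toy stand-in, not a real Orekit propagation: propagate(x) is just x^3 and the derivatives are written by hand, where in real use they would come out of the DerivativeStructure produced by the propagation):

```java
import java.util.Random;

public class TaylorMonteCarlo {
    // stand-in for an expensive propagation: here simply x^3
    static double propagate(double x) {
        return x * x * x;
    }

    public static void main(String[] args) {
        double x0  = 2.0;
        double f   = propagate(x0); // value at the nominal point: 8
        double df  = 3 * x0 * x0;   // first derivative: 12
        double d2f = 6 * x0;        // second derivative: 12

        Random random = new Random(42);
        double maxError = 0;
        for (int i = 0; i < 100000; i++) {
            // draw an offset from the initial distribution
            double dx = 0.01 * random.nextGaussian();
            // cheap degree-2 Taylor evaluation, conceptually what taylor(dx) does
            double taylor = f + df * dx + 0.5 * d2f * dx * dx;
            maxError = Math.max(maxError, Math.abs(taylor - propagate(x0 + dx)));
        }
        System.out.println("max Taylor error over 100000 samples: " + maxError);
    }
}
```

Each sample costs a handful of multiplications instead of a full propagation, which is where the Monte-Carlo speed-up comes from; the error is driven by the first neglected order of the expansion.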
This speeds up Monte-Carlo simulations a lot and you can do
Monte-Carlo analysis with
hundreds of thousands or millions of points if you want. Of course, the
initial propagation
is much slower, especially at high orders, but the overhead is incurred
only once, so the complete cost is easily amortized over any realistic
Monte-Carlo size. As an example,
the overhead multiplicative factor with respect to a traditional
double-based propagation
with derivatives computed with respect to 6 parameters is 68.13 for
degree 2 derivatives
and 116.2 for degree 3, using a numerical propagator and all available
force models. One has
to remember that for a DerivativeStructure with p parameters and order
o, we compute
(o+p)!/(o! p!) components instead of 1 component in double-based
computation. Taking this
combinatorial factor to normalize the overhead, we find that for
parameters and orders
between 1 and 6, the normalized overhead is about 2 to 5, which seems
quite reasonable.
Beware! Do not try to use too large an o or p, as the combinatorial
factor grows quickly!
For example, using 6x6, each double number is replaced by a 924-component
DerivativeStructure,
the un-normalized overhead is more than 3700, and the normalized
overhead is between 4 and 5.
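For reference, the component count (o+p)!/(o! p!) mentioned above can be computed in a few lines of plain Java (not Orekit code), using an incremental binomial product that stays exact in integer arithmetic:

```java
public class DSSize {
    // number of components of a DerivativeStructure with p parameters and
    // order o, i.e. the binomial coefficient (o + p)! / (o! p!)
    static long components(int p, int o) {
        long n = 1;
        for (int i = 1; i <= o; i++) {
            // exact at every step: n * (p + i) is always divisible by i
            n = n * (p + i) / i;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(components(6, 2)); // 28 components: 6 parameters, order 2
        System.out.println(components(6, 3)); // 84 components: 6 parameters, order 3
        System.out.println(components(6, 6)); // 924 components: the 6x6 case above
    }
}
```

Dividing the raw overhead factors by these counts (68.13 / 28, 116.2 / 84) is how the normalized per-component overhead quoted above is obtained.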
The field output is mainly a FieldSpacecraftState, but of course users
can also use derived
parameters (position/velocity/acceleration, in any frame, inertial or
rotating, geodetic
coordinates ...). So you could, for example, propagate an uncertainty in
geodetic coordinates with a rotating Earth to estimate a ground impact zone,
supposing you are
able to reliably propagate down to ground, which of course is not
really easy ...
Andrea already implemented Keplerian, Eckstein-Hechler and numerical
propagators with a bunch
of force models. We may also add SGP4/SDP4; it is quite
straightforward to do.
A side-effect of our approach is that we can use it to implement other
things apart from
Taylor differential algebra (high accuracy, interval arithmetic, ...).
This side effect was in fact used to validate the implementation, using
Decimal64 as the field to get back to a (costly) way of redoing exactly
the same computation as regular double arithmetic.
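The validation idea can be illustrated in plain Java (a toy stand-in for Decimal64, not the real Hipparchus class): wrap a double in a field-like type, run the generic code against it, and check it matches the primitive computation bit for bit:

```java
public class FieldValidation {
    // toy stand-in for Decimal64: a field element backed by a single double
    record Wrapped(double value) {
        Wrapped add(Wrapped o)      { return new Wrapped(value + o.value); }
        Wrapped multiply(Wrapped o) { return new Wrapped(value * o.value); }
        Wrapped sqrt()              { return new Wrapped(Math.sqrt(value)); }
    }

    // generic-style computation (here a 2D norm) written once against the field API
    static Wrapped norm(Wrapped x, Wrapped y) {
        return x.multiply(x).add(y.multiply(y)).sqrt();
    }

    public static void main(String[] args) {
        double x = 3.0, y = 4.0;
        double primitive = Math.sqrt(x * x + y * y);
        double wrapped   = norm(new Wrapped(x), new Wrapped(y)).value();
        // both paths perform the same IEEE-754 operations, so they must agree exactly
        System.out.println(primitive == wrapped);
    }
}
```

Since the wrapper performs exactly the same sequence of IEEE-754 operations as the primitive code, any discrepancy points at a bug in the generic implementation, which is what makes this a cheap-to-write (if costly-to-run) validation tool.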
Now for the ugly part ... In order to do this work, *many* classes had
to be almost duplicated
with a FieldXYZ implementation that mimicked the regular double-based
implementation. Those
of you who followed Apache Commons Math and later Hipparchus development
know what that means, as it is exactly what has been done for the ODE
integrators. By the way, this Taylor differential algebra idea was a main
driver even at that time: the classes in Hipparchus were developed so that
this work in Orekit could be done later on. This approach leads to a
huge amount of code and a
maintenance hell ahead of us. Perhaps we should revive the approach of
automatic conversion in
the spirit of what I created years ago with the Nabla project at
Apache. I don't know if we
should put Andrea's code directly in the main library or split Orekit into
multiple modules, one
being Field classes (including the existing ones added in the last few
years for other purposes).
It is however not as easy as it first looks, because in some cases
instead of new independent
classes, we just added Field methods in existing classes and
interfaces. An existing example in 8.0 is the accelerationDerivatives
method in the ForceModel interface, which
already needs FieldVector3D and FieldRotation (but specifically with
DerivativeStructure as
the field, which by the way was a stupid design decision I made).
So what do you think? Is this feature interesting? Should it be merged
in the main library?
Should we spawn some modules? Can we do that for 9.0?