Utilities#

Paths#

class quantflow.utils.paths.Paths(*, t: float, data: ndarray[Any, dtype[float64]])#

Paths of a stochastic process

cross_section(t: float | None = None) ndarray[Any, dtype[float64]]#

Cross section of paths at time t

property df: DataFrame#

Paths as pandas DataFrame

integrate() Paths#

Integrate paths
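As a hedged illustration (it is assumed here that integrate() performs a cumulative integration of each path over the time grid), Gaussian draws generated with normal_draws() (documented below) can be turned into Brownian-motion-style paths:

```python
from quantflow.utils.paths import Paths

# assumed semantics: integrate() cumulatively integrates each path over time
increments = Paths.normal_draws(paths=1_000, time_horizon=1.0, time_steps=500)
wiener_like = increments.integrate()
```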

mean() ndarray[Any, dtype[float64]]#

Mean of paths

model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#

Configuration for the model, should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'data': FieldInfo(annotation=ndarray[Any, dtype[float64]], required=True, description='paths'), 't': FieldInfo(annotation=float, required=True, description='time horizon')}#

Metadata about the fields defined on the model, mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

classmethod normal_draws(paths: int, time_horizon: float = 1, time_steps: int = 1000, antithetic_variates: bool = True) Paths#

Generate normal draws

Parameters:
  • paths – number of paths

  • time_horizon – time horizon

  • time_steps – number of time steps to arrive at horizon

  • antithetic_variates – whether to use antithetic variates
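A minimal usage sketch based on the signature above; the comments restate the documented properties, while the behavior of defaults is assumed:

```python
from quantflow.utils.paths import Paths

# 5,000 Gaussian paths over a one-year horizon, discretized into 1,000 steps;
# antithetic variates are enabled by default to reduce Monte Carlo variance
draws = Paths.normal_draws(paths=5_000, time_horizon=1.0, time_steps=1_000)

print(draws.samples)      # number of simulated paths
print(draws.time_steps)   # number of time steps
print(draws.mean())       # mean of the paths, as a numpy array
print(draws.df.head())    # the same data as a pandas DataFrame
```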

pdf(t: float | None = None, num_bins: int | None = None, delta: float | None = None, symmetric: float | None = None) DataFrame#

Probability density function of paths

Calculate a DataFrame with the probability density function of the paths at a given cross section in time. By default it takes the last cross section.

Parameters:
  • t – time at which to calculate the pdf

  • num_bins – number of bins

  • delta – optional size of bins (cannot be set with num_bins)

  • symmetric – optional center of bins
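A short sketch of the binning options, reusing the draws object from the normal_draws() example above (bin counts and widths are illustrative):

```python
# density of the last cross section, using 30 bins
density = draws.pdf(num_bins=30)

# alternatively, fix the bin width instead of the bin count
density_fixed_width = draws.pdf(t=1.0, delta=0.1)

print(density.head())
```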

plot(**kwargs: Any) Any#

Plot paths

Requires plotly to be installed.

property samples: int#

Number of samples

std() ndarray[Any, dtype[float64]]#

Standard deviation of paths

property time: ndarray[Any, dtype[float64]]#

Time as numpy array

property time_steps: int#

Number of time steps

var() ndarray[Any, dtype[float64]]#

Variance of paths

property xs: list[ndarray]#

Time as a list of arrays (for visualization tools)

property ys: list[list[float]]#

Paths as a list of lists (for visualization tools)
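The xs and ys properties are meant to feed visualization tools directly. A sketch with matplotlib (not a dependency of this module; it is assumed here that xs and ys are parallel, with one entry per path):

```python
import matplotlib.pyplot as plt

# one line per simulated path
for x, y in zip(draws.xs, draws.ys):
    plt.plot(x, y, lw=0.5, alpha=0.3)
plt.xlabel("time")
plt.ylabel("value")
plt.show()
```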

Marginal1D#

class quantflow.utils.marginal.Marginal1D#

Marginal distribution
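Marginal1D is abstract: a concrete distribution must implement characteristic() and support(), both documented below. A minimal, hypothetical Gaussian marginal (not part of the library) could look like this:

```python
import numpy as np
from quantflow.utils.marginal import Marginal1D


class GaussianMarginal(Marginal1D):
    """Hypothetical marginal of a normal random variable N(mu, sigma^2)."""

    mu: float = 0.0
    sigma: float = 1.0

    def characteristic(self, u):
        # characteristic function of a normal: exp(i*u*mu - sigma^2*u^2/2)
        return np.exp(1j * u * self.mu - 0.5 * self.sigma**2 * u * u)

    def support(self, points: int = 100, *, std_mult: float = 3) -> np.ndarray:
        # evenly spaced grid spanning std_mult standard deviations around the mean
        return np.linspace(
            self.mu - std_mult * self.sigma,
            self.mu + std_mult * self.sigma,
            points,
        )
```

With the two abstract methods in place, the Fourier-based utilities below (pdf_from_characteristic(), mean_from_characteristic(), and so on) become available without further work.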

call_option_transform(u: int | float | complex | ndarray | Series) int | float | complex | ndarray | Series#

Call option transform

cdf(x: ndarray[Any, dtype[float64]] | float) ndarray[Any, dtype[float64]] | float#

Compute the cumulative distribution function

Parameters:

x – Location in the stochastic process domain space. If a numpy array, the output has the same shape as the input.

cdf_jacobian(x: ndarray[Any, dtype[float64]] | float) ndarray#

Jacobian of the cdf with respect to the parameters of the process. Useful for optimization purposes.

Optional to implement; if not implemented, calling it raises NotImplementedError.

abstract characteristic(u: int | float | complex | ndarray | Series) int | float | complex | ndarray | Series#

Compute the characteristic function at the frequency points u.

characteristic_df(n: int | None = None, *, max_frequency: float | None = None, simpson_rule: bool = False) DataFrame#

Compute the characteristic function on a grid of n discretization points, up to a maximum frequency, and return it as a DataFrame.

domain_range() Bounds#

The space domain range for the random variable

This should be overloaded if required

frequency_range(max_frequency: float | None = None) float#

The frequency domain range for the characteristic function

This should be overloaded if required

mean() ndarray[Any, dtype[float64]] | float#

Expected value

This should be overloaded if a more efficient way of computing the mean is available.

mean_from_characteristic() ndarray[Any, dtype[float64]] | float#

Calculate mean as first derivative of characteristic function at 0

model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#

Configuration for the model, should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {}#

Metadata about the fields defined on the model, mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

option_alpha() float#

Option alpha

option_support(points: int = 101, max_moneyness: float = 1.0) ndarray[Any, dtype[float64]]#

Compute the x axis used for option calculations, bounded by max_moneyness.

option_time_value(n: int = 128, *, max_frequency: float | None = None, max_moneyness: float = 1, alpha: float = 1.1, simpson_rule: bool = False, use_fft: bool = False) TransformResult#

Option time value

option_time_value_transform(u: int | float | complex | ndarray | Series, alpha: float = 1.1) int | float | complex | ndarray | Series#

Option time value transform

This transform does not require any additional correction since the integrand is already bounded for positive and negative moneyness

pdf(x: ndarray[Any, dtype[float64]] | float) ndarray[Any, dtype[float64]] | float#

Computes the probability density (or mass) function of the process.

It has a base implementation that computes the pdf from the cdf method, but a subclass should overload this method if a more optimized way of computing it is available.

Parameters:

x – Location in the stochastic process domain space. If a numpy array, the output has the same shape as the input.

pdf_from_characteristic(n: int | None = None, *, max_frequency: float | None = None, simpson_rule: bool = False, use_fft: bool = False, frequency_n: int | None = None) TransformResult#

Compute the probability density function from the characteristic function.

Parameters:
  • n – Number of discretization points to use in the transform. If None, use 128.

  • max_frequency – The maximum frequency to use in the transform. If not provided, the value from the frequency_range() method is used. Only needed for special cases/testing.

  • simpson_rule – Use Simpson’s rule for integration. Default is False.

  • use_fft – Use FFT for the transform. Default is False.
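A usage sketch with the hypothetical GaussianMarginal defined earlier (the attributes of the returned TransformResult are not spelled out here):

```python
m = GaussianMarginal(mu=0.1, sigma=0.2)

# density recovered from the characteristic function via FFT
result = m.pdf_from_characteristic(n=256, use_fft=True)

# moments computed directly from the characteristic function
print(m.mean_from_characteristic())  # should be close to 0.1
print(m.std_from_characteristic())   # should be close to 0.2
```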

pdf_jacobian(x: ndarray[Any, dtype[float64]] | float) ndarray[Any, dtype[float64]] | float#

Jacobian of the pdf with respect to the parameters of the process. It has a base implementation that computes it from the cdf_jacobian method, but a subclass should overload this method if a more optimized way of computing it is available.

std() ndarray[Any, dtype[float64]] | float#

Standard deviation at a time horizon

std_from_characteristic() ndarray[Any, dtype[float64]] | float#

Calculate standard deviation as square root of variance

abstract support(points: int = 100, *, std_mult: float = 3) ndarray[Any, dtype[float64]]#

Compute the x axis (the support of the distribution) as a grid of points, with its extent controlled by std_mult.

variance() ndarray[Any, dtype[float64]] | float#

Variance

This should be overloaded if a more efficient way of computing the variance is available.

variance_from_characteristic() ndarray[Any, dtype[float64]] | float#

Calculate variance as second derivative of characteristic function at 0
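With the same notation as for the mean, the identity is

$$\mathrm{Var}[X] = \varphi'(0)^2 - \varphi''(0).$$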