random – Random number functionality#
The pytensor.tensor.random module provides random-number drawing functionality
that closely resembles the numpy.random module.
High-level API#
PyTensor assigns NumPy RNG states (i.e. Generator objects) to
each RandomVariable. The combination of an RNG state, a specific
RandomVariable type (e.g. NormalRV), and a set of distribution parameters
uniquely defines the RandomVariable instances in a graph.
This means that a “stream” of distinct RNG states is required in order to
produce distinct random variables of the same kind. RandomStream provides a
means of generating distinct random variables in a fully reproducible way.
RandomStream is also designed to produce simpler graphs and work with more
sophisticated Ops like Scan, which makes it a user-friendly random variable
interface in PyTensor.
For an example of how to use random numbers, see Using Random Numbers. For a technical explanation of how PyTensor implements random variables see Pseudo random number generation in PyTensor.
- class pytensor.tensor.random.RandomStream[source]#
This is a symbolic stand-in for numpy.random.Generator.
- updates()[source]#
- Returns:
a list of all the (state, new_state) update pairs for the random variables created by this object
This can be a convenient shortcut for enumerating all the random variables in a large graph when constructing the updates argument to pytensor.function.
- seed(meta_seed)[source]#
meta_seed will be used to seed a temporary random number generator, which will in turn generate seeds for all random variables created by this object (via gen).
- Returns:
None
- gen(op, *args, **kwargs)[source]#
Return the random variable from op(*args, **kwargs).
This function also adds the returned variable to an internal list so that it can be seeded later by a call to seed.
- uniform, normal, binomial, multinomial, random_integers, ...
See Available distributions.
import pytensor
from pytensor.tensor.random.utils import RandomStream

rng = RandomStream()
sample = rng.normal(0, 1, size=(2, 2))
fn = pytensor.function([], sample)
print(fn(), fn())  # different numbers due to default updates
Low-level objects#
- class pytensor.tensor.random.op.RandomVariable(name=None, ndim_supp=None, ndims_params=None, dtype=None, inplace=None, signature=None)[source]#
An Op that produces a sample from a random variable.
This is essentially RandomFunction, except that it removes the outtype dependency and handles shape dimension information more directly.
- R_op(inputs, eval_points)[source]#
Construct a graph for the R-operator.
This method is primarily used by Rop.
- Parameters:
inputs – The Op inputs.
eval_points – A Variable or list of Variables with the same length as inputs. Each element of eval_points specifies the value of the corresponding input at the point where the R-operator is to be evaluated.
- Return type:
rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points).
- default_output = 1[source]#
An int that specifies which output Op.__call__() should return. If None, then all outputs are returned.
A subclass should not change this class variable, but instead override it with a subclass variable or an instance variable.
- grad(inputs, outputs)[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Using the reverse-mode AD characterization given in [1], for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
- Parameters:
inputs – The input variables.
output_grads – The gradients of the output variables.
- Returns:
The gradients with respect to each Variable in inputs.
- Return type:
grads
References
- make_node(rng, size, *dist_params)[source]#
Create a random variable node.
- Parameters:
rng (RandomGeneratorType) – Existing PyTensor Generator object to be used. Creates a new one, if None.
size (int or Sequence) – NumPy-like size parameter.
dtype (str) – The dtype of the sampled output. If the value "floatX" is given, then dtype is set to pytensor.config.floatX. This value is only used when self.dtype isn't set.
dist_params (list) – Distribution parameters.
- Returns:
out (Apply) – A node with inputs (rng, size, dtype) + dist_params and outputs (rng_var, out_var).
- perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
- Parameters:
node – The symbolic Apply node that represents this computation.
inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op's perform(); they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
- class pytensor.tensor.random.type.RandomGeneratorType[source]#
A Type wrapper for numpy.random.Generator.
The reason this exists (and Generic doesn't suffice) is that Generator objects that would appear to be equal do not compare equal with the == operator.
This Type also works with a dict derived from Generator.__get_state__, unless the strict argument to Type.filter is explicitly set to True.
- filter(data, strict=False, allow_downcast=None)[source]#
XXX: This doesn't convert data to the same type of underlying RNG type as self. It really only checks that data is of the appropriate type to be a valid RandomGeneratorType.
In other words, it serves as a Type.is_valid_value implementation, but, because the default Type.is_valid_value depends on Type.filter, we need to have it here to avoid surprising circular dependencies in sub-classes.
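The comparison issue that motivates RandomGeneratorType can be seen with NumPy alone (a small illustration):

```python
import numpy as np

g1 = np.random.default_rng(42)
g2 = np.random.default_rng(42)

# Identically seeded Generators hold the same state and produce the
# same draws, yet `==` falls back to object identity and returns False.
print(g1 == g2)
print(g1.integers(0, 10, size=3))
print(g2.integers(0, 10, size=3))
```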
- class pytensor.tensor.random.type.RandomType[source]#
A Type wrapper for numpy.random.Generator and numpy.random.RandomState.