Ptensors
A $k$'th order permutationally covariant tensor, or Ptensor for short, with
reference domain $(a_1,\ldots,a_n)$ is a $(k{+}1)$'th order tensor
$A\in\mathbb{R}^{n\times\cdots\times n\times c}$, where $c$ is the number
of channels. The elements of the reference domain are called atoms.
The defining property of Ptensors is that if $a_1,\ldots,a_n$ are permuted
by a permutation $\sigma$, then $A$ transforms to a Ptensor $A'$ with

$$A'_{\sigma(i_1),\ldots,\sigma(i_k),\,j}=A_{i_1,\ldots,i_k,\,j}\qquad\text{for all } i_1,\ldots,i_k \text{ and channels } j.$$
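To make the transformation rule concrete for the first order case, the following is a minimal sketch using plain torch tensors (not the ptens classes themselves): permuting the atoms of a first order Ptensor simply permutes its rows.

>> import torch
>> A=torch.randn(3,5)              # a first order Ptensor over 3 atoms with 5 channels, stored as a plain tensor
>> sigma=torch.tensor([2,0,1])     # a permutation of the atoms, i -> sigma[i]
>> Ap=torch.empty_like(A)
>> Ap[sigma]=A                     # A'_{sigma(i),j} = A_{i,j}: row i of A becomes row sigma[i] of Ap
>> torch.equal(Ap[2],A[0])
True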
Currently ptens supports zeroth, first and second order Ptensors. The corresponding classes are
ptensor0, ptensor1 and ptensor2. Each of these classes is derived from
torch.Tensor, allowing all the usual PyTorch arithmetic operations to be applied to Ptensors.
Note, however, that some of these operations might break equivariance. For example, changing
just one slice or one element of a Ptensor is generally not an equivariant
operation.
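For instance, adding two Ptensors with the same reference domain or scaling a Ptensor by a constant treats all atoms identically and therefore commutes with permutations, whereas overwriting the slice belonging to a single atom does not. A rough sketch, using the randn constructor described in the next section:

>> A=ptens.ptensor1.randn([1,2,3],5)
>> B=ptens.ptensor1.randn([1,2,3],5)
>> C=A+B       # applied uniformly to every atom: equivariant
>> D=2*A       # scalar multiplication: equivariant
>> A[0,:]=0    # overwrites the slice of a single atom: not equivariant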
Creating Ptensors
Ptensors can be created with the familiar zeros or randn constructors. For example,
>> A=ptens.ptensor0.randn([2],5)
creates a zeroth order Ptensor with reference domain [2] and 5 channels.
Printing out the Ptensor prints both its contents and its reference domain:
>> print(A)
Ptensor0 [2]:
[ -1.97856 -1.72226 -0.0215097 -2.61169 1.3889 ]
For higher order Ptensors, the sizes of the first dimensions are inferred from the
size of the reference domain. For example, the following creates a first order Ptensor over 3 atoms:
>> B=ptens.ptensor1.randn([1,2,3],5)
>> print(B)
Ptensor1 [1,2,3]:
[ 0.0515154 -0.0194946 -1.39105 -1.38258 0.658819 ]
[ 0.85989 0.278101 0.890897 -0.000561227 1.54719 ]
[ 1.22424 -0.099083 -0.849395 -0.396878 -0.119167 ]
Similarly, the following creates and prints out a second order Ptensor over the reference domain [1,2,3]:
>> C=ptens.ptensor2.randn([1,2,3],5)
>> print(C)
Ptensor2 [1,2,3]:
channel 0:
[ 0.619967 0.703344 0.161594 ]
[ -1.07889 1.21051 0.247078 ]
[ 0.0626437 -1.48677 -0.117047 ]
channel 1:
[ -0.809459 0.768829 0.80504 ]
[ 0.69907 -0.824901 0.885139 ]
[ 1.45072 -2.47353 -1.03353 ]
channel 2:
[ -0.481529 -0.240306 2.9001 ]
[ 1.07718 -0.507446 1.1044 ]
[ 1.5038 -1.10569 0.210451 ]
channel 3:
[ -0.172885 0.117831 -0.62321 ]
[ 0.201925 -0.486807 0.0418346 ]
[ 0.041158 1.72335 -0.199498 ]
channel 4:
[ 0.375979 3.05989 1.30477 ]
[ -1.76276 -0.139075 -0.349366 ]
[ -0.0366747 -0.563576 0.233288 ]
For debugging purposes ptens also provides a sequential
initializer, e.g.:
>> A=ptens.ptensor1.sequential([1,2,3],5)
>> print(A)
Ptensor1 [1,2,3]:
[ 0 1 2 3 4 ]
[ 5 6 7 8 9 ]
[ 10 11 12 13 14 ]
By default, Ptensors are placed on the host (CPU). To create a Ptensor on the
GPU instead, one can add a device argument, just as in PyTorch:
>> A=ptens.ptensor1.sequential([1,2,3],5,device='cuda')
Further, Ptensors can be moved back and forth between the CPU and the GPU using the to
method:
>> B=A.to('cpu')
In general, if the inputs of a given operation are on the GPU, the operation will be performed on the GPU, and the result will also be placed on the GPU. Currently, ptens only supports the use of a single GPU.
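For example, the following minimal sketch (using the same constructors as above) keeps both the computation and its result on the GPU:

>> A=ptens.ptensor1.randn([1,2,3],5,device='cuda')
>> B=ptens.ptensor1.randn([1,2,3],5,device='cuda')
>> C=A+B          # both inputs are on the GPU, so the sum is computed and stored there
>> C=C.to('cpu')  # move the result back to the host if needed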