Tensor¶
Block-sparse tensor with symmetry-aware indices.
Tensor
dataclass
¶
Tensor(
indices: Tuple[Index, ...],
itags: Tuple[str, ...],
data: MutableMapping[BlockKey, Tensor],
intw: Optional[MutableMapping[BlockKey, Bridge]] = None,
dtype: dtype = torch.float64,
label: str = "Tensor",
_sorted_keys: Optional[Tuple[BlockKey, ...]] = None,
)
Block-sparse tensor backed by symmetry-aware indices and dense blocks.
Each Tensor pairs an ordered tuple of Index instances with a mapping from
block keys (one charge per axis) to dense PyTorch tensors. Arithmetic operations
are defined in a way that preserves charge conservation, and helper methods
provide convenient constructors and transformations.
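Each stored block key must satisfy the charge-conservation rule mentioned above. For a U(1)-symmetric tensor this is a signed sum over the key's charges; the following plain-Python sketch illustrates the selection rule (the helper name and the ±1 sign convention are illustrative, not the library's API):

```python
from itertools import product

# Directions: +1 for OUT, -1 for IN (sign convention assumed for illustration).
OUT, IN = +1, -1

def is_admissible(key, directions):
    """U(1) selection rule: sum(OUT charges) - sum(IN charges) == 0."""
    return sum(s * q for s, q in zip(directions, key)) == 0

# Two indices carrying charges {0, 1}, one OUT and one IN.
sectors = [(0, 1), (0, 1)]
directions = [OUT, IN]

admissible = [key for key in product(*sectors) if is_admissible(key, directions)]
# Only charge-matched keys survive: (0, 0) and (1, 1).
```

Only the admissible keys ever appear in `data`; all other blocks are implicitly zero and never stored.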
Attributes:

| Name | Type | Description |
|---|---|---|
| indices | Tuple[Index, ...] | Ordered tuple of Index instances defining the tensor structure. |
| itags | Tuple[str, ...] | Ordered tuple of human-readable labels for each index. |
| data | MutableMapping[BlockKey, Tensor] | Mapping from block keys (one charge per axis) to dense PyTorch tensors. |
| intw | Optional[MutableMapping[BlockKey, Bridge]] | Mapping from block keys to intertwiners delegated to the Yuzuha protocol. |
| dtype | dtype | Data type for the dense blocks. Defaults to double-precision real values. |
| label | str | Human-readable label for the tensor. Defaults to "Tensor". |
| device | device | Device where tensor blocks are stored (CPU or GPU). |
Methods:

| Name | Description |
|---|---|
| zeros | Create a symmetry-aware tensor with admissible zero-filled blocks. |
| random | Create a tensor filled with random values for each admissible block. |
| from_scalar | Create a scalar (0D tensor) with a single value. |
| is_scalar | Check if this tensor is a scalar (0D). |
| item | Extract the scalar value from a 0D tensor. |
| norm | Compute the Frobenius norm aggregated across all dense blocks. |
| clone | Create a deep clone of this tensor with independent block data. |
| rand_fill | In-place: fill all data blocks with random values. |
| insert_index | In-place: insert a trivial index (neutral charge, dimension 1) at a position. |
| normalize_sectors | In-place: remove sectors from each index that do not appear in any block. |
| trim_zero_blocks | In-place: remove blocks whose magnitude is negligible relative to the norm. |
| device | Property returning the device where tensor blocks are stored. |
| to | Move tensor to a specified device (CPU, CUDA, MPS, etc.). |
| cpu | Move tensor to CPU. |
| cuda | Move tensor to a CUDA device. |
| requires_grad | Property for checking/setting gradient tracking. |
| backward | Compute gradients by backpropagating through the computational graph (scalars only). |
| group | Property returning the symmetry group of this tensor. |
| sorted_keys | Property returning block keys in display order (cached). |
| key | Get the BlockKey for the i-th block (1-indexed, matching display). |
| block | Access the i-th block by integer index (1-indexed, matching display). |
| show | Display selected blocks without max_line limits. |
| regularize | In-place: canonicalize or regularize Bridge weights, and compress components. |
| conj | Complex-conjugate every dense block and reverse all index directions. |
| permute | Permute tensor axes according to the provided reordering. |
| transpose | Transpose by reversing all tensor axes. |
| invert | In-place: invert the direction of specified index/indices. |
| retag | Retag indices: update specific tags by name/index, or replace all tags. |
Notes
Standalone functions in nicole.maneuver deep-clone all data blocks for full
isolation. Method forms default to in_place=False and share storage (torch views).
Attributes¶
sorted_keys
property
¶
Return block keys sorted in display order (cached).
requires_grad
property
writable
¶
Check if this tensor tracks gradients.
Returns True if all underlying blocks have requires_grad=True, False otherwise.
Returns:

| Type | Description |
|---|---|
| bool | Whether this tensor tracks gradients. |
Functions¶
zeros
classmethod
¶
zeros(
indices: Sequence[Index],
dtype: dtype = torch.float64,
itags: Optional[Sequence[str]] = None,
device: Optional[Union[str, device]] = None,
requires_grad: bool = False,
) -> Tensor
Create a symmetry-aware tensor with admissible zero-filled blocks.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| indices | Sequence[Index] | Sequence of Index objects defining the tensor structure. | required |
| dtype | dtype | Data type for the tensor blocks. | torch.float64 |
| itags | Sequence[str] | Tags for each index (default: "init" for all). | None |
| device | str or device | Device to place tensors on (default: current default device). | None |
| requires_grad | bool | If True, enables gradient tracking for this tensor. | False |
Notes
Gradient tracking follows PyTorch's default behavior. Set requires_grad=True to enable autograd for this tensor. Use torch.no_grad() context to temporarily disable gradient computation during operations.
MPS (Apple Silicon) doesn't support float64/complex128. If creating on MPS with these dtypes, they will be automatically downgraded to float32/complex64.
For generic symmetry groups (e.g., SU2), intertwiners (intw) are automatically populated with Bridge objects containing default Clebsch-Gordan specifications.
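The constructor's work can be pictured in plain Python: enumerate the admissible keys, then allocate a zero block whose shape follows each axis's sector dimensions. This is a hypothetical sketch for an Abelian (U(1)) tensor with nested lists standing in for `torch.zeros`; the real code works with Index objects and torch tensors:

```python
from itertools import product

# Each axis: ({charge: dimension}, direction sign) with +1 OUT, -1 IN (assumed convention).
axes = [({0: 2, 1: 3}, +1), ({0: 2, 1: 3}, -1)]

def zero_block(shape):
    # Stand-in for torch.zeros(shape): nested zero-filled lists.
    if not shape:
        return 0.0
    return [zero_block(shape[1:]) for _ in range(shape[0])]

data = {}
for key in product(*(sectors for sectors, _ in axes)):
    if sum(sign * q for (_, sign), q in zip(axes, key)) != 0:
        continue  # key violates charge conservation; block is never stored
    shape = tuple(sectors[q] for (sectors, _), q in zip(axes, key))
    data[key] = zero_block(shape)

# data holds only admissible keys: a 2x2 block at (0, 0) and a 3x3 block at (1, 1).
```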
random
classmethod
¶
random(
indices: Sequence[Index],
dtype: dtype = torch.float64,
seed: Optional[int] = None,
itags: Optional[Sequence[str]] = None,
device: Optional[Union[str, device]] = None,
requires_grad: bool = False,
) -> Tensor
Create a tensor filled with random values for each admissible block.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| indices | Sequence[Index] | Sequence of Index objects defining the tensor structure. | required |
| dtype | dtype | Data type for the tensor blocks. | torch.float64 |
| seed | int | Random seed for reproducibility. | None |
| itags | Sequence[str] | Tags for each index (default: "init" for all). | None |
| device | str or device | Device to place tensors on (default: current default device). | None |
| requires_grad | bool | If True, enables gradient tracking for this tensor. | False |
Notes
Gradient tracking follows PyTorch's default behavior. Set requires_grad=True to enable autograd for this tensor. Use torch.no_grad() context to temporarily disable gradient computation during operations.
MPS (Apple Silicon) doesn't support float64/complex128. If creating on MPS with these dtypes, they will be automatically downgraded to float32/complex64.
For generic symmetry groups (e.g., SU2), intertwiners (intw) are automatically populated with Bridge objects containing default Clebsch-Gordan specifications.
from_scalar
classmethod
¶
from_scalar(
value: Union[int, float, complex],
dtype: dtype = torch.float64,
label: str = "Scalar",
device: Optional[Union[str, device]] = None,
requires_grad: bool = False,
) -> Tensor
Create a scalar (0D tensor) with a single value.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | int, float, or complex | Scalar value. | required |
| dtype | dtype | Data type. | torch.float64 |
| label | str | Label for the scalar. | 'Scalar' |
| device | str or device | Device to place the tensor on (default: current default device). | None |
| requires_grad | bool | If True, enables gradient tracking for this tensor. | False |
Notes
MPS (Apple Silicon) doesn't support float64/complex128. If creating on MPS with these dtypes, they will be automatically downgraded to float32/complex64.
rand_fill
¶
Fill all data blocks with random values in-place.
insert_index
¶
Insert a trivial index (neutral charge, dimension 1) at a specified position.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| position | int | Position where the new index should be inserted (0-indexed). Must be in range [0, len(self.indices)]. | required |
| direction | Direction | Direction for the new index (Direction.IN or Direction.OUT). | required |
| itag | Optional[str] | Optional tag for the new index. If None, uses "init". | None |
Notes
This operation modifies the tensor in-place by:
- Inserting a new index with a single sector (neutral charge, dimension 1)
- Adding a singleton dimension to all data blocks at the corresponding axis
- Updating block keys to include the neutral charge at the new position
The symmetry group for the new index is taken from the existing indices.
For non-Abelian groups (e.g. SU(2)), each intertwiner (Bridge) is updated
via Bridge.insert_edge, which inserts the neutral-charge edge and
applies the appropriate R-symbol so that the result is consistent with
a direct permutation of the new index to position. The OM dimension
is preserved exactly since the neutral irrep does not participate in coupling.
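The block-key bookkeeping behind this method is simple to illustrate: each key gains the neutral charge at the insertion position, mirroring the singleton axis added to the data block. A sketch on keys alone (neutral charge written as 0, an assumption appropriate for U(1)):

```python
def insert_neutral(key, position, neutral=0):
    """Return a new block key with the neutral charge inserted at `position`."""
    return key[:position] + (neutral,) + key[position:]

old_keys = [(0, 1), (1, 0)]
new_keys = [insert_neutral(k, 1) for k in old_keys]
# (0, 1) -> (0, 0, 1) and (1, 0) -> (1, 0, 0)
```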
trim_zero_blocks
¶
Remove blocks whose data is negligible relative to the tensor's overall scale.
This operation modifies the tensor in-place by:
- Removing blocks from self.data where max(abs(values)) < eps * norm
- For generic groups, also removing blocks where all weights are similarly negligible
- Updating each index to only include sectors that still have data in remaining blocks
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| eps | float or None | Relative tolerance. A block is considered zero when its maximum absolute value is less than eps times the tensor's Frobenius norm. | None |
Notes
Using the Frobenius norm as the scale makes the criterion fully relative: a block is trimmed only when it is negligible compared to the tensor as a whole, regardless of the absolute magnitude of individual entries.
If the tensor is identically zero (norm == 0) all blocks are removed.
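The trimming rule can be reproduced in a few lines of plain Python (flat lists stand in for dense blocks; the library operates on torch tensors):

```python
import math

def trim_zero_blocks(data, eps=1e-12):
    """Drop blocks whose max |entry| is below eps times the Frobenius norm."""
    norm = math.sqrt(sum(x * x for block in data.values() for x in block))
    if norm == 0.0:
        data.clear()  # identically zero tensor: remove everything
        return
    for key in [k for k, block in data.items()
                if max(abs(x) for x in block) < eps * norm]:
        del data[key]

blocks = {(0, 0): [1.0, 2.0], (1, 1): [1e-20, 0.0]}
trim_zero_blocks(blocks)
# Only the (0, 0) block survives; (1, 1) is negligible relative to the norm.
```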
normalize_sectors
¶
Remove sectors from each index that do not appear in any block.
This operation modifies the tensor in-place by updating self.indices
so that only sectors whose charges are referenced by at least one block
key are retained.
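The sector pruning amounts to collecting, per axis, the charges that actually appear in some block key. A plain-Python sketch of that bookkeeping (hypothetical helper; the method rebuilds Index objects):

```python
def surviving_charges(keys, rank):
    """Per-axis sorted lists of charges referenced by at least one block key."""
    return [sorted({key[axis] for key in keys}) for axis in range(rank)]

keys = [(0, 0), (1, 1)]
# If axis 0 originally carried sectors {0, 1, 2}, charge 2 appears in no key
# and its sector would be dropped from that index.
print(surviving_charges(keys, rank=2))  # [[0, 1], [0, 1]]
```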
regularize
¶
Canonicalize (2nd order) or regularize (higher order) Bridge weights.
For a 2nd order non-Abelian tensor (SU(2) matrix), the reduced data R
and the Bridge weight W satisfy:
physical block = R × W
The method absorbs the deviation of each block's weight from the
canonical value sqrt(irrep_dim(q)) into R, so that after the
call the tensor uses the same Bridge-weight convention as
identity:
physical block = R_new × sqrt(irrep_dim(q))
Both branches use a row-normalization strategy, differing only in target:

- 2nd-order: By Schur's lemma om = 1, so each weight row is a single scalar W[i, 0]. The factor is absorbed into the corresponding data component so that the canonical positive value sqrt(irrep_dim(q)) is enforced:

      factor[i] = W[i, 0] / sqrt(irrep_dim(q))
      W_new[i, 0] = sqrt(irrep_dim(q))
      R_new[..., i] = R[..., i] * factor[i]

- Higher-order: each row is normalized to unit norm, with the norm absorbed into the data:

      norms[i] = ‖W[i, :]‖
      W_new[i, :] = W[i, :] / norms[i]
      R_new[..., i] = R[..., i] * norms[i]
After normalization, linearly dependent components are removed via
BlockSchema.block_compress: an SVD is applied to the (now well-scaled)
weight matrix and rows whose singular values fall below cutoff are
discarded. For 2nd-order tensors this step is always a no-op because
Schur's lemma forces om = 1 and therefore num_components == 1
should always hold per block.
Has no effect on Abelian tensors or tensors without an intertwiner.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| cutoff | float | Singular value threshold forwarded to BlockSchema.block_compress. | 1e-14 |
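The higher-order branch above is ordinary row normalization with the norm pushed into the data. A sketch with plain lists (one weight row and one matching reduced-data component per OM index; scalars stand in for the data slices):

```python
import math

def regularize_rows(W, R):
    """Normalize each weight row to unit norm, absorbing the norm into R."""
    W_new, R_new = [], []
    for row, r in zip(W, R):
        n = math.sqrt(sum(x * x for x in row))
        W_new.append([x / n for x in row])  # unit-norm weight row
        R_new.append(r * n)                 # norm absorbed into the data component
    return W_new, R_new

W = [[3.0, 4.0]]   # one OM row with norm 5
R = [2.0]          # matching reduced-data component
W_new, R_new = regularize_rows(W, R)
# The per-row product R[i] * W[i, :] is unchanged by the rescaling.
```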
block
¶
Access the i-th block by integer index (1-indexed, matching display).
conj
¶
Complex-conjugate every dense block if the dtype is complex, and reverse all index directions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| in_place | bool | If True, modifies this tensor in-place and returns self. If False (default), returns a new Tensor instance with conjugated data (as views for complex dtypes) and flipped directions. The underlying torch tensors are not cloned: torch.conj() returns a view for complex dtypes, and real dtypes share the same tensors. | False |

Returns:

| Type | Description |
|---|---|
| Tensor | Self if in_place=True, new Tensor instance if in_place=False. |
Examples:
>>> # Functional style (default, efficient with sharing)
>>> t2 = t1.conj()
>>> t2 is not t1 # Different Tensor instances
>>> # But for complex dtype, t2.data shares storage with t1.data (as conjugate views)
>>>
>>> # In-place style (allows chaining)
>>> result = t1.conj(in_place=True)
>>> result is t1 # Returns self for chaining
permute
¶
Permute tensor axes according to the provided reordering.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| order | Sequence[int] | Sequence of integer axes specifying the new ordering. Must be a permutation of range(len(self.indices)). | required |
| in_place | bool | If False (default), returns a new Tensor instance with permuted axes whose data blocks share the same underlying storage (torch.permute creates views). If True, modifies this tensor in-place and returns self. | False |

Returns:

| Type | Description |
|---|---|
| Tensor | Self if in_place=True, new Tensor instance if in_place=False. |
Notes
For non-Abelian (SU2) tensors, permutation involves R-symbols that transform the outer multiplicity (OM) indices. The weights are updated by matrix multiplication with the R-symbol: new_weights = old_weights @ R.
Examples:
>>> # Functional style (default, efficient with sharing)
>>> t2 = t.permute([2, 0, 1])
>>> t2 is not t # Different Tensor instances
>>> # But t2.data blocks share storage with t.data (as permuted views)
>>>
>>> # In-place style (allows chaining)
>>> result = t.permute([2, 0, 1], in_place=True)
>>> result is t # Returns self for chaining
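Alongside permuting each dense block, the block keys themselves must be reordered with the same permutation. A plain-Python sketch of that key update:

```python
def permute_key(key, order):
    """Reorder a block key to match a permutation of the tensor axes."""
    assert sorted(order) == list(range(len(key))), "order must be a permutation"
    return tuple(key[axis] for axis in order)

print(permute_key((7, 8, 9), [2, 0, 1]))  # (9, 7, 8)
```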
transpose
¶
Transpose tensor axes by reversing the index order.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| in_place | bool | If False (default), returns a new Tensor instance with reversed axes. If True, modifies this tensor in-place and returns self. | False |

Returns:

| Type | Description |
|---|---|
| Tensor | New Tensor instance if in_place=False, self if in_place=True. |
retag
¶
retag(
mapping_or_axes: Union[
Mapping[str, str], Sequence[str], int, Sequence[int]
],
new_tags: Optional[Union[str, Sequence[str]]] = None,
) -> None
Retag indices using one of three modes.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mapping_or_axes | Union[Mapping[str, str], Sequence[str], int, Sequence[int]] | One of: a mapping from old tag to new tag (update specific tags by name); a sequence of strings replacing all tags; or an integer / sequence of integers selecting positions to retag together with new_tags. | required |
| new_tags | Optional[Union[str, Sequence[str]]] | New tag(s) to use when mapping_or_axes is an integer or sequence of integers. Can be a single string or a sequence of strings. Must match the length of mapping_or_axes. | None |
Examples:
>>> # Mode 1: Mapping (update specific tags by name)
>>> tensor.retag({"a": "left", "b": "right"})
>>>
>>> # Mode 2: Full replacement (replace all tags)
>>> tensor.retag(["left", "middle", "right"])
>>>
>>> # Mode 3: Selective update by position
>>> tensor.retag([0, 2], ["left", "right"])
>>> tensor.retag(0, "left") # Single index and tag
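The three dispatch modes can be mimicked on a plain tuple of tags (hypothetical standalone function returning new tags; the method mutates self.itags):

```python
def retag(itags, mapping_or_axes, new_tags=None):
    """Return updated tags, mirroring the three retag dispatch modes."""
    tags = list(itags)
    if isinstance(mapping_or_axes, dict):      # mode 1: rename by old tag
        tags = [mapping_or_axes.get(t, t) for t in tags]
    elif new_tags is None:                     # mode 2: replace all tags
        assert len(mapping_or_axes) == len(tags)
        tags = list(mapping_or_axes)
    else:                                      # mode 3: update by position
        axes = [mapping_or_axes] if isinstance(mapping_or_axes, int) else mapping_or_axes
        new = [new_tags] if isinstance(new_tags, str) else new_tags
        for axis, tag in zip(axes, new):
            tags[axis] = tag
    return tuple(tags)

tags = ("a", "b", "c")
print(retag(tags, {"a": "left"}))        # ('left', 'b', 'c')
print(retag(tags, ["l", "m", "r"]))      # ('l', 'm', 'r')
print(retag(tags, [0, 2], ["L", "R"]))   # ('L', 'b', 'R')
```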
invert
¶
Invert the direction of specified index/indices while maintaining charge conservation.
This operation inverts the direction(s) and conjugates the charge(s) using Index.dual(), effectively inverting the tensor's index structure at the specified positions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| positions | Union[int, Sequence[int]] | Index position(s) to invert. Can be a single int or a sequence of ints. Positions are 0-indexed. | required |
Warnings
Use with extreme caution! This method is supposed to work in isolation.
For inverting a bond between two tensors, use capcup instead, which
applies the necessary Frobenius–Schur phase for SU(2).
Notes
This operation uses Index.dual() to invert both the direction and conjugate the charges, ensuring charge conservation is maintained. Both the index metadata and the block keys are updated to reflect the conjugated charges. The tensor data arrays themselves remain unchanged.
This differs from Index.flip() which only reverses direction without conjugating charges. The tensor invert operation performs a complete inversion of the index structure (direction + charge conjugation).
For non-Abelian groups (e.g. SU(2)), the intertwiner (Bridge) at each affected block has its edge directions inverted at the corresponding positions without any additional phase factor, ensuring that two successive calls to invert() with the same positions restore the original tensor exactly.
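For an Abelian group such as U(1), the block-key update is plain charge negation at the chosen positions, and two applications restore the original key, matching the involution property noted above. A sketch (the library delegates the conjugation to Index.dual()):

```python
def invert_key(key, positions):
    """Conjugate (negate, for U(1)) the charges at the given axis positions."""
    pos = {positions} if isinstance(positions, int) else set(positions)
    return tuple(-q if axis in pos else q for axis, q in enumerate(key))

key = (1, -2, 0)
once = invert_key(key, [0, 1])
twice = invert_key(once, [0, 1])
# Two successive inversions restore the original key.
```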
to
¶
Move tensor to specified device.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| device | str or device | Target device ('cpu', 'cuda', 'mps', etc.). | required |

Returns:

| Type | Description |
|---|---|
| Tensor | New tensor on the specified device. |
Notes
MPS (Apple Silicon) doesn't support float64/complex128. If moving a tensor with these dtypes to MPS, they will be automatically downgraded to float32/complex64.
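The MPS downgrade described above amounts to a dtype substitution applied before blocks are moved. A sketch with dtype names as strings (the assumed mapping mirrors the note; the library performs this with torch dtypes):

```python
# Assumed mapping: MPS lacks 64-bit float/complex support.
MPS_DOWNGRADE = {"float64": "float32", "complex128": "complex64"}

def effective_dtype(dtype, device):
    """Return the dtype actually used after moving blocks to `device`."""
    if device == "mps":
        return MPS_DOWNGRADE.get(dtype, dtype)  # downgrade only the unsupported dtypes
    return dtype

print(effective_dtype("float64", "mps"))   # float32
print(effective_dtype("float64", "cuda"))  # float64
```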
backward
¶
Compute gradients by backpropagating through the computational graph.
This method can only be called on scalar tensors (0D tensors). It calls the backward() method on the underlying PyTorch tensor to compute gradients for all tensors in the computational graph that have requires_grad=True.
Raises:

| Type | Description |
|---|---|
| ValueError | If the tensor is not a scalar (has more than 0 dimensions). |
Examples:
>>> # Create tensors with gradient tracking
>>> t = Tensor.random(indices, requires_grad=True)
>>>
>>> # Perform operations
>>> loss = contract(t, t, ...) # Some operation resulting in a scalar
>>>
>>> # Compute gradients
>>> loss.backward()
>>>
>>> # Access gradients from underlying PyTorch tensors
>>> for block in t.data.values():
... print(block.grad)
Description¶
The Tensor class is the core data structure in Nicole, representing block-sparse tensors backed by symmetry-aware indices. Each tensor stores a collection of dense PyTorch tensor blocks, where each block corresponds to a specific combination of charges that satisfies charge conservation rules.
Key Features¶
- Block-sparse storage: Only admissible blocks are stored
- Automatic charge conservation: Selection rules enforced by structure
- PyTorch-backed blocks: Dense operations within each symmetry sector
- Device management: CPU and GPU (CUDA/MPS) support
- Autograd control: Optional gradient tracking
- Immutable indices: Index structure fixed at creation
See Also¶
- Index: Define tensor index structure
- zeros: Create zero tensor
- random: Create random tensor
- Examples: Creating Tensors
- Examples: Arithmetic
Notes¶
Tensors are mutable objects. Use clone() when independence is needed. For functional (non-mutating) operations, see the operators module.
Charge conservation is enforced: ∑(OUT charges) - ∑(IN charges) = neutral element.