We introduce GuiTeNet, a graphical user interface for constructing arbitrary tensor networks and specifying common operations on them, like contractions or splitting. Tensors are represented as nodes with attached legs, corresponding to the ordered dimensions of the tensor. GuiTeNet visualizes the current network and instantly generates Python/NumPy source code for the sequence of user actions performed so far. Support for additional programming languages is planned for the future. We discuss the elementary operations on tensor networks used by GuiTeNet, together with high-level optimization strategies. The software runs directly in web browsers and is available online at

Tensor networks have found a wide range of applications within mathematics [

The GUI represents each tensor as a node with an arbitrary number of legs, corresponding to the number of dimensions (rank) of the tensor. The ordering of the dimensions is indicated by labels. Figure

A single tensor with 4 legs (dimensions). The ordering of dimensions is indicated by the red labels.

The user interacts with the GUI mainly via drag-and-drop gestures: to add tensors to the network, to attach legs to a tensor, and to specify operations like contractions and tensor splitting; see below for more details. The GuiTeNet framework visualizes the current tensor network and simultaneously generates source code which implements the sequence of user actions performed so far. For example, the generated Python code for a contraction of three tensors followed by QR splitting reads:
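The referenced listing did not survive extraction. A sketch of what such generated code could look like, with hypothetical tensor shapes and variable names (illustrative, not the verbatim GuiTeNet output):

```python
import numpy as np

# three input tensors (hypothetical shapes; entries would be supplied by the user)
T0 = np.random.rand(2, 3, 4)
T1 = np.random.rand(2, 5)
T2 = np.random.rand(3, 6)

# contract all three tensors in one einsum call:
# labels 0 and 1 are summed over, labels 2, 3, 4 remain
T3 = np.einsum(T0, (0, 1, 2), T1, (0, 3), T2, (1, 4), (2, 3, 4))

# QR splitting of T3 after its first two dimensions:
# matricize, factorize, then restore the tensor shape of the Q factor
A = T3.reshape((4 * 5, 6))
Q, R = np.linalg.qr(A)
T4 = Q.reshape((4, 5, -1))
```

Contracting `T4` with `R` over the new bond dimension recovers `T3`.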

A new tensor is added to the network by a drag-and-drop gesture. The user drags a special “create tensor” symbol (blue circle in Figure

Creating a new tensor by a drag-and-drop gesture. The mouse pointer is enlarged for visual clarity.

Each leg represents one dimension of the tensor. The user creates a new leg by “pulling” it out of the tensor (i.e., drag-and-drop on the tensor) while holding the Control key. Each tensor and its legs can still be freely moved around within the GUI window.

Tensor contractions are specified by connecting the tips of tensor legs. The tips snap to each other when brought into close contact. The actual contraction (possibly of several tensors) is executed when pressing the “Contract” button of the GUI, see Figure

Illustration of an elementary contraction operation.

The splitting of a tensor by QR or singular value decomposition (SVD) is a ubiquitous operation in tensor network algorithms, in particular for reducing “bond dimensions” by imposing a singular value cut-off tolerance, and a prerequisite for working with left- and right-orthogonal tensors in the MPS framework [

QR splitting of a tensor. In

The initial reordering of dimensions becomes a separate “elementary transposition operation”, as described below. The generated code uses a temporary tensor for this purpose. In Figure

After this reordering, the partitioning is simply a reinterpretation of the data stored in the tensor, since the “row” group now consists of the first
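In NumPy this reinterpretation indeed involves no data movement: reshaping a contiguous array returns a view onto the same memory. A minimal sketch, with hypothetical shapes:

```python
import numpy as np

T = np.random.rand(3, 4, 5)

# group the first two dimensions into a single "row" dimension
A = T.reshape((3 * 4, 5))

# no data is copied: A is merely a view onto the memory of T
assert A.base is T
```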

Somewhat analogous to an intermediate representation in source code compilation, we decompose the actions supported by the GUI into the following elementary operations on tensor networks:

The GuiTeNet framework supports general contraction operations on a tensor network. An

On the other hand, a sequence of tensor network operations can be optimized by merging subsequent elementary contractions into a single elementary contraction. As a simple (toy model) illustration of why this might be useful, consider the contraction

To uniquely specify a contraction operation, we follow the interleaved calling convention of NumPy’s einsum function: the input is a sequence $(T_0, s_0, T_1, s_1, \dots, s_{\mathrm{out}})$. Here the $T_i$ are the tensors to be contracted, each $s_i$ is a tuple of integer labels for the dimensions of $T_i$, and $s_{\mathrm{out}}$ specifies the dimensions of the output tensor.

$s_0 = (0, 1, 2)$, $s_1 = (0, 1, 3)$, $s_2 = (0, 4)$, $s_3 = (4, 5)$ and $s_{\mathrm{out}} = (2, 3, 5)$.

Thus, the three dimensions of tensor $T_0$ are labeled 0, 1, 2, the three dimensions of tensor $T_1$ are labeled 0, 1, 3, etc. The dimensions labeled 0, 1 and 4 will be contracted since they appear multiple times, and the remaining dimensions are ordered as (2, 3, 5) in the output tensor. The generated Python source code follows exactly this scheme and reads explicitly
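The explicit listing appears to have been lost. With the index lists above, a call in NumPy’s “primitive” einsum form might look as follows (the tensor shapes are hypothetical; only the labels are fixed by the example):

```python
import numpy as np

T0 = np.random.rand(2, 3, 4)   # dimension labels (0, 1, 2)
T1 = np.random.rand(2, 3, 5)   # dimension labels (0, 1, 3)
T2 = np.random.rand(2, 6)      # dimension labels (0, 4)
T3 = np.random.rand(6, 7)      # dimension labels (4, 5)

# labels 0, 1 and 4 appear more than once and are contracted;
# the remaining dimensions form the output in the order (2, 3, 5)
T4 = np.einsum(T0, (0, 1, 2), T1, (0, 1, 3), T2, (0, 4), T3, (4, 5), (2, 3, 5))
assert T4.shape == (4, 5, 7)
```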

Formally, a tensor transposition is a permutation of dimensions, generalizing the usual transposition of matrices. For example, applying the permutation (1, 2, 0) to a 10 × 11 × 12 tensor yields an 11 × 12 × 10 tensor.

Specifying a transposition only requires designating the permutation of dimensions. We follow the convention of NumPy’s transpose function: dimension $i$ of the output tensor is dimension $\sigma(i)$ of the input tensor, where $\sigma$ denotes the permutation.
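A concrete sketch of this convention in NumPy, with shapes matching the example above:

```python
import numpy as np

T = np.random.rand(10, 11, 12)

# dimension i of the output is dimension perm[i] of the input
Tt = np.transpose(T, (1, 2, 0))
assert Tt.shape == (11, 12, 10)

# entry-wise: Tt[j, k, i] == T[i, j, k]
assert Tt[3, 4, 5] == T[5, 3, 4]
```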

Regarding transpositions as separate elementary operations (instead of, for example, as the first step of splitting a tensor) facilitates additional optimizations. A plausible scenario is integrating the transposition into a preceding contraction operation, or exploiting relations like $(A B^{T})^{T} = B A^{T}$ to avoid an explicit transposition.

The elementary QR decomposition considered here does not involve any reordering of dimensions. Thus, it is uniquely specified by the number of leading dimensions which form the “row” group of the matricization.

To illustrate, the generated Python code (up to renaming variables) for the elementary QR decomposition of a tensor
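The listing itself appears to have been lost. A sketch of such an elementary QR decomposition, with hypothetical shapes and the first two dimensions forming the “row” group (illustrative, not the verbatim GuiTeNet output):

```python
import numpy as np

T = np.random.rand(3, 4, 5)

# matricize: flatten the leading "row" dimensions
A = T.reshape((3 * 4, 5))

# reduced QR decomposition of the matricization
Q, R = np.linalg.qr(A)

# de-matricize the Q factor; the new bond dimension is Q.shape[1]
T_q = Q.reshape((3, 4, -1))
```

Contracting `T_q` with `R` over the bond dimension recovers `T`.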

The

The (de-)matricization process for an elementary singular-value decomposition (SVD) of a tensor is analogous to the elementary QR decomposition. The output now consists of three tensors, corresponding to the $U$, $S$ and $V$ factors of the matricized SVD.
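A sketch of such an elementary SVD splitting, including the optional singular value cut-off mentioned earlier (shapes and tolerance are illustrative, not GuiTeNet’s verbatim output):

```python
import numpy as np

T = np.random.rand(3, 4, 5)
A = T.reshape((3 * 4, 5))

# thin SVD of the matricization
U, S, Vh = np.linalg.svd(A, full_matrices=False)

# optional truncation: discard singular values below a relative tolerance
tol = 1e-12
keep = S > tol * S[0]
U, S, Vh = U[:, keep], S[keep], Vh[keep, :]

# de-matricize the U factor; S (a vector) and Vh remain matrices
T_u = U.reshape((3, 4, -1))
```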

Based on the elementary tensor network operations, several high-level optimization strategies are conceivable; these rely solely on the rank of each tensor rather than on the actual dimension sizes. The implementation of the following ideas is left for future work.

A natural representation for the sequence of user actions is a directed acyclic graph (DAG), storing an elementary operation or input tensor at each node. Such a representation clarifies dependencies, and makes it possible to determine which operations can be executed in parallel.
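A minimal sketch of such a DAG representation (a hypothetical data structure and helper, not GuiTeNet’s internal code):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                                        # "input", "contraction", "transposition", "qr", "svd"
    parents: list = field(default_factory=list)    # nodes this operation depends on

def level(node: Node) -> int:
    # depth in the DAG; any dependency strictly increases the level,
    # so nodes at the same level are independent and may run in parallel
    if not node.parents:
        return 0
    return 1 + max(level(p) for p in node.parents)

# example: contract two input tensors, then split the result
t0 = Node("input")
t1 = Node("input")
c = Node("contraction", [t0, t1])
q = Node("qr", [c])
```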

A more subtle optimization strategy tailored to tensor networks is the merging of subsequent contractions, i.e., if the tensor resulting from a contraction is immediately used in another contraction. A toy model example consists of merging
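The toy example referenced here was lost in extraction. A hedged illustration of the idea, for a chain of matrices with hypothetical shapes:

```python
import numpy as np

A = np.random.rand(4, 50)
B = np.random.rand(50, 60)
C = np.random.rand(60, 7)

# two sequential elementary contractions, fixing the evaluation order
out_seq = (A @ B) @ C

# merged into a single elementary contraction; one einsum call exposes
# the freedom to choose a cheaper contraction order (optimize=True)
out_merged = np.einsum(A, (0, 1), B, (1, 2), C, (2, 3), (0, 3), optimize=True)

assert np.allclose(out_seq, out_merged)
```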

Another optimization strategy is avoiding explicit transpositions (i.e., permutations of tensor dimensions) and aiming for an advantageous ordering of dimensions. As mentioned, the transposition of a tensor resulting from a contraction can be integrated into the contraction itself; alternatively, relations like $(A B^{T})^{T} = B A^{T}$ can render an explicit transposition unnecessary.

The software has been extensively tested by comparing the tensor contraction and splitting indices computed by the software with the expected output, and by running the generated Python code (tested with Python 2.7.15 and 3.6.5, NumPy 1.13). The software should run on any modern web browser with JavaScript enabled (tested with Firefox 78.0, Chrome 84.0, Microsoft Edge 44.19041.1, Apple Safari 13.1.2).

Operating system: Any; runs directly in a web browser.

Programming language: HTML and JavaScript, combined with the D3.js library (v5).

Dependencies: D3.js library (v5) (no installation necessary). The generated Python code requires NumPy (version 1.6.0 or higher).

Contributions: Lisa Sahlmann and Christian B. Mendl both designed and documented the software, and tested its functionality. Christian B. Mendl implemented the software and maintains the GitHub repository and associated web page.

Language: English

The canonical use case of the GuiTeNet software is code generation for tensor network operations. In its present form, the software framework is well suited to handle a relatively small number of tensors, but manually constructing a network with hundreds of tensors is cumbersome. Instead, generating code for subroutines or blocks inside loops is a plausible scenario for employing GuiTeNet in larger software projects. As a specific example, rather than instantiating all tensors of a matrix product state, the GuiTeNet framework could be used to generate a local contraction operation required during a left-right sweep over the chain.

We also want to point out the pedagogical value which GuiTeNet might offer, including the seamless transition from vectors and matrices to general tensors.

Still, there are many desirable features left for future work, including code generation for other programming languages and software libraries, the implementation of high-level optimization strategies as described above, or a timeline of previous network states (e.g., before a contraction) with associated

Technical support is available via the “issues” page of the GitHub repository, or by contacting one of the authors by email.

An interesting open question is how GuiTeNet could inspire or profit from software and hardware architectures tailored to tensor operations, like contractions beyond conventional BLAS routines [

During revision of this work we became aware of

CM would like to thank Lexing Ying for inspiring discussions.

The authors have no competing interests to declare.