torch outside the box


Sometimes, a piece of software’s best feature is the one you’ve added yourself. This post shows, by example, why you may want to extend torch, and how to proceed. It also explains a bit of what is going on in the background.

Sigrid Keydana (RStudio), https://www.rstudio.com/
04-27-2022

For better or worse, we live in an ever-changing world. Focusing on the better, one salient example is the abundance, as well as rapid evolution of software that helps us achieve our goals. With that blessing comes a challenge, though. We need to be able to actually use those new features, install that new library, integrate that novel technique into our package.

With torch, there’s so much we can accomplish as-is, only a tiny fraction of which has been hinted at on this blog. But if there’s one thing to be sure about, it’s that there never, ever will be a lack of demand for more things to do. Here are three scenarios that come to mind:

  1. Load a pre-trained TorchVision model that makes use of specialized, non-standard operators.

  2. Implement a custom module of your own, possibly backed by custom C++ code.

  3. Interface to a PyTorch extension built in, or on, C++ code.

This post will illustrate each of these use cases in order. From a practical point of view, this constitutes a gradual move from a user’s to a developer’s perspective. But behind the scenes, it’s really the same building blocks powering them all.

Enablers: torchexport and TorchScript

The R package torchexport and (PyTorch-side) TorchScript operate on very different scales, and play very different roles. Nevertheless, both of them are important in this context, and I’d even say that the “smaller-scale” actor (torchexport) is the truly essential component, from an R user’s point of view. In part, that’s because it figures in all of the three scenarios, while TorchScript is involved only in the first.

torchexport: Manages the “type stack” and takes care of errors

In R torch, the depth of the “type stack” is dizzying. User-facing code is written in R; the low-level functionality is packaged in libtorch, a C++ shared library relied upon by torch as well as PyTorch. The mediator, as is so often the case, is Rcpp. However, that is not where the story ends. Due to OS-specific compiler incompatibilities, there has to be an additional, intermediate, bidirectionally-acting layer that strips all C++ types on one side of the bridge (Rcpp or libtorch, resp.), leaving just raw memory pointers, and adds them back on the other.1 In the end, what results is a pretty involved call stack. As you could imagine, there is an accompanying need for carefully-placed, level-adequate error handling, making sure the user is presented with usable information at the end.

Now, what holds for torch applies to every R-side extension that adds custom code, or calls external C++ libraries. This is where torchexport comes in. As an extension author, all you need to do is write a tiny fraction of the code required overall – the rest will be generated by torchexport. We’ll come back to this in scenarios two and three.

TorchScript: Allows for code generation “on the fly”

We’ve already encountered TorchScript in a prior post, albeit from a different angle, and highlighting a different set of terms. In that post, we showed how you can train a model in R and trace it, resulting in an intermediate, optimized representation that may then be saved and loaded in a different (possibly R-less) environment. There, the conceptual focus was on the agent enabling this workflow: the PyTorch Just-in-time Compiler (JIT), which generates the representation in question. We quickly mentioned that on the Python side, there is another way to invoke the JIT: not on an instantiated, “living” model, but directly on the code that defines the model. It is that second way, accordingly named scripting, that is relevant in the current context.
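For a quick refresher, here is what the tracing workflow looks like from R. This is a minimal sketch; the toy module, example input, and file name are made up for illustration.

library(torch)

# A toy module. jit_trace() runs it once on the example input, records the
# operations executed, and returns an optimized script module.
net <- nn_linear(4, 1)
traced <- jit_trace(net, torch_randn(1, 4))

# The traced module can be saved, and later re-loaded, possibly from a
# different (even R-less) environment.
jit_save(traced, "linear.pt")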

Even though scripting is not available from R (unless the scripted code is written in Python2), we still benefit from its existence. When Python-side extension libraries use TorchScript (instead of normal C++ code), we don’t need to add bindings to the respective functions on the R (C++) side. Instead, everything is taken care of by PyTorch.

This – although completely transparent to the user – is what enables scenario one. In (Python) TorchVision, the pre-trained models provided will often make use of (model-dependent) special operators. Thanks to their having been scripted, we don’t need to add a binding for each operator, let alone re-implement them on the R side.

Having outlined some of the underlying functionality, we now present the scenarios themselves.

Scenario one: Load a TorchVision pre-trained model

Perhaps you’ve already used one of the pre-trained models made available by TorchVision: a subset of these has been manually ported to torchvision, the R package. But there are more of them – a lot more. Many use specialized operators – ones seldom needed outside of some algorithm’s context. There would appear to be little use in creating R wrappers for those operators. And of course, the continual appearance of new models would require continual porting efforts, on our side.

Luckily, there is an elegant and effective solution. All the necessary infrastructure is set up by the lean, dedicated-purpose package torchvisionlib. (It can afford to be lean due to the Python side’s liberal use of TorchScript, as explained in the previous section. But to the user – whose perspective I’m taking in this scenario – these details do not need to matter.)
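In case you don’t have it yet, installation is the usual one-liner. (This assumes you’re after the released version; a development version may be obtainable from GitHub.)

install.packages("torchvisionlib")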

Once you’ve installed and loaded torchvisionlib, you have the choice among an impressive number of image recognition-related models. The process, then, is two-fold:

  1. You instantiate the model in Python, script it, and save it.

  2. You load and use the model in R.

Here is the first step. Note how, before scripting, we put the model into eval mode, thereby making sure all layers exhibit inference-time behavior.

import torch
import torchvision

model = torchvision.models.segmentation.fcn_resnet50(pretrained = True)
model.eval()

scripted_model = torch.jit.script(model)
torch.jit.save(scripted_model, "fcn_resnet50.pt")

The second step is even shorter: Loading the model into R requires a single line.

library(torchvisionlib)

model <- torch::jit_load("fcn_resnet50.pt")

At this point, you can use the model to obtain predictions, or even integrate it as a building block into a larger architecture.
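To make that concrete, here is a minimal sketch of calling the loaded model. The input is a random stand-in for a real (normalized) image, and the assumption that predictions come back in a field named out mirrors how fcn_resnet50 behaves on the Python side.

library(torch)

# A dummy batch holding a single three-channel, 224 x 224 "image".
# In practice, you'd load and normalize a real image instead.
input <- torch_randn(1, 3, 224, 224)

# fcn_resnet50 is a segmentation model; per our assumption, its per-class
# scores are available under the name "out".
preds <- model(input)
dim(preds$out)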

Scenario two: Implement a custom module

Wouldn’t it be wonderful if every new, well-received algorithm, every promising novel variant of a layer type, or – better still – the algorithm you have in mind to reveal to the world in your next paper was already implemented in torch?

Well, maybe; but maybe not. The far more sustainable solution is to make it reasonably easy to extend torch in small, dedicated packages that each serve a clear-cut purpose, and are fast to install. A detailed and practical walkthrough of the process is provided by the package lltm. This package has a recursive touch to it: it is, at the same time, an instance of a C++ torch extension and a tutorial showing how to create such an extension.

The README itself explains how the code should be structured, and why. If you’re interested in how torch itself has been designed, this is an elucidating read, regardless of whether or not you plan on writing an extension. In addition to that kind of behind-the-scenes information, the README has step-by-step instructions on how to proceed in practice. In line with the package’s purpose, the source code, too, is richly documented.

As already hinted at in the “Enablers” section, the reason I dare write “make it reasonably easy” (referring to creating a torch extension) is torchexport, the package that auto-generates conversion-related and error-handling C++ code on several layers in the “type stack”. Typically, you’ll find the amount of auto-generated code significantly exceeds that of the code you wrote yourself.

Scenario three: Interface to PyTorch extensions built in/on C++ code

It is anything but unlikely that, some day, you’ll come across a PyTorch extension that you wish were available in R. If that extension were written exclusively in Python, you’d translate it to R “by hand”, making use of whatever applicable functionality torch provides. Sometimes, though, that extension will contain a mixture of Python and C++ code. Then, you’ll need to bind to the low-level, C++ functionality in a manner analogous to how torch binds to libtorch – and now, all the typing requirements described above will apply to your extension in just the same way.

Again, it’s torchexport that comes to the rescue. And here, too, the lltm README still applies; it’s just that in lieu of writing your custom code, you’ll add bindings to externally-provided C++ functions. That done, you’ll have torchexport create all required infrastructure code.

A template of sorts can be found in the torchsparse package (currently under development). The functions in csrc/src/torchsparse.cpp all call into PyTorch Sparse, with function declarations found in that project’s csrc/sparse.h.

Once you’re integrating with external C++ code in this way, an additional question may pose itself. Take an example from torchsparse. In the header file, you’ll notice return types such as std::tuple<torch::Tensor, torch::Tensor>, std::tuple<torch::Tensor, torch::Tensor, torch::optional<torch::Tensor>, torch::Tensor>, and more. In R torch (the C++ layer) we have torch::Tensor, and we have torch::optional<torch::Tensor>, as well. But we don’t have a custom type for every possible std::tuple you could construct. Just as having base torch provide all kinds of specialized, domain-specific functionality is not sustainable, it makes little sense for it to try to foresee all kinds of types that will ever be in demand.3

Accordingly, types should be defined in the packages that need them. How exactly to do this is explained in the torchexport Custom Types vignette. When such a custom type is being used, torchexport needs to be told how the generated types, on various levels, should be named. This is why in such cases, instead of a terse // [[torch::export]], you’ll see lines like // [[torch::export(register_types=c("tensor_pair", "TensorPair", "void*", "torchsparse::tensor_pair"))]]. The vignette explains this in detail.

What’s next

“What’s next” is a common way to end a post, replacing, say, “Conclusion” or “Wrapping up”. But here, it’s to be taken quite literally. We will do our best to make using, interfacing to, and extending torch as effortless as possible. Therefore, please let us know about any difficulties you’re facing, or problems you run into. Just create an issue in torchexport, lltm, torch, or whatever repository seems applicable.

As always, thanks for reading!

Photo by Antonino Visalli on Unsplash


  1. For an architecture overview of torch and torch extensions, cf. the README of lltm, a “tutorial package” created to demonstrate the process of extending torch. We’ll refer to that package explicitly in scenario two.↩︎

  2. Cf. the TorchScript vignette if you are interested in doing this.↩︎

  3. Maybe you’re wondering why just working with std::tuple will not do. It would, in principle, if there weren’t the requirement of an intermediate stage devoid of all type information. But having to have such a layer means having to be able to convert between typed objects and pointers, in both directions, and for that, all types need to be explicitly addressable.↩︎

Reuse

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

Citation

For attribution, please cite this work as

Keydana (2022, April 27). Posit AI Blog: torch outside the box. Retrieved from https://blogs.rstudio.com/tensorflow/posts/2022-04-27-torch-outside-the-box/

BibTeX citation

@misc{keydanatorchoutbox,
  author = {Keydana, Sigrid},
  title = {Posit AI Blog: torch outside the box},
  url = {https://blogs.rstudio.com/tensorflow/posts/2022-04-27-torch-outside-the-box/},
  year = {2022}
}