Archive (successor: aioway)
Koila has now been built into a subcomponent of aioway, check it out!
Project koila has 3 main components:

1. Metadata tracking for `torch.Tensor`s.
2. Decoupling some symbolic info (batch size) to run a reduced graph.
3. Running with gradient accumulation to prevent OOM.
For 1.: PyTorch now officially has FakeTensor (koila predates it). It has great compatibility and support for torch operators, something koila was never able to achieve.
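To illustrate what that looks like today, here is a small sketch using PyTorch's FakeTensor. Note that `FakeTensorMode` lives in a private module (`torch._subclasses`) as of recent releases, so the import path and behavior may change.

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

# Inside the mode, tensors carry shapes/dtypes/devices but no real storage,
# so operator metadata can be tracked without allocating GPU memory.
with FakeTensorMode():
    x = torch.empty(8, 28, 28)
    w = torch.empty(28 * 28, 512)
    y = x.flatten(1) @ w
    print(y.shape)  # torch.Size([8, 512]), computed from metadata only
```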
For 2.: Koila only tracks symbolic info partially, on the batch dimension. I now have something a lot better: aioway, a compiler / interpreter for deep learning that handles all of this info without the burden of torch compatibility. Since I plan to rewrite this tracking part in C++, and koila is not going to have a native API, I don't want to cross the repository boundary, so this part will be rewritten in aioway.
For 3.: Koila requests batches iteratively, adhering to the torch API and working around some layers that conflict with gradient accumulation.
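For context, this is roughly what gradient accumulation looks like when written by hand in plain PyTorch. It is a generic sketch of the technique koila automates, not koila's internals; the helper name and arguments are made up for illustration.

```python
import torch

def accumulate_gradients(model, loss_fn, inputs, labels, micro_batch):
    # Split the oversized batch along dim 0 and backpropagate chunk by chunk,
    # so only one micro-batch worth of activations lives in memory at a time.
    model.zero_grad()
    total = inputs.size(0)
    for start in range(0, total, micro_batch):
        x = inputs[start:start + micro_batch]
        y = labels[start:start + micro_batch]
        # Scale so the accumulated gradient matches the full-batch mean loss.
        loss = loss_fn(model(x), y) * x.size(0) / total
        loss.backward()  # gradients add up across chunks
```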
Now:

- `FakeTensor` is available (it wasn't before), which supports all of `torch`.
- Aioway is in development (soon to be open source), and there is stuff that Aioway does but koila cannot (how do you go one level up and work on layers, when your API is simply a `Tensor` input?).
- Torch has too many operators, and this can (kind of?) be achieved with other measures, such as `try`/`except` with binary search, which is super simple, low maintenance, and not that slow (see the sketch after this list).
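To make that last point concrete, here is a minimal sketch of the try/except-plus-binary-search idea in plain PyTorch. The helper is hypothetical (not part of koila or torch) and assumes `step` runs one full forward/backward pass on the slice it is given.

```python
import torch

def largest_fitting_batch(step, data, low=1):
    """Binary-search the largest batch size whose forward/backward pass fits in GPU memory."""
    best, high = 0, len(data)
    while low <= high:
        mid = (low + high) // 2
        try:
            step(data[:mid])          # attempt one training step on `mid` samples
            best, low = mid, mid + 1  # it fits: try larger batches
        except RuntimeError as err:   # CUDA OOM surfaces as a RuntimeError
            if "out of memory" not in str(err):
                raise
            torch.cuda.empty_cache()  # release cached blocks before retrying
            high = mid - 1            # it doesn't fit: try smaller batches
    return best
```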
I intend to keep koila as it is: a POC that I did for fun.
Koila solves the `CUDA error: out of memory` error painlessly. Fix it with just one line of code, and forget about it.
The main branch is a complete restructure of the project (currently mostly empty because I haven't had enough time to complete it). To see working code, check out the v0.1.1 tag for a proof of concept (which doesn't have full support for all operations and is not suited for production). To use it, download release v0.1.1 here.
- Prevents `CUDA error: out of memory` error with one single line of code.
- Without touching the main logic.
- Automatically accumulates gradients when batch sizes are too large.
- Lazily evaluates PyTorch code to save computing power.
- Automatically splits along the batch dimension to more GPU-friendly numbers (powers of 2) to speed up the execution.
- Minimal API (wrapping all inputs will be enough).
Ever encountered RuntimeError: CUDA error: out of memory?
We all love PyTorch because of its speed, efficiency, and transparency, but that also means it doesn't do extra things, like preventing a very common error that has been bothering many users since 2017.
This library aims to prevent that by being a light-weight wrapper over native PyTorch. When a tensor is wrapped, the library automatically computes the amount of remaining GPU memory and uses the right batch size, saving everyone from having to manually fine-tune the batch size whenever a model is used.
The library also automatically matches the batch size to the GPU. Did you know that using bigger batches doesn't always speed up processing? This is handled automatically in this library too.
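As a rough illustration of picking a GPU-friendly batch size from free memory, here is a hedged sketch. The helper and its `bytes_per_sample` estimate are hypothetical and not Koila's actual implementation; it assumes a CUDA device and a PyTorch version that provides `torch.cuda.mem_get_info`.

```python
import torch

def pick_batch_size(bytes_per_sample, headroom=0.8):
    # Query free memory on the current CUDA device (in bytes).
    free, _total = torch.cuda.mem_get_info()
    max_samples = max(int(free * headroom) // bytes_per_sample, 1)
    # Round down to a power of two, which tends to be more GPU friendly.
    return 1 << (max_samples.bit_length() - 1)
```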
Because Koila code is PyTorch code (it runs PyTorch under the hood), you can use both together without worrying about compatibility.
Oh, and all that in 1 line of code!
Koila is available on PyPI. To install, run the following command.

```bash
pip install koila
```

The usage is dead simple. For example, suppose you have the following PyTorch code (copied from PyTorch's tutorial).
Define the input, label, and model:

```python
import torch
from torch.nn import CrossEntropyLoss, Flatten, Linear, Module, ReLU, Sequential

# A batch of MNIST images
input = torch.randn(8, 28, 28)

# A batch of labels
label = torch.randint(0, 10, [8])


class NeuralNetwork(Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = Flatten()
        self.linear_relu_stack = Sequential(
            Linear(28 * 28, 512),
            ReLU(),
            Linear(512, 512),
            ReLU(),
            Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits
```

Define the loss function, and calculate the output and losses.
```python
# Instantiate the model and the loss function
nn = NeuralNetwork()
loss_fn = CrossEntropyLoss()

# Calculate losses
out = nn(input)
loss = loss_fn(out, label)

# Backward pass
nn.zero_grad()
loss.backward()
```

Ok. How do you adapt the code to use Koila's features?
You add this line of code (as of v0.1.1):

```python
from koila import lazy

# Wrap the input tensor and label tensor.
# If a batch argument is provided, that dimension of the tensor will be treated as the batch.
# In this case, the first dimension (dim=0) is used as the batch dimension.
(input, label) = lazy(input, label, batch=0)
```

Done. You will not run out of memory again.
`CUDA error: out of memory` generally happens in the forward pass, because temporary variables need to be saved in memory.
Koila is a thin wrapper around PyTorch. It is inspired by TensorFlow's static/lazy evaluation. By building the graph first and running the model only when necessary, the model has access to all the information it needs to determine how many resources are really required to compute the model.
In terms of memory usage, only the shapes of temporary variables are required to calculate the memory usage of those variables used in the model. For example, `+` takes in two tensors with equal sizes and outputs a tensor with a size equal to the input size, and `log` takes in one tensor and outputs another tensor with the same shape. Broadcasting makes it a little more complicated than that, but the general idea is the same. By tracking all these shapes, one can easily tell how much memory is used in a forward pass, and select the optimal batch size accordingly.
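To make the shape bookkeeping concrete, here is a small self-contained sketch (not Koila code) that estimates the memory used by the temporaries in the `+` / `log` example above from shapes alone.

```python
import math

def broadcast_shape(a, b):
    # Pad the shorter shape with leading 1s, then apply broadcasting rules.
    a, b = tuple(a), tuple(b)
    length = max(len(a), len(b))
    a = (1,) * (length - len(a)) + a
    b = (1,) * (length - len(b)) + b
    out = []
    for x, y in zip(a, b):
        if x != y and 1 not in (x, y):
            raise ValueError(f"shapes {a} and {b} do not broadcast")
        out.append(max(x, y))
    return tuple(out)

def tensor_bytes(shape, dtype_size=4):
    # float32 takes 4 bytes per element.
    return math.prod(shape) * dtype_size

x_shape, y_shape = (8, 512), (512,)
add_shape = broadcast_shape(x_shape, y_shape)  # `+` keeps the broadcast shape: (8, 512)
log_shape = add_shape                          # `log` is elementwise: same shape
print(tensor_bytes(add_shape) + tensor_bytes(log_shape))  # bytes needed by the two temporaries
```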
Is it slow? NO. Indeed, calculating shapes and computing the size and memory usage sound like a lot of work. However, keep in mind that even a gigantic model like GPT-3, which has 96 layers, has only a few hundred nodes in its computing graph. Because Koila's algorithms run in linear time, any modern computer will be able to handle a graph like this instantly.
Most of the computing is spent on computing individual tensors, and transferring tensors across devices. And bear in mind that those checks happen in vanilla PyTorch anyways. So no, not slow at all.
This project was originally named koala, after the laziest species in the world, since this project is about lazy evaluation of tensors. However, as that name is taken on PyPI, I had no choice but to use another name. Koila is a word I made up, pronounced similarly to voila (a French word), so it sounds like koala.
If you like what you see, please consider giving this a star!
Why did I go through the trouble of building this project, despite there being a lot of similar libraries on the internet?
Batch size search is not new. In fact, the mighty popular Lightning has it.
Lightning's batch size search is deeply integrated into its own ecosystem. You have to use its DataLoader, subclass its models, and train your models accordingly. While refactoring supervised learning tasks to use Lightning is relatively easy, it's really painful to do the same with a reinforcement learning code base, where interacting with the environment is a must.
In comparison, because Koila is a super lightweight PyTorch wrapper, it works when PyTorch works, thus providing maximum flexibility and minimal changes to existing code.
However, note that in the case where you're writing new code, Lightning is recommended as it enforces a better pattern of code style, which would benefit modularity in the long run.
Likewise, passing an empty tensor to build a computational graph (AKA a static graph) isn't a new idea; it has been thoroughly explored in the popular TensorFlow library and in KeOps, a similar PyTorch wrapper library. These libraries suffer from the fact that debugging programs in them is unnecessarily complicated. For example, TensorFlow was known for its ease of deployment but pain in development, to the point that users switched to PyTorch. During debugging, people like to see what's inside a variable, to check whether it contains an incorrect value. However, because static graphs only define relations, the values are not computed, which makes debugging difficult.
Koila solves that by eagerly evaluating whenever a tensor is converted to a string, an integer, or any other Python value. This enables seamless debugging while maintaining the ability to perform memory management that simply isn't available to a more straightforward PyTorch program, which dynamically allocates and frees memory on the fly (when needed).
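As a small, hedged example of that behavior, using the v0.1.1 `lazy` wrapper from the usage section above (whether every operator here is covered in v0.1.1 may vary):

```python
import torch
from koila import lazy

# Wrap two tensors as shown in the usage section; nothing is computed yet.
(a, b) = lazy(torch.randn(8, 32), torch.randn(8, 32), batch=0)
c = (a + b).sum()  # only builds the lazy graph
print(float(c))    # converting to a Python float forces evaluation, so debugging stays easy
```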
- Simplify internal workings even further (especially the interaction between `Tensor`s and `LazyTensor`s).
- Provide an extensible API to write custom functions for the users.
- Work with multiple GPUs.
The code works in many cases, but it's still a work in progress. This is not (yet) a fully PyTorch-compatible library due to limited time. Avoid using it in production environments!
Openness and inclusiveness are taken very seriously. The code is available under the Apache License. Please follow the Code of Conduct.