From the course: PyTorch Essential Training: Deep Learning
Moving tensors between CPUs and GPUs
- [Instructor] A few important reasons exist for moving tensors between CPUs and GPUs. Let's explore them and see how to transfer data from the CPU to the GPU. By default, in PyTorch, all data lives on the CPU. If we are training a neural network that is huge, we prefer to use a GPU for faster training. For example, if we have high-dimensional tensors that represent images, their computations are intensive and take too much time if run on the CPU. So we need to transfer the data from the CPU to the GPU. Additionally, after training, the output tensors are produced on the GPU. Sometimes the output data requires post-processing, and some post-processing libraries don't support tensors and expect a NumPy array. NumPy supports only data on the CPU, so there is a need to move the data from the GPU back to the CPU. Luckily, tensors can be moved easily between the CPU and a GPU device with the torch .to() method. We can call this method in one of three ways. First way, tensor.cuda, or…
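The workflow described above can be sketched as follows. This is a minimal example, not code from the course; the tensor shapes and the doubling computation are illustrative choices. It falls back to the CPU when no GPU is present, so it runs anywhere:

```python
import torch

# By default, PyTorch creates tensors on the CPU.
x = torch.ones(3, 3)
print(x.device)  # cpu

# Pick the GPU if one is available, otherwise stay on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# .to() returns a copy of the tensor on the target device.
x_dev = x.to(device)

# After computing on the device, NumPy conversion requires CPU data,
# so move the result back before calling .numpy().
result = (x_dev * 2).to("cpu")
arr = result.numpy()
print(arr.shape)  # (3, 3)
```

Note that .to() is non-destructive: the original tensor x stays on the CPU, and x_dev is a separate copy on the chosen device.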