
Learn PyTorch

Important

The steps here for running PyTorch locally are specific to machines with Intel XPUs.

Experimenting with PyTorch on Intel architecture (e.g., an Intel Core Ultra processor with an integrated GPU).

After installing ipex-llm, which is required to use Intel GPUs with PyTorch (see Setup), you will have access to conda and be able to import torch normally.

In PyTorch, the XPU is the device type for Intel GPUs: an accelerator for tensor operations, analogous to cuda on NVIDIA hardware.
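
For example (a minimal sketch, assuming the setup below is complete and an XPU is detected), tensors can be created on or moved to the xpu device just as they would be with cuda:

import torch

# Prefer the Intel GPU when present, otherwise fall back to the CPU
device = "xpu" if torch.xpu.is_available() else "cpu"

# Create a tensor directly on the chosen device
x = torch.randn(3, 3, device=device)

# Move an existing CPU tensor onto the same device
y = torch.ones(3, 3).to(device)

print((x @ y).device)  # xpu:0 when the Intel GPU is used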

Setup

  1. Install IPEX-LLM on Intel GPU with PyTorch 2.6

  2. Clone the repo

$ git clone https://git.ayo.run/ayo/learn-pytorch

  3. Run env.sh to activate the conda environment and set the required environment variables

$ cd learn-pytorch
$ . env.sh

  4. (Optional) Confirm that the XPU is detected

$ python  # enter the Python shell

>>> import torch
>>> torch.xpu.is_available()
>>> torch.xpu.get_device_name()
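
As a further check, you can run a small computation on the device. The snippet below is an illustrative sketch (not taken from check-xpu.py or any file in this repo); it assumes the environment from env.sh is active and an XPU is present:

import torch

assert torch.xpu.is_available(), "No XPU device detected"
print(torch.xpu.get_device_name())

# Run a small matrix multiply on the Intel GPU
a = torch.randn(1024, 1024, device="xpu")
b = torch.randn(1024, 1024, device="xpu")
c = a @ b

torch.xpu.synchronize()  # wait for the GPU kernels to finish
print(c.shape, c.device)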