# GPU Supports

IPEX-LLM supports not only running large language models for inference, but also QLoRA finetuning on Intel GPUs. See the pages below, and the minimal inference sketch that follows the list.

* [Inference on GPU](./inference_on_gpu.md)
* [Finetune (QLoRA)](./finetune.md)
* [Multi GPUs selection](./multi_gpus_selection.md)
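
For orientation, here is a minimal sketch of loading a model with low-bit optimizations and generating text on an Intel GPU (the `xpu` device). The model id and prompt are placeholders; see [Inference on GPU](./inference_on_gpu.md) for the full, up-to-date instructions.

```python
# Minimal sketch: 4-bit inference on an Intel GPU with IPEX-LLM.
# The model id and prompt below are placeholders for illustration.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id

# Load the model with 4-bit optimizations and move it to the Intel GPU.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "What is AI?"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
    output = output.cpu()

print(tokenizer.decode(output[0], skip_special_tokens=True))
```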