GPU Supports
================================

IPEX-LLM not only supports running large language models for inference on Intel GPUs, but also supports QLoRA finetuning.
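
The sketch below shows, at a high level, what low-bit inference on an Intel GPU looks like with IPEX-LLM; the model id, prompt, and generation settings are illustrative assumptions rather than part of this guide (see the dedicated pages linked below for full steps).

.. code-block:: python

   import torch
   from transformers import AutoTokenizer
   from ipex_llm.transformers import AutoModelForCausalLM

   model_id = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical model id, for illustration only

   # Load the model with low-bit (INT4) optimizations, then move it to the Intel GPU ("xpu").
   model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, trust_remote_code=True)
   model = model.to("xpu")

   tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
   input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids.to("xpu")

   # Generate a short completion on the GPU and decode it on the CPU.
   with torch.inference_mode():
       output = model.generate(input_ids, max_new_tokens=32)
   print(tokenizer.decode(output[0].cpu(), skip_special_tokens=True))
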
* |inference_on_gpu|_
* `Finetune (QLoRA) <./finetune.html>`_
* `Multi GPUs selection <./multi_gpus_selection.html>`_

.. |inference_on_gpu| replace:: Inference on GPU
.. _inference_on_gpu: ./inference_on_gpu.html

.. |multi_gpus_selection| replace:: Multi GPUs selection
.. _multi_gpus_selection: ./multi_gpus_selection.html