# GPU Support
IPEX-LLM supports not only running large language models for inference, but also QLoRA finetuning on Intel GPUs.
* [Inference on GPU](./inference_on_gpu.md)
* [Finetune (QLoRA)](./finetune.md)
* [Multi-GPU selection](./multi_gpus_selection.md)