fix format/typo (#11067)
parent 31ce3e0c13
commit dfac168d5f
1 changed file with 1 addition and 1 deletion
@@ -24,7 +24,7 @@ This section includes efficient guide to show you how to:
 * `Run Ollama with IPEX-LLM on Intel GPU <./ollama_quickstart.html>`_
 * `Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM <./llama3_llamacpp_ollama_quickstart.html>`_
 * `Run IPEX-LLM Serving with FastChat <./fastchat_quickstart.html>`_
-* `Run IPEX-LLM Serving wit vLLM on Intel GPU<./vLLM_quickstart.html>`_
+* `Run IPEX-LLM Serving with vLLM on Intel GPU <./vLLM_quickstart.html>`_
 * `Finetune LLM with Axolotl on Intel GPU <./axolotl_quickstart.html>`_
 * `Run IPEX-LLM serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastApi <./deepspeed_autotp_fastapi_quickstart.html>`_