From c6e868f7ad06b210f83e554d8dec10f45e2fddac Mon Sep 17 00:00:00 2001
From: Ruonan Wang
Date: Mon, 22 Apr 2024 11:48:05 +0800
Subject: [PATCH] update oneapi usage in cpp quickstart (#10836)

* update oneapi usage

* update

* small fix

---
 .../llama3_llamacpp_ollama_quickstart.md           | 14 ++++++--------
 .../doc/LLM/Quickstart/llama_cpp_quickstart.md     | 13 ++++++-------
 .../source/doc/LLM/Quickstart/ollama_quickstart.md |  1 -
 3 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/llama3_llamacpp_ollama_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/llama3_llamacpp_ollama_quickstart.md
index 813e81bc..e8cb3cb3 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/llama3_llamacpp_ollama_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/llama3_llamacpp_ollama_quickstart.md
@@ -29,13 +29,7 @@ Suppose you have downloaded a [Meta-Llama-3-8B-Instruct-Q4_K_M.gguf](https://hug
 
 #### 1.3 Run Llama3 on Intel GPU using llama.cpp
 
-##### Set Environment Variables(optional)
-
-```eval_rst
-.. note::
-
-   This is a required step on for APT or offline installed oneAPI. Skip this step for PIP-installed oneAPI.
-```
+##### Set Environment Variables
 
 Configure oneAPI variables by running the following command:
 
@@ -49,9 +43,14 @@ Configure oneAPI variables by running the following command:
 
    .. tab:: Windows
 
+      .. note::
+
+         This is a required step for APT or offline installed oneAPI. Skip this step for PIP-installed oneAPI.
+
       .. code-block:: bash
 
          call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
+
 ```
 
 ##### Run llama3
@@ -126,7 +125,6 @@ Launch the Ollama service:
   export ZES_ENABLE_SYSMAN=1
   export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
   export OLLAMA_NUM_GPU=999
-  # Below is a required step for APT or offline installed oneAPI. Skip below step for PIP-installed oneAPI.
   source /opt/intel/oneapi/setvars.sh
 
   ./ollama serve
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md
index 14600853..1ecd5fd4 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/llama_cpp_quickstart.md
@@ -82,13 +82,7 @@ Then you can use following command to initialize `llama.cpp` with IPEX-LLM:
 
 Here we provide a simple example to show how to run a community GGUF model with IPEX-LLM.
 
-#### Set Environment Variables(optional)
-
-```eval_rst
-.. note::
-
-   This is a required step on for APT or offline installed oneAPI. Skip this step for PIP-installed oneAPI.
-```
+#### Set Environment Variables
 
 Configure oneAPI variables by running the following command:
 
@@ -102,9 +96,14 @@ Configure oneAPI variables by running the following command:
 
    .. tab:: Windows
 
+      .. note::
+
+         This is a required step for APT or offline installed oneAPI. Skip this step for PIP-installed oneAPI.
+
       .. code-block:: bash
 
          call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
+
 ```
 
 #### Model Download
diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
index fc899216..a043aa74 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
@@ -55,7 +55,6 @@ You may launch the Ollama service as below:
  export OLLAMA_NUM_GPU=999
  export no_proxy=localhost,127.0.0.1
  export ZES_ENABLE_SYSMAN=1
- # Below is a required step for APT or offline installed oneAPI. Skip below step for PIP-installed oneAPI.
  source /opt/intel/oneapi/setvars.sh
 
  ./ollama serve
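For readers applying this patch by hand, the Linux launch sequence it documents can be sketched as a small script. This is a sketch, not part of the patch itself: the file-existence guard and the commented-out `./ollama serve` line are additions for illustration, and `/opt/intel/oneapi/setvars.sh` is the default APT/offline install path.

```shell
# Sketch of the Linux launch sequence for Ollama with IPEX-LLM on an Intel GPU.
# The existence guard is illustrative: sourcing setvars.sh is only required
# for APT or offline installed oneAPI; pip-installed oneAPI needs no such step.
export ZES_ENABLE_SYSMAN=1                              # enable the Level Zero Sysman (GPU management) API
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1  # use immediate command lists for lower submission latency
export OLLAMA_NUM_GPU=999                               # offload all model layers to the GPU

if [ -f /opt/intel/oneapi/setvars.sh ]; then
    . /opt/intel/oneapi/setvars.sh   # APT / offline oneAPI installs only
fi

# ./ollama serve   # launch the service (commented out so the sketch runs standalone)
```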