update oneapi usage in cpp quickstart (#10836)

* update oneapi usage

* update

* small fix
Ruonan Wang 2024-04-22 11:48:05 +08:00 committed by GitHub
parent ae3b577537
commit c6e868f7ad
3 changed files with 12 additions and 16 deletions


@@ -29,13 +29,7 @@ Suppose you have downloaded a [Meta-Llama-3-8B-Instruct-Q4_K_M.gguf](https://hug
 #### 1.3 Run Llama3 on Intel GPU using llama.cpp
-##### Set Environment Variables (optional)
-
-```eval_rst
-.. note::
-
-   This is a required step on for APT or offline installed oneAPI. Skip this step for PIP-installed oneAPI.
-```
+##### Set Environment Variables
 Configure oneAPI variables by running the following command:
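
For reference, the Linux counterpart of this step (unchanged by this commit, and visible in the Ollama hunks below) is sourcing the oneAPI environment script; a minimal sketch:

```bash
# Required only for APT or offline oneAPI installations;
# PIP-installed oneAPI sets up its libraries itself, so this
# step can be skipped there (the point of the relocated note).
source /opt/intel/oneapi/setvars.sh
```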
@@ -49,9 +43,14 @@ Configure oneAPI variables by running the following command:
    .. tab:: Windows
+
+      .. note::
+
+         This is a required step for APT or offline installed oneAPI. Skip this step for PIP-installed oneAPI.
+
       .. code-block:: bash
 
          call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
 ```
 
 ##### Run llama3
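
To confirm the environment actually took effect before running llama3, listing the visible SYCL devices is a quick check; this assumes `sycl-ls` (shipped with the oneAPI DPC++ runtime) is on the PATH, which the commit itself does not state:

```bash
# The Intel GPU should show up as a [level_zero:gpu] entry;
# if nothing is listed, setvars was not applied in this shell.
sycl-ls
```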
@@ -126,7 +125,6 @@ Launch the Ollama service:
 export ZES_ENABLE_SYSMAN=1
 export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
 export OLLAMA_NUM_GPU=999
-# Below is a required step for APT or offline installed oneAPI. Skip below step for PIP-installed oneAPI.
 source /opt/intel/oneapi/setvars.sh
 ./ollama serve
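
The removed comment raises an obvious follow-up: one launch script can serve both install methods by guarding the source step. A sketch, not part of the commit, assuming the standard /opt/intel/oneapi install prefix:

```bash
# Launch the Ollama service on an Intel GPU with IPEX-LLM.
# setvars.sh exists only for APT/offline oneAPI installs;
# PIP installs need no sourcing, so the guard covers both.
if [ -f /opt/intel/oneapi/setvars.sh ]; then
    source /opt/intel/oneapi/setvars.sh
fi
export ZES_ENABLE_SYSMAN=1
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export OLLAMA_NUM_GPU=999
./ollama serve
```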


@@ -82,13 +82,7 @@ Then you can use following command to initialize `llama.cpp` with IPEX-LLM:
 Here we provide a simple example to show how to run a community GGUF model with IPEX-LLM.
-#### Set Environment Variables (optional)
-
-```eval_rst
-.. note::
-
-   This is a required step on for APT or offline installed oneAPI. Skip this step for PIP-installed oneAPI.
-```
+#### Set Environment Variables
 Configure oneAPI variables by running the following command:
@@ -102,9 +96,14 @@ Configure oneAPI variables by running the following command:
    .. tab:: Windows
+
+      .. note::
+
+         This is a required step for APT or offline installed oneAPI. Skip this step for PIP-installed oneAPI.
+
       .. code-block:: bash
 
          call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
 ```
 
 #### Model Download
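
After the environment step and the model download, the "run a community GGUF model" flow this file documents boils down to a single invocation; in this sketch the model file name and generation flags are illustrative, not taken from the commit:

```bash
# Run a GGUF model with the IPEX-LLM-initialized llama.cpp,
# offloading layers to the Intel GPU via -ngl.
./main -m Meta-Llama-3-8B-Instruct-Q4_K_M.gguf \
       -n 32 --prompt "Once upon a time" -ngl 33
```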


@@ -55,7 +55,6 @@ You may launch the Ollama service as below:
 export OLLAMA_NUM_GPU=999
 export no_proxy=localhost,127.0.0.1
 export ZES_ENABLE_SYSMAN=1
-# Below is a required step for APT or offline installed oneAPI. Skip below step for PIP-installed oneAPI.
 source /opt/intel/oneapi/setvars.sh
 ./ollama serve
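
Once `./ollama serve` is running, a quick smoke test against Ollama's default endpoint shows whether the GPU-backed service responds; the model name here is illustrative and must already have been pulled:

```bash
# One-off generation request to the local Ollama service
# (default port 11434).
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```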