ipex-llm/docker/llm/inference-cpp/start-ollama.sh
Wang, Jian4 · 86cec80b51 · LLM: Add llm inference_cpp_xpu_docker (#10933) · 2024-05-15 11:10:22 +08:00

#!/bin/bash

# Initialize ollama: create its working directory and link the
# ipex-llm ollama binary into it.
mkdir -p /llm/ollama
cd /llm/ollama || exit 1
init-ollama

# Offload all model layers to the Intel GPU (999 means "all layers").
export OLLAMA_NUM_GPU=999
# Enable the oneAPI Level Zero sysman API so the GPU is detected correctly.
export ZES_ENABLE_SYSMAN=1

# Start the ollama service in the background, logging to ollama.log.
(./ollama serve > ollama.log) &
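
Once the service is running, models can be pulled and run against it. A minimal usage sketch, assuming Ollama's default listen address of 127.0.0.1:11434 inside the container; the model name here is illustrative:

cd /llm/ollama
# Confirm the server is responding before pulling a model.
curl -s http://127.0.0.1:11434/api/version
# Pull and run a model; replace llama3 with any model from the Ollama library.
./ollama pull llama3
./ollama run llama3 "Hello"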