ipex-llm/docker/llm/inference-cpp/start-llama-cpp.sh
Wang, Jian4 86cec80b51
LLM: Add llm inference_cpp_xpu_docker (#10933)
2024-05-15 11:10:22 +08:00


#!/bin/bash
# Initialize the llama-cpp binaries first
mkdir -p /llm/llama-cpp
cd /llm/llama-cpp || exit 1
init-llama-cpp
# Change model to the path of the GGUF model file you want to run
model="/models/mistral-7b-v0.1.Q4_0.gguf"
# -n 32: generate 32 tokens; -t 8: use 8 threads; -ngl 999: offload all layers to the GPU
./main -m "$model" -n 32 --prompt "What is AI?" -t 8 -e -ngl 999 --color
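Since the model path is hard-coded, one small variation (a sketch, not part of the repo) is to let callers override it with an environment variable and fall back to the default. `MODEL_PATH` here is a hypothetical variable name chosen for illustration:

```shell
#!/bin/bash
# Sketch: allow overriding the model via MODEL_PATH (hypothetical variable),
# falling back to the same default the script above uses.
model="${MODEL_PATH:-/models/mistral-7b-v0.1.Q4_0.gguf}"
echo "model: $model"
# Then invoke llama.cpp exactly as the script does:
# ./main -m "$model" -n 32 --prompt "What is AI?" -t 8 -e -ngl 999 --color
```

With this change, `MODEL_PATH=/models/other.gguf ./start-llama-cpp.sh` selects a different model without editing the script.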