feat(ai): do not release vram immediately after run
This commit is contained in:
parent 5e47f12b04
commit 2ffe81d56a
3 changed files with 3 additions and 3 deletions
@@ -34,4 +34,4 @@ else
 	OLLAMA_HOST=$host ollama run $model --hidethinking
 fi
 
-OLLAMA_HOST=$host ollama stop $model
+# OLLAMA_HOST=$host ollama stop $model
|
@@ -4,7 +4,7 @@
 . ${HOME}/ayo.conf
 
 host=$ollama_remote_host
-model="gemma3:4b"
+model="helper:latest"
 
 TEMP=$(mktemp)
 vim $TEMP
ai.sh
@@ -65,4 +65,4 @@ else
 	OLLAMA_HOST=$host ollama run $model --hidethinking
 fi
 
-OLLAMA_HOST=$host ollama stop $model
+# OLLAMA_HOST=$host ollama stop $model
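The change is the same in all three scripts: the `ollama stop` call that ran after every invocation is commented out, so the model stays loaded on the server (and its VRAM allocated) between runs instead of being unloaded and reloaded each time. A minimal sketch of the before/after control flow, using hypothetical `run_model`/`stop_model` stand-ins for the ollama CLI so the pattern can be shown without a running server:

```shell
#!/bin/sh
# run_model/stop_model are hypothetical stand-ins for
# `OLLAMA_HOST=$host ollama run ...` / `ollama stop ...`.
run_model() { echo "run $1"; }
stop_model() { echo "stop $1"; }

model="helper:latest"

run_model "$model"
# Before this commit the model was stopped immediately, which frees VRAM
# but forces a full model load on the next run:
#stop_model "$model"
echo "model left loaded"
```

With the stop call disabled, the server's own idle keep-alive policy decides when the model is finally evicted, rather than every script run paying the load cost.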