ipex-llm/python/llm/example
Jinhe d0c89fb715
updated llama.cpp and ollama quickstart (#11732)
* updated llama.cpp and ollama quickstart.md

* added qwen2-1.5B sample output

* revision on quickstart updates

* revision on quickstart updates

* revision on qwen2 readme

* added 2 troubleshoots

* troubleshoot revision
2024-08-08 11:04:01 +08:00
CPU upgrade glm-4v example transformers version (#11719) 2024-08-06 14:55:09 +08:00
GPU updated llama.cpp and ollama quickstart (#11732) 2024-08-08 11:04:01 +08:00
NPU/HF-Transformers-AutoModels Switch to conhost when running on NPU (#11687) 2024-07-30 17:08:06 +08:00