ipex-llm/docs/readthedocs/source/doc/LLM/Quickstart
Latest commit: Shaojun Liu 7f8c5b410b (2024-05-14 12:58:31 +08:00)
Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) (#10970)
* add entrypoint.sh
* add quickstart
* remove entrypoint
* Install related library of benchmarking
* print out results
* update docs
* add chat & example section
* add more details
* rename quickstart
* update config.yaml
* update readme
* use --gpu
* add tips
| File | Last commit | Date |
| --- | --- | --- |
| axolotl_quickstart.md | Refine axolotl quickstart (#10957) | 2024-05-08 09:34:02 +08:00 |
| benchmark_quickstart.md | Minior fix for quick start (#10857) | 2024-04-23 15:22:01 +08:00 |
| bigdl_llm_migration.md | Minior fix for quick start (#10857) | 2024-04-23 15:22:01 +08:00 |
| chatchat_quickstart.md | make images clickable (#10939) | 2024-05-06 20:24:15 +08:00 |
| continue_quickstart.md | update quickstart (#10923) | 2024-04-30 18:19:31 +08:00 |
| deepspeed_autotp_fastapi_quickstart.md | LLM: Refine README of AutoTP-FastAPI example (#10960) | 2024-05-08 16:55:23 +08:00 |
| dify_quickstart.md | update private gpt quickstart and a small fix for dify (#10969) | 2024-05-09 13:57:45 +08:00 |
| docker_pytorch_inference_gpu.md | Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) (#10970) | 2024-05-14 12:58:31 +08:00 |
| docker_windows_gpu.md | Minior fix for quick start (#10857) | 2024-04-23 15:22:01 +08:00 |
| fastchat_quickstart.md | LLM: Enable Speculative on Fastchat (#10909) | 2024-05-06 10:06:20 +08:00 |
| index.rst | Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) (#10970) | 2024-05-14 12:58:31 +08:00 |
| install_linux_gpu.md | Fix apt install oneapi scripts (#10891) | 2024-04-26 16:39:37 +08:00 |
| install_windows_gpu.md | Small update for GPU configuration related doc (#10770) | 2024-04-15 18:43:29 +08:00 |
| llama3_llamacpp_ollama_quickstart.md | update llama.cpp usage of llama3 (#10975) | 2024-05-09 16:44:12 +08:00 |
| llama_cpp_quickstart.md | update troubleshooting of llama.cpp (#10990) | 2024-05-13 11:18:38 +08:00 |
| ollama_quickstart.md | add version for llama.cpp and ollama (#10982) | 2024-05-11 09:20:31 +08:00 |
| open_webui_with_ollama_quickstart.md | revise private GPT quickstart and a few fixes for other quickstart (#10967) | 2024-05-08 21:18:20 +08:00 |
| privateGPT_quickstart.md | update private gpt quickstart and a small fix for dify (#10969) | 2024-05-09 13:57:45 +08:00 |
| webui_quickstart.md | Small update for GPU configuration related doc (#10770) | 2024-04-15 18:43:29 +08:00 |
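The quickstarts listed above (for example docker_pytorch_inference_gpu.md) ultimately run ipex-llm-accelerated PyTorch inference on an Intel GPU. Below is a minimal sketch of that kind of inference script, assuming ipex-llm is installed with XPU support inside the environment or container; the model id is only an illustrative placeholder, not one prescribed by these docs.

```python
# Minimal sketch (not taken from any quickstart above): 4-bit LLM inference on an
# Intel GPU ("xpu") with ipex-llm. The model id below is only an example.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # example model, substitute your own

# Load the model with 4-bit quantization, then move it to the Intel GPU.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "What is IPEX-LLM?"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)

# Move the generated tokens back to the CPU before decoding.
print(tokenizer.decode(output[0].cpu(), skip_special_tokens=True))
```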