ipex-llm/docs/mddocs/DockerGuides
Latest commit: cb7b08948b by Jun Wang, 2024-11-27 08:47:03 +08:00
update vllm-docker-quick-start for vLLM 0.6.2 (#12392); removes the max-num-seqs parameter from the vLLM serving script.
| File | Last commit | Date |
| --- | --- | --- |
| docker_cpp_xpu_quickstart.md | Decouple Open WebUI and Ollama in the inference-cpp-xpu Dockerfile (#12382) | 2024-11-12 20:15:23 +08:00 |
| docker_pytorch_inference_gpu.md | Revert to use out-of-tree GPU driver (#11761) | 2024-08-12 13:41:47 +08:00 |
| docker_run_pytorch_inference_in_vscode.md | Update GPU HF-Transformers example structure (#11526) | 2024-07-08 17:58:06 +08:00 |
| docker_windows_gpu.md | Further mddocs fixes (#11386) | 2024-06-21 13:27:43 +08:00 |
| fastchat_docker_quickstart.md | Update mddocs for DockerGuides (#11380) | 2024-06-21 12:10:35 +08:00 |
| README.md | Add index page for API doc & links update in mddocs (#11393) | 2024-06-21 17:34:34 +08:00 |
| vllm_cpu_docker_quickstart.md | Add missing RAGFlow quickstart in mddocs and update legacy contents (#11385) | 2024-06-21 12:28:26 +08:00 |
| vllm_docker_quickstart.md | Update vllm-docker-quick-start for vLLM 0.6.2 (#12392) | 2024-11-27 08:47:03 +08:00 |