* Update install_linux_gpu.zh-CN.md
Add the link to the Windows installation guide.
* Update install_windows_gpu.zh-CN.md
Add the link to the Linux installation guide.
* Update install_windows_gpu.md
Add the link to the Linux installation guide.
* Update install_linux_gpu.md
Add the link to the Windows installation guide.
* Update install_linux_gpu.md
Modify based on comments.
* Update install_windows_gpu.md
Modify based on comments
* Add initial NPU quickstart (C++ part unfinished)
* Small update
* Update based on comments
* Update main readme
* Remove LLaMA description
* Small fix
* Small fix
* Remove subsection link in main README
* Small fix
* Update based on comments
* Small fix
* TOC update and other small fixes
* Update for Chinese main readme
* Update based on comments and other small fixes
* Change order
* Add install_linux_gpu.zh-CN.md
* Add install_windows_gpu.zh-CN.md
* Update llama_cpp_quickstart.zh-CN.md
Related links updated to zh-CN version.
* Update install_linux_gpu.zh-CN.md
Added link to English version.
* Update install_windows_gpu.zh-CN.md
Add the link to English version.
* Update install_windows_gpu.md
Add the link to CN version.
* Update install_linux_gpu.md
Add the link to CN version.
* Update README.zh-CN.md
Modified the related link to zh-CN version.
* Remove Open WebUI from the inference-cpp-xpu Dockerfile
* update docker_cpp_xpu_quickstart.md
* add sample output in inference-cpp/readme
* remove the openwebui in main readme
* Add ollama_quickstart.zh-CN.md
* Update ollama_quickstart.zh-CN.md
Add Chinese and English switching
* Update ollama_quickstart.md
Add Chinese and English switching
* Update README.zh-CN.md
Modify the related link to ollama_quickstart.zh-CN.md
* Update ollama_quickstart.zh-CN.md
Modified based on comments.
* Update ollama_quickstart.zh-CN.md
Modified based on comments
* Update Linux prerequisites installation guide for MTL iGPU
* Further link update
* Small fixes
* Small fix
* Update based on comments
* Small fix
* Make oneAPI installation a shared section for both MTL iGPU and other GPU
* Small fix
* Small fix
* Clarify description
* [ADD] rewrite new vLLM Docker quickstart
* [ADD] LoRA adapter doc finished
* [ADD] multi-LoRA adapter tested successfully
* [ADD] add ipex-llm quantization doc
* [UPDATE] update mmdocs vllm_docker_quickstart content
* [REMOVE] rm tmp file
* [UPDATE] TP and PP explanation and readthedocs link change
* [FIX] fix the erroneous description of the TP+PP and quantization parts
* [FIX] fix the table of verified models
* [UPDATE] add full low-bit parameter list
* [UPDATE] update the load_in_low_bit params to verified dtypes
* update on readme after ipex-llm update
* rebase & delete redundancy
* revise
* add numbers for troubleshooting
* Revert to the out-of-tree GPU driver, since its performance is better than the upstream driver's
* add spaces
* add troubleshooting case
* update Troubleshooting