* Add install_linux_gpu.zh-CN.md
* Add install_windows_gpu.zh-CN.md
* Update llama_cpp_quickstart.zh-CN.md
Updated the related links to the zh-CN versions.
* Update install_linux_gpu.zh-CN.md
Add a link to the English version.
* Update install_windows_gpu.zh-CN.md
Add a link to the English version.
* Update install_windows_gpu.md
Add a link to the zh-CN version.
* Update install_linux_gpu.md
Add a link to the zh-CN version.
* Update README.zh-CN.md
Modify the related links to point to the zh-CN versions.
* Remove Open WebUI from the inference-cpp-xpu Dockerfile
* Update docker_cpp_xpu_quickstart.md
* Add sample output to the inference-cpp README
* Remove Open WebUI from the main README
* Add ollama_quickstart.zh-CN.md
* Update ollama_quickstart.zh-CN.md
Add switching between the Chinese and English versions.
* Update ollama_quickstart.md
Add switching between the Chinese and English versions.
* Update README.zh-CN.md
Update the related link to point to ollama_quickstart.zh-CN.md.
* Update ollama_quickstart.zh-CN.md
Modified based on comments.
* Update ollama_quickstart.zh-CN.md
Modified based on comments.