README update (API doc and FAQ and minor fixes) (#11397)
* add faq and API doc link in README.md
* add missing quickstart link
* update links in FAQ
* update faq
* update faq text
parent 0c67639539
commit 475b0213d2
2 changed files with 21 additions and 2 deletions
@@ -162,6 +162,7 @@ Please see the **Perplexity** result below (tested on Wikitext dataset using the
### Docker
- [GPU Inference in C++](docs/mddocs/DockerGuides/docker_cpp_xpu_quickstart.md): running `llama.cpp`, `ollama`, `OpenWebUI`, etc., with `ipex-llm` on Intel GPU
- [GPU Inference in Python](docs/mddocs/DockerGuides/docker_pytorch_inference_gpu.md): running HuggingFace `transformers`, `LangChain`, `LlamaIndex`, `ModelScope`, etc. with `ipex-llm` on Intel GPU
- [VSCode Guide on GPU](docs/readthedocs/source/doc/LLM/DockerGuides/docker_run_pytorch_inference_in_vscode.md): running and developing Python LLM applications using VSCode on Intel GPU
- [vLLM on GPU](docs/mddocs/DockerGuides/vllm_docker_quickstart.md): running `vLLM` serving with `ipex-llm` on Intel GPU
- [FastChat on GPU](docs/mddocs/DockerGuides/fastchat_docker_quickstart.md): running `FastChat` serving with `ipex-llm` on Intel GPU
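
The Python flavor of these guides boils down to loading a model through `ipex-llm` and running it on the XPU device. Below is a minimal sketch, assuming an XPU-enabled environment (e.g. one of the Docker images above); the model id and prompt are placeholders:

```python
# Minimal sketch of the "GPU Inference in Python" flow; assumes an
# XPU-enabled environment and uses a placeholder model id.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_id = "Qwen/Qwen1.5-7B-Chat"  # placeholder; any verified model works
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
model = model.to("xpu")  # run on the Intel GPU
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is IPEX-LLM?", return_tensors="pt").to("xpu")
with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```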
@@ -219,6 +220,13 @@ Please see the **Perplexity** result below (tested on Wikitext dataset using the
- [Tutorials](https://github.com/intel-analytics/ipex-llm-tutorial)
## API Doc
- [HuggingFace Transformers-style API (Auto Classes)](docs/mddocs/PythonAPI/transformers.md)
- [API for arbitrary PyTorch Model](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/PythonAPI/optimize.md)
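
The two APIs above differ in where the optimization happens: the Auto classes apply it at load time, while `optimize_model` takes an already-constructed PyTorch model. A minimal sketch of the latter, with a placeholder model id:

```python
# Sketch of the arbitrary-PyTorch-model API: optimize an existing
# torch.nn.Module in one call. The model id is a placeholder.
from transformers import AutoModelForCausalLM
from ipex_llm import optimize_model

plain = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = optimize_model(plain)  # applies low-bit (INT4 by default) optimization
```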
## FAQ
- [FAQ & Trouble Shooting](docs/mddocs/Overview/FAQ/faq.md)
## Verified Models
Over 50 models have been optimized/verified on `ipex-llm`, including *LLaMA/LLaMA2, Mistral, Mixtral, Gemma, LLaVA, Whisper, ChatGLM2/ChatGLM3, Baichuan/Baichuan2, Qwen/Qwen-1.5, InternLM* and more; see the list below.
@@ -10,8 +10,19 @@ Please also refer to [here](https://github.com/intel-analytics/ipex-llm?tab=read
## How to Resolve Errors
### Fail to install `ipex-llm` through `pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/` or `pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/`
You could try to install IPEX-LLM dependencies for Intel XPU from source archives:
### Fail to install `ipex-llm` via `pip` on Intel GPU
If you encounter errors when installing `ipex-llm` on Intel GPU using either
```bash
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```
or
```bash
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
```
You can try installing the `ipex-llm` dependencies from source archives:
- For Windows system, refer to [here](../install_gpu.md#install-ipex-llm-from-wheel) for the steps.
- For Linux system, refer to [here](../install_gpu.md#prerequisites-1) for the steps.
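
After following either set of steps, a quick sanity check can confirm that the XPU device is visible. This is a sketch; it assumes `torch` and `intel_extension_for_pytorch` were installed as part of those instructions:

```python
# Sanity check after installing the XPU dependencies from source archives.
import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device

print(torch.__version__, ipex.__version__)
print(torch.xpu.is_available())  # expect True on a working Intel GPU setup
```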