change bigdl-llm-tutorial to ipex-llm-tutorial in README (#10547)
* update bigdl-llm-tutorial to ipex-llm-tutorial

* change to ipex-llm-tutorial
This commit is contained in:
parent bb9be70105
commit 2ecd737474

2 changed files with 4 additions and 4 deletions

@@ -30,7 +30,7 @@
 - [2023/10] `ipex-llm` now supports [QLoRA finetuning](python/llm/example/GPU/LLM-Finetuning/QLoRA) on both Intel [GPU](python/llm/example/GPU/LLM-Finetuning/QLoRA) and [CPU](python/llm/example/CPU/QLoRA-FineTuning).
 - [2023/10] `ipex-llm` now supports [FastChat serving](python/llm/src/ipex_llm/llm/serving) on both Intel CPU and GPU.
 - [2023/09] `ipex-llm` now supports [Intel GPU](python/llm/example/GPU) (including iGPU, Arc, Flex and MAX).
-- [2023/09] `ipex-llm` [tutorial](https://github.com/intel-analytics/bigdl-llm-tutorial) is released.
+- [2023/09] `ipex-llm` [tutorial](https://github.com/intel-analytics/ipex-llm-tutorial) is released.

 </details>

@@ -108,7 +108,7 @@ See the ***optimized performance*** of `chatglm2-6b` and `llama-2-13b-chat` mode
 - [LlamaIndex](python/llm/example/GPU/LlamaIndex)
 - [AutoGen](python/llm/example/CPU/Applications/autogen)
 - [ModelScope](python/llm/example/GPU/ModelScope-Models)
-- [Tutorials](https://github.com/intel-analytics/bigdl-llm-tutorial)
+- [Tutorials](https://github.com/intel-analytics/ipex-llm-tutorial)

 *For more details, please refer to the `ipex-llm` document [website](https://ipex-llm.readthedocs.io/).*

@@ -65,7 +65,7 @@ Latest update 🔥
 * [2023/10] ``ipex-llm`` now supports `QLoRA finetuning <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/LLM-Finetuning/QLoRA>`_ on both Intel `GPU <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/LLM-Finetuning/QLoRA>`_ and `CPU <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/CPU/QLoRA-FineTuning>`_.
 * [2023/10] ``ipex-llm`` now supports `FastChat serving <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/src/ipex-llm/llm/serving>`_ on both Intel CPU and GPU.
 * [2023/09] ``ipex-llm`` now supports `Intel GPU <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU>`_ (including iGPU, Arc, Flex and MAX).
-* [2023/09] ``ipex-llm`` `tutorial <https://github.com/intel-analytics/bigdl-llm-tutorial>`_ is released.
+* [2023/09] ``ipex-llm`` `tutorial <https://github.com/intel-analytics/ipex-llm-tutorial>`_ is released.

 ************************************************
 ``ipex-llm`` Demos

@@ -164,7 +164,7 @@ Code Examples
 * `AutoGen <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/CPU/Applications/autogen>`_
 * `ModelScope <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/ModelScope-Models>`_

-* `Tutorials <https://github.com/intel-analytics/bigdl-llm-tutorial>`_
+* `Tutorials <https://github.com/intel-analytics/ipex-llm-tutorial>`_

 .. seealso::