From 5e58f698cd2a41c0b20f7e4b08d2ade230898472 Mon Sep 17 00:00:00 2001
From: Jason Dai
Date: Mon, 4 Sep 2023 15:42:16 +0800
Subject: [PATCH] Update readthedocs (#8882)

---
 README.md                         | 10 ++++-----
 docs/readthedocs/source/index.rst | 34 +++++++++++++++++++++++++------
 python/llm/README.md              |  6 +++---
 3 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/README.md b/README.md
index e47e05d0..d83381fa 100644
--- a/README.md
+++ b/README.md
@@ -12,18 +12,18 @@
 ### Latest update
 - `bigdl-llm` now supports Intel Arc or Flex GPU; see the the latest GPU examples [here](python/llm/example/gpu).
-- `bigdl-llm` tutorial is made availabe [here](https://github.com/intel-analytics/bigdl-llm-tutorial).
+- `bigdl-llm` tutorial is released [here](https://github.com/intel-analytics/bigdl-llm-tutorial).
 - Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLaMA2, ChatGLM/ChatGLM2, MPT, Falcon, Dolly-v1/Dolly-v2, StarCoder, Whisper, QWen, Baichuan, MOSS,* and more; see the complete list [here](python/llm/README.md#verified-models).
 
 ### `bigdl-llm` Demos
-See the ***optimized performance*** of `chatglm2-6b`, `llama-2-13b-chat`, and `starcoder-15b` models on a 12th Gen Intel Core CPU below.
+See the ***optimized performance*** of `chatglm2-6b`, `llama-2-13b-chat`, and `starcoder-15.5b` models on a 12th Gen Intel Core CPU below.
 
-### `bigdl-llm` quick start
+### `bigdl-llm` quickstart
 #### Install
 You may install **`bigdl-llm`** as follows:
diff --git a/docs/readthedocs/source/index.rst b/docs/readthedocs/source/index.rst
index 4be55297..fc5a9549 100644
--- a/docs/readthedocs/source/index.rst
+++ b/docs/readthedocs/source/index.rst
@@ -1,10 +1,14 @@
 .. meta::
    :google-site-verification: S66K6GAclKw1RroxU0Rka_2d1LZFVe27M0gRneEsIVI
 
-BigDL: fast and secure AI
+=================================================
+The BigDL Project
 =================================================
 
-BigDL-LLM
+------
+
+---------------------------------
+BigDL-LLM: low-Bit LLM library
 ---------------------------------
 
 `bigdl-llm `_ is a library for running **LLM** (large language model) on your Intel **laptop** or **GPU** using INT4 with very low latency [*]_ (for any **PyTorch** model).
@@ -12,13 +16,29 @@ BigDL-LLM
 It is built on top of the excellent work of `llama.cpp `_, `gptq `_, `bitsandbytes `_, `qlora `_, etc.
 
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 Latest update
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 - ``bigdl-llm`` now supports Intel Arc and Flex GPU; see the the latest GPU examples `here `_.
-- ``bigdl-llm`` tutorial tutorial is made availabe `here `_.
-- Over a dozen models have been verified on ``bigdl-llm``, including *LLaMA/LLaMA2, ChatGLM/ChatGLM2, MPT, Falcon, Dolly-v1/Dolly-v2, StarCoder, Whisper, QWen, Baichuan,* and more; see the complete list `here `_.
+- ``bigdl-llm`` tutorial is released `here `_.
+- Over 20 models have been verified on ``bigdl-llm``, including *LLaMA/LLaMA2, ChatGLM/ChatGLM2, MPT, Falcon, Dolly-v1/Dolly-v2, StarCoder, Whisper, QWen, Baichuan,* and more; see the complete list `here `_.
 
-bigdl-llm quickstart
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+``bigdl-llm`` demos
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+See the **optimized performance** of ``chatglm2-6b``, ``llama-2-13b-chat``, and ``starcoder-15.5b`` models on a 12th Gen Intel Core CPU below.
+
+.. raw:: html
+
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+``bigdl-llm`` quickstart
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 You may install ``bigdl-llm`` as follows:
@@ -50,6 +70,7 @@ You can then apply INT4 optimizations to any Hugging Face *Transformers* models
 ------
 
+---------------------------------
 Overview of the complete BigDL project
 ---------------------------------
 `BigDL `_ seamlessly scales your data analytics & AI applications from laptop to cloud, with the following libraries:
@@ -64,6 +85,7 @@ Overview of the complete BigDL project
 ------
 
+---------------------------------
 Choosing the right BigDL library
 ---------------------------------
@@ -129,4 +151,4 @@ Choosing the right BigDL library
 ------
 
-.. [*] Performance varies by use, configuration and other factors. `bigdl-llm` may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.
+.. [*] Performance varies by use, configuration and other factors. ``bigdl-llm`` may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.
diff --git a/python/llm/README.md b/python/llm/README.md
index ee0069aa..6ae4d90c 100644
--- a/python/llm/README.md
+++ b/python/llm/README.md
@@ -8,11 +8,11 @@
 - `bigdl-llm` now supports Intel Arc or Flex GPU; see the the latest GPU examples [here](example/gpu).
 
 ### Demos
-See the ***optimized performance*** of `chatglm2-6b`, `llama-2-13b-chat`, and `starcoder-15b` models on a 12th Gen Intel Core CPU below.
+See the ***optimized performance*** of `chatglm2-6b`, `llama-2-13b-chat`, and `starcoder-15.5b` models on a 12th Gen Intel Core CPU below.
### Verified models
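The docs touched by this patch describe `bigdl-llm` as running LLMs "using INT4 with very low latency". For orientation, the sketch below shows the core idea behind such low-bit weight compression: symmetric round-to-nearest quantization of float weights to signed 4-bit codes plus a shared scale. This is a toy illustration only, assuming hypothetical helper names (`quantize_int4`, `dequantize_int4`); it is not `bigdl-llm`'s implementation, which builds on llama.cpp/gptq-style kernels.

```python
# Toy sketch of symmetric round-to-nearest INT4 quantization: an
# illustration of the "INT4" idea, NOT bigdl-llm's actual code.
# Function names here are hypothetical.

def quantize_int4(weights):
    """Map floats onto signed 4-bit codes in [-8, 7] with a shared scale."""
    scale = max(abs(w) for w in weights) / 7.0
    if scale == 0.0:
        scale = 1.0  # all-zero weights: any scale reconstructs them exactly
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale for v in q]

if __name__ == "__main__":
    weights = [0.42, -1.37, 0.05, 0.91]
    q, scale = quantize_int4(weights)
    approx = dequantize_int4(q, scale)
    # Round-to-nearest keeps each weight within half a quantization step.
    assert all(abs(a - w) <= scale / 2 + 1e-9 for a, w in zip(approx, weights))
```

In actual use the library applies such low-bit optimizations transparently to Hugging Face *Transformers* models, as the `@@ -50,6 +70,7 @@` hunk context above notes; see the quickstart sections this patch edits for the installation and usage steps documented at the time.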