Update miniconda/anaconda -> miniforge in documentation (#11176)

* Update miniconda/anaconda -> miniforge in installation guide

* Update for all Quickstart

* further fix for docs
Authored by Yuwen Hu on 2024-05-30 17:40:18 +08:00, committed by GitHub
parent c0f1be6aea
commit f0aaa130a9
11 changed files with 39 additions and 39 deletions

@@ -5,7 +5,7 @@ We can run PyTorch Inference Benchmark, Chat Service and PyTorch Examples on Int
 ```eval_rst
 .. note::
-   The current Windows + WSL + Docker solution only supports Arc series dGPU. For Windows users with MTL iGPU, it is recommended to install directly via pip install in Anaconda Prompt. Refer to `this guide <https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html>`_.
+   The current Windows + WSL + Docker solution only supports Arc series dGPU. For Windows users with MTL iGPU, it is recommended to install directly via pip install in Miniforge Prompt. Refer to `this guide <https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html>`_.
 ```

@@ -10,7 +10,7 @@ The `sycl-ls` tool enumerates a list of devices available in the system. You can
 .. tabs::
    .. tab:: Windows
-      Please make sure you are using CMD (Anaconda Prompt if using conda):
+      Please make sure you are using CMD (Miniforge Prompt if using conda):
       .. code-block:: cmd

@@ -51,7 +51,7 @@ Here list the recommended hardware and OS for smooth IPEX-LLM optimization exper
 For optimal performance with LLM models using IPEX-LLM optimizations on Intel CPUs, here are some best practices for setting up environment:
-First we recommend using [Conda](https://docs.conda.io/en/latest/miniconda.html) to create a python 3.11 enviroment:
+First we recommend using [Conda](https://conda-forge.org/download/) to create a python 3.11 enviroment:
 ```eval_rst
 .. tabs::

@@ -45,7 +45,7 @@ If you have driver version lower than `31.0.101.5122`, it is recommended to [**u
 ### Install IPEX-LLM
 #### Install IPEX-LLM From PyPI
-We recommend using [miniconda](https://docs.conda.io/en/latest/miniconda.html) to create a python 3.11 enviroment.
+We recommend using [Miniforge](https://conda-forge.org/download/) to create a python 3.11 enviroment.
 ```eval_rst
 .. important::
@@ -108,7 +108,7 @@ pip install --pre --upgrade ipex-llm[xpu]
 To use GPU acceleration on Windows, several environment variables are required before running a GPU example:
-<!-- Make sure you are using CMD (Anaconda Prompt if using conda) as PowerShell is not supported, and configure oneAPI environment variables with:
+<!-- Make sure you are using CMD (Miniforge Prompt if using conda) as PowerShell is not supported, and configure oneAPI environment variables with:
 ```cmd
 call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
@@ -157,11 +157,11 @@ If you met error when importing `intel_extension_for_pytorch`, please ensure tha
 conda install libuv
 ```
-<!-- * For oneAPI installed using the Offline installer, make sure you have configured oneAPI environment variables in your Anaconda Prompt through
+<!-- * For oneAPI installed using the Offline installer, make sure you have configured oneAPI environment variables in your Miniforge Prompt through
 ```cmd
 call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
 ```
-Please note that you need to set these environment variables again once you have a new Anaconda Prompt window. -->
+Please note that you need to set these environment variables again once you have a new Miniforge Prompt window. -->
 ## Linux
@@ -434,7 +434,7 @@ IPEX-LLM GPU support on Linux has been verified on:
 ### Install IPEX-LLM
 #### Install IPEX-LLM From PyPI
-We recommend using [miniconda](https://docs.conda.io/en/latest/miniconda.html) to create a python 3.11 enviroment:
+We recommend using [Miniforge](https://conda-forge.org/download/) to create a python 3.11 enviroment:
 ```eval_rst
 .. important::

@@ -48,7 +48,7 @@ Now we need to pull a model for coding. Here we use [CodeQWen1.5-7B](https://hug
    .. tab:: Windows
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
       .. code-block:: cmd
@@ -72,7 +72,7 @@ Start by creating a file named `Modelfile` with the following content:
 FROM codeqwen:latest
 PARAMETER num_ctx 4096
 ```
-Next, use the following commands in the terminal (Linux) or Anaconda Prompt (Windows) to create a new model in Ollama named `codeqwen:latest-continue`:
+Next, use the following commands in the terminal (Linux) or Miniforge Prompt (Windows) to create a new model in Ollama named `codeqwen:latest-continue`:
 ```bash
@@ -81,7 +81,7 @@ Next, use the following commands in the terminal (Linux) or Anaconda Prompt (Win
 After creation, run `ollama list` to see `codeqwen:latest-continue` in the list of models.
-Finally, preload the new model by executing the following command in a new terminal (Linux) or Anaconda prompt (Windows):
+Finally, preload the new model by executing the following command in a new terminal (Linux) or Miniforge Prompt (Windows):
 ```bash
 ollama run codeqwen:latest-continue
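The Modelfile flow these hunks describe can be sketched end to end. The Modelfile contents and model names come from the diff above; the `ollama create`/`ollama list` calls assume Ollama is installed, so the snippet guards them and still writes the Modelfile when it is not:

```shell
# Sketch of the custom-model flow from the diff: write the Modelfile,
# then register it with Ollama under the name codeqwen:latest-continue.
cat > Modelfile <<'EOF'
FROM codeqwen:latest
PARAMETER num_ctx 4096
EOF

if command -v ollama >/dev/null 2>&1; then
    ollama create codeqwen:latest-continue -f Modelfile
    ollama list
else
    echo "ollama not on PATH; Modelfile written for later use"
fi
```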

@@ -153,10 +153,10 @@ sudo dpkg -i *.deb
 ### Setup Python Environment
-Download and install the Miniconda as follows if you don't have conda installed on your machine:
+Download and install the Miniforge as follows if you don't have conda installed on your machine:
 ```bash
-wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
-bash Miniconda3-latest-Linux-x86_64.sh
+wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
+bash Miniforge3-Linux-x86_64.sh
 source ~/.bashrc
 ```
@@ -259,7 +259,7 @@ To use GPU acceleration on Linux, several environment variables are required or
 Now let's play with a real LLM. We'll be using the [phi-1.5](https://huggingface.co/microsoft/phi-1_5) model, a 1.3 billion parameter LLM for this demostration. Follow the steps below to setup and run the model, and observe how it responds to a prompt "What is AI?".
-* Step 1: Open the **Anaconda Prompt** and activate the Python environment `llm` you previously created:
+* Step 1: Activate the Python environment `llm` you previously created:
 ```bash
 conda activate llm
 ```
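The new wget/bash pair above is specific to x86_64 Linux. A hedged sketch of deriving the matching installer name on other Unix hosts, assuming the `Miniforge3-<OS>-<arch>.sh` naming pattern visible in the URL above carries over:

```shell
# Build the Miniforge installer filename for the current platform,
# following the Miniforge3-<OS>-<arch>.sh naming used in the hunk above.
OS="$(uname -s)"        # e.g. Linux or Darwin
ARCH="$(uname -m)"      # e.g. x86_64 or aarch64
INSTALLER="Miniforge3-${OS}-${ARCH}.sh"
echo "https://github.com/conda-forge/miniforge/releases/latest/download/${INSTALLER}"
```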

@@ -39,13 +39,13 @@ Download and install the latest GPU driver from the [official Intel download pag
 ### Setup Python Environment
-Visit [Miniconda installation page](https://docs.anaconda.com/free/miniconda/), download the **Miniconda installer for Windows**, and follow the instructions to complete the installation.
+Visit [Miniforge installation page](https://conda-forge.org/download/), download the **Miniforge installer for Windows**, and follow the instructions to complete the installation.
 <div align="center">
-  <img src="https://llm-assets.readthedocs.io/en/latest/_images/quickstart_windows_gpu_5.png" width=70%/>
+  <img src="https://llm-assets.readthedocs.io/en/latest/_images/quickstart_windows_gpu_miniforge_download.png" width=80%/>
 </div>
-After installation, open the **Anaconda Prompt**, create a new python environment `llm`:
+After installation, open the **Miniforge Prompt**, create a new python environment `llm`:
 ```cmd
 conda create -n llm python=3.11 libuv
 ```
@@ -83,7 +83,7 @@ With the `llm` environment active, use `pip` to install `ipex-llm` for GPU. Choo
 You can verify if `ipex-llm` is successfully installed following below steps.
 ### Step 1: Runtime Configurations
-* Open the **Anaconda Prompt** and activate the Python environment `llm` you previously created:
+* Open the **Miniforge Prompt** and activate the Python environment `llm` you previously created:
 ```cmd
 conda activate llm
 ```
@@ -117,9 +117,9 @@ You can verify if `ipex-llm` is successfully installed following below steps.
 ### Step 2: Run Python Code
-* Launch the Python interactive shell by typing `python` in the Anaconda prompt window and then press Enter.
-* Copy following code to Anaconda prompt **line by line** and press Enter **after copying each line**.
+* Launch the Python interactive shell by typing `python` in the Miniforge Prompt window and then press Enter.
+* Copy following code to Miniforge Prompt **line by line** and press Enter **after copying each line**.
 ```python
 import torch
 from ipex_llm.transformers import AutoModel,AutoModelForCausalLM
@@ -211,7 +211,7 @@ Now let's play with a real LLM. We'll be using the [Qwen-1.8B-Chat](https://hugg
    .. tab:: ModelScope
-      Please first run following command in Anaconda Prompt to install ModelScope:
+      Please first run following command in Miniforge Prompt to install ModelScope:
       .. code-block:: cmd
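The line-by-line interactive verification in the hunks above can also be run as a single non-interactive check. This is a convenience sketch, not part of the guide, and it assumes `python` resolves inside the activated `llm` environment:

```shell
# One-shot version of the verification step: try the same imports as the
# interactive session and report the result instead of raising a traceback.
if python -c "import torch; from ipex_llm.transformers import AutoModelForCausalLM" 2>/dev/null; then
    echo "ipex-llm import check: OK"
else
    echo "ipex-llm import check: FAILED (is the llm env active?)"
fi
```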

@@ -75,7 +75,7 @@ Under your current directory, exceuting below command to do inference with Llama
    .. tab:: Windows
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
       .. code-block:: bash
@@ -94,7 +94,7 @@ Under your current directory, you can also execute below command to have interac
    .. tab:: Windows
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
       .. code-block:: bash
@@ -138,7 +138,7 @@ Launch the Ollama service:
    .. tab:: Windows
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
       .. code-block:: bash
@@ -183,7 +183,7 @@ Keep the Ollama service on and open another terminal and run llama3 with `ollama
    .. tab:: Windows
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
       .. code-block:: bash

@@ -49,7 +49,7 @@ To use `llama.cpp` with IPEX-LLM, first ensure that `ipex-llm[cpp]` is installed
    .. note::
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
       .. code-block:: cmd
@@ -86,7 +86,7 @@ Then you can use following command to initialize `llama.cpp` with IPEX-LLM:
    .. tab:: Windows
-      Please run the following command with **administrator privilege in Anaconda Prompt**.
+      Please run the following command with **administrator privilege in Miniforge Prompt**.
       .. code-block:: bash
@@ -127,7 +127,7 @@ To use GPU acceleration, several environment variables are required or recommend
    .. tab:: Windows
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
       .. code-block:: bash
@@ -169,7 +169,7 @@ Before running, you should download or copy community GGUF model to your current
    .. tab:: Windows
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
       .. code-block:: bash

@@ -39,7 +39,7 @@ Activate the `llm-cpp` conda environment and initialize Ollama by executing the
    .. tab:: Windows
-      Please run the following command with **administrator privilege in Anaconda Prompt**.
+      Please run the following command with **administrator privilege in Miniforge Prompt**.
       .. code-block:: bash
@@ -76,7 +76,7 @@ You may launch the Ollama service as below:
    .. tab:: Windows
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
       .. code-block:: bash
@@ -149,7 +149,7 @@ model**, e.g. `dolphin-phi`.
    .. tab:: Windows
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
       .. code-block:: bash
@@ -187,7 +187,7 @@ Then you can create the model in Ollama by `ollama create example -f Modelfile`
    .. tab:: Windows
-      Please run the following command in Anaconda Prompt.
+      Please run the following command in Miniforge Prompt.
       .. code-block:: bash

@@ -30,7 +30,7 @@ Download the `text-generation-webui` with IPEX-LLM integrations from [this link]
 #### Install Dependencies
-Open **Anaconda Prompt** and activate the conda environment you have created in [section 1](#1-install-ipex-llm), e.g., `llm`.
+Open **Miniforge Prompt** and activate the conda environment you have created in [section 1](#1-install-ipex-llm), e.g., `llm`.
 ```
 conda activate llm
 ```
@@ -50,7 +50,7 @@ pip install -r extensions/openai/requirements.txt
 ### 3 Start the WebUI Server
 #### Set Environment Variables
-Configure oneAPI variables by running the following command in **Anaconda Prompt**:
+Configure oneAPI variables by running the following command in **Miniforge Prompt**:
 ```eval_rst
 .. note::
@@ -67,7 +67,7 @@ set BIGDL_LLM_XMX_DISABLED=1
 ```
 #### Launch the Server
-In **Anaconda Prompt** with the conda environment `llm` activated, navigate to the `text-generation-webui` folder and execute the following commands (You can optionally lanch the server with or without the API service):
+In **Miniforge Prompt** with the conda environment `llm` activated, navigate to the `text-generation-webui` folder and execute the following commands (You can optionally lanch the server with or without the API service):
 ##### without API service
 ```cmd
@@ -154,7 +154,7 @@ Enter prompts into the textbox at the bottom and press the **Generate** button t
 #### Exit the WebUI
-To shut down the WebUI server, use **Ctrl+C** in the **Anaconda Prompt** terminal where the WebUI Server is runing, then close your browser tab.
+To shut down the WebUI server, use **Ctrl+C** in the **Miniforge Prompt** terminal where the WebUI Server is runing, then close your browser tab.
 ### 5. Advanced Usage
@@ -203,7 +203,7 @@ The first response to user prompt might be slower than expected, with delays of
 During model loading, you may encounter an **ImportError** like `ImportError: This modeling file requires the following packages that were not found in your environment`. This indicates certain packages required by the model are absent from your environment. Detailed instructions for installing these necessary packages can be found at the bottom of the error messages. Take the following steps to fix these errors:
-- Exit the WebUI Server by pressing **Ctrl+C** in the **Anaconda Prompt** terminal.
+- Exit the WebUI Server by pressing **Ctrl+C** in the **Miniforge Prompt** terminal.
 - Install the missing pip packages as specified in the error message
 - Restart the WebUI Server.