BigDL-LLM Installation: CPU
===========================
Quick Installation
------------------
Install BigDL-LLM for CPU support using pip:

.. code-block:: bash

   pip install bigdl-llm[all]
.. note::

   The ``all`` option will install all the dependencies required for common LLM application development.
.. important::

   ``bigdl-llm`` is tested with Python 3.9, which is recommended for the best experience.
Recommended Requirements
------------------------
The following hardware and operating systems are recommended for a smooth BigDL-LLM optimization experience on CPU:
- Hardware

  - PCs equipped with 12th Gen Intel® Core™ processor or higher, and at least 16GB RAM
  - Servers equipped with Intel® Xeon® processors, and at least 32GB RAM

- Operating System

  - Ubuntu 20.04 or later
  - CentOS 7 or later
  - Windows 10/11, with or without WSL
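The requirements above can be checked programmatically. Below is a minimal sketch (the ``check_requirements`` helper is hypothetical, not part of BigDL-LLM) that reports the CPU count and, on Linux, the total RAM for comparison against the recommended minimums:

```python
import os
import platform


def check_requirements():
    """Report CPU count and total RAM (GB) for comparison against
    the recommended minimums (16GB for PCs, 32GB for servers)."""
    cpus = os.cpu_count() or 1
    try:
        # os.sysconf is available on Linux; may be absent elsewhere
        page_size = os.sysconf("SC_PAGE_SIZE")
        num_pages = os.sysconf("SC_PHYS_PAGES")
        ram_gb = page_size * num_pages / 1024 ** 3
    except (AttributeError, ValueError, OSError):
        ram_gb = None  # not determinable on this platform
    return {"system": platform.system(), "cpus": cpus, "ram_gb": ram_gb}


info = check_requirements()
print(info)
```

This only inspects the local machine; it does not verify the CPU generation, which still needs to be checked against the list above.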
Environment Setup
-----------------
To achieve optimal performance when running LLM models with BigDL-LLM optimizations on Intel CPUs, here are some best practices for setting up the environment:
First, we recommend using Conda to create a Python 3.9 environment:

.. code-block:: bash

   conda create -n llm python=3.9
   conda activate llm

   pip install bigdl-llm[all]  # install bigdl-llm for CPU with 'all' option
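After activating the environment, a quick sanity check confirms the interpreter version matches the one ``bigdl-llm`` is tested with. A minimal sketch (the warning message is illustrative, not from BigDL-LLM):

```python
import sys

# bigdl-llm is tested with Python 3.9; other versions may still work
# but are not recommended.
print(f"Python {sys.version_info.major}.{sys.version_info.minor}")
if sys.version_info[:2] != (3, 9):
    print("Warning: Python 3.9 is recommended for bigdl-llm")
```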
Then, to run an LLM model with BigDL-LLM optimizations (taking ``example.py`` as an example):
.. tabs::

   .. tab:: Client

      It is recommended to run directly with full utilization of all CPU cores:

      .. code-block:: bash

         python example.py

   .. tab:: Server

      It is recommended to run with all the physical cores of a single socket:

      .. code-block:: bash

         # e.g. for a server with 48 cores per socket
         export OMP_NUM_THREADS=48
         numactl -C 0-47 -m 0 python example.py
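On servers where the core count per socket is not known in advance, the hard-coded ``48`` above can be derived at runtime. A sketch assuming Linux with ``lscpu`` from util-linux available (the fallback value and variable names are illustrative):

```shell
# Derive the number of physical cores per socket from lscpu
CORES_PER_SOCKET=$(lscpu | awk -F: '/^Core\(s\) per socket/ {gsub(/ /, "", $2); print $2}')
CORES_PER_SOCKET=${CORES_PER_SOCKET:-1}  # fallback if lscpu output is unavailable
export OMP_NUM_THREADS=$CORES_PER_SOCKET
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
# Then pin the run to the cores and memory of socket 0, e.g.:
# numactl -C 0-$((CORES_PER_SOCKET - 1)) -m 0 python example.py
```

Binding both CPUs (``-C``) and memory (``-m``) to the same socket avoids cross-socket memory traffic, which is the reason for the single-socket recommendation above.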