Example folders in this directory:

aquila, aquila2, baichuan, baichuan2, bluelm, chatglm, chatglm2, chatglm3, codellama, codeshell, distil-whisper, dolly_v1, dolly_v2, falcon, flan-t5, fuyu, internlm, internlm-xcomposer, internlm2, llama2, mistral, mixtral, moss, mpt, phi-1_5, phi-2, phixtral, phoenix, qwen, qwen-vl, qwen1.5, redpajama, replit, skywork, solar, starcoder, vicuna, whisper, wizardcoder-python, yi, yuan2, ziya
BigDL-LLM Transformers INT4 Optimization for Large Language Models
You can use BigDL-LLM to run any Hugging Face Transformers model with INT4 optimizations on either servers or laptops. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
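To illustrate what INT4 optimization means, the sketch below shows symmetric 4-bit weight quantization in plain Python. This is a simplified illustration of the idea, not BigDL-LLM's actual implementation: real INT4 kernels quantize per-group, pack two values per byte, and run optimized compute.

```python
# Illustrative sketch of symmetric INT4 quantization -- the kind of
# weight compression that lets a model run in roughly a quarter of
# its FP16 memory footprint. Not BigDL-LLM's actual kernel.

def quantize_int4(weights):
    """Map floats to signed 4-bit integers in [-8, 7] plus one scale."""
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from INT4 values."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.07]
q, scale = quantize_int4(weights)
approx = dequantize_int4(q, scale)
```

Each quantized value fits in 4 bits, and dequantization reconstructs the original weights to within one quantization step (`scale`), which is why generation quality is largely preserved.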
Recommended Requirements
To run the examples, we recommend using an Intel® Xeon® processor (server), or a 12th Gen or later Intel® Core™ processor (client).
For OS, BigDL-LLM supports Ubuntu 20.04 or later (glibc>=2.17), CentOS 7 or later (glibc>=2.17), and Windows 10/11.
Best Known Configuration on Linux
For better performance, it is recommended to set the environment variables provided by BigDL-LLM on Linux:

```bash
pip install bigdl-llm
source bigdl-llm-init
```