# IPEX-LLM INT4 Optimization for Large Language Model
You can use the `optimize_model` API to accelerate general PyTorch models on Intel servers and PCs; a minimal usage sketch follows the model list below. This directory contains example scripts to help you quickly get started using IPEX-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it. The examples currently cover the following models:

- aquila2
- bark
- bert
- bluelm
- chatglm
- chatglm3
- codegeex2
- codegemma
- codellama
- codeshell
- cohere
- deciLM-7b
- deepseek
- deepseek-moe
- distil-whisper
- flan-t5
- fuyu
- internlm-xcomposer
- internlm2
- llama2
- llama3
- llava
- mamba
- meta-llama
- minicpm
- mistral
- mixtral
- openai-whisper
- phi-1_5
- phi-2
- phi-3
- phixtral
- qwen-vl
- qwen1.5
- skywork
- solar
- stablelm
- wizardcoder-python
- yi
- yuan2
- ziya
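As a quick illustration, here is a minimal sketch of the `optimize_model` flow. The model id and generation arguments are placeholders chosen for this sketch; each model folder contains the exact, tested script for that model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

from ipex_llm import optimize_model

# Placeholder model id for illustration; substitute any model listed above
model_path = "meta-llama/Llama-2-7b-chat-hf"

# Load the model with Hugging Face Transformers as usual
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             torch_dtype=torch.float32,
                                             trust_remote_code=True)

# Apply IPEX-LLM optimization with one API call (INT4 by default)
model = optimize_model(model)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids

# Run inference with the optimized model
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```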
## Recommended Requirements
To run these examples, we recommend using Intel® Xeon® processors (server) or a 12th Gen or later Intel® Core™ processor (client).
For OS, IPEX-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.
## Best Known Configuration on Linux
For better performance, it is recommended to set environment variables on Linux with the help of IPEX-LLM:

```bash
pip install ipex-llm
source ipex-llm-init
```
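After that, a typical run of one of the example folders looks like the following. The folder name and script arguments are illustrative assumptions; check each folder's README for the exact command line.

```bash
# Illustrative only: initialize the environment, then run a model's
# generate script from its dedicated example folder
source ipex-llm-init
cd llama2
python ./generate.py --prompt "What is AI?" --n-predict 32
```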