
# BigDL LLM

## llm-cli

`llm-cli` is a command-line interface tool that makes it easy to run llama/gptneox/bloom models and generate text from a given prompt.

### Usage

```bash
llm-cli -x <llama/gptneox/bloom> [-h] [args]
```

`args` are the arguments passed through to the specified model program. You can use `-x MODEL_FAMILY -h` to retrieve the parameter list for a specific `MODEL_FAMILY`, for example:

```bash
llm-cli.sh -x llama -h

# Output:
# usage: main-llama [options]
#
# options:
#   -h, --help            show this help message and exit
#   -i, --interactive     run in interactive mode
#   --interactive-first   run in interactive mode and wait for input right away
#   ...
```

### Examples

Here are some examples of how to use the `llm-cli` tool:

Completion:

```bash
llm-cli.sh -t 16 -x llama -m ./llm-llama-model.bin -p 'Once upon a time,'
```

Chatting:

```bash
llm-cli.sh -t 16 -x llama -m ./llm-llama-model.bin -i --color
```
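The same pattern applies to the other supported model families; only the `-x` value and the model file change. As a sketch (the model file name below is hypothetical, and flag support can vary between model programs, so check `-x gptneox -h` first):

```bash
# Hypothetical example: completion with a GPT-NeoX family model.
# -t sets the thread count, -x selects the model family,
# -m points to your converted model file, -p supplies the prompt.
llm-cli.sh -t 16 -x gptneox -m ./llm-gptneox-model.bin -p 'Once upon a time,'
```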

Feel free to explore different options and experiment with the llama/gptneox/bloom models using `llm-cli`!