# llm_client

A Python package for interacting with LLMs through Ollama, supporting both a remote API and local Ollama instances.
## Installation

Install directly from GitHub:

```bash
pip install git+https://github.com/lasseedfast/_llm.git
```

Or clone and install for development:

```bash
git clone https://github.com/lasseedfast/_llm.git
cd _llm
pip install -e .
```
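To verify the installation, a quick import check works (the `LLM` class is the package's entry point, as shown in the usage examples below):

```bash
python -c "from llm_client import LLM; print('llm_client imported OK')"
```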
## Dependencies

This package requires:

- env_manager: `pip install git+https://github.com/lasseedfast/env_manager.git`
- colorprinter: `pip install git+https://github.com/lasseedfast/colorprinter.git`
- ollama: For local model inference
- tiktoken: For token counting
- requests: For API communication
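If any of these are missing from your environment, they can be installed by hand. The first two commands are the ones given in the list above; the last three are ordinary PyPI packages:

```bash
# GitHub-hosted helper packages
pip install git+https://github.com/lasseedfast/env_manager.git
pip install git+https://github.com/lasseedfast/colorprinter.git

# PyPI packages
pip install ollama tiktoken requests
```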
## Environment Variables

The package requires several environment variables to be set:

- `LLM_API_URL`: URL of the Ollama API
- `LLM_API_USER`: Username for API authentication
- `LLM_API_PWD_LASSE`: Password for API authentication
- `LLM_MODEL`: Standard model name
- `LLM_MODEL_SMALL`: Small model name
- `LLM_MODEL_VISION`: Vision model name
- `LLM_MODEL_LARGE`: Large-context model name
- `LLM_MODEL_REASONING`: Reasoning model name
- `LLM_MODEL_TOOLS`: Tools model name

These can be set in a `.env` file in your project directory or in the ArangoDB environment document in the div database.
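As an illustration, a `.env` file might look like the following. The URL uses Ollama's default local port, and the model names and credentials are placeholders; substitute whatever models your Ollama instance actually serves:

```bash
# .env — placeholder values; adjust for your setup
LLM_API_URL=http://localhost:11434
LLM_API_USER=myuser
LLM_API_PWD_LASSE=changeme
LLM_MODEL=llama3.1
LLM_MODEL_SMALL=llama3.2:3b
LLM_MODEL_VISION=llama3.2-vision
LLM_MODEL_LARGE=llama3.1:70b
LLM_MODEL_REASONING=deepseek-r1
LLM_MODEL_TOOLS=llama3.1
```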
## Basic Usage

```python
from llm_client import LLM

# Initialize the LLM client
llm = LLM()

# Generate a response
result = llm.generate(
    query="I want to add 2 and 2",
)
print(result.content)
```
## Advanced Usage

### Working with Images

```python
from llm_client import LLM

llm = LLM()

# Pass image paths alongside the query and select the vision model
response = llm.generate(
    query="What's in this image?",
    images=["path/to/image.jpg"],
    model="vision"
)
```
### Streaming Responses

```python
from llm_client import LLM

llm = LLM()

# With stream=True, generate() yields (chunk_type, chunk) tuples as they arrive
for chunk_type, chunk in llm.generate(
    query="Write a paragraph about AI",
    stream=True
):
    print(f"{chunk_type}: {chunk}")
```
### Using the Async API

```python
import asyncio

from llm_client import LLM

async def main():
    llm = LLM()
    # Await the response without blocking the event loop
    response = await llm.async_generate(
        query="What is machine learning?",
        model="standard"
    )
    print(response)

asyncio.run(main())
```
## License

MIT