# _llm
A Python package for interacting with large language models through Ollama, supporting both a remote API and local Ollama instances.
## Installation

Install directly from GitHub:

```bash
pip install git+https://github.com/lasseedfast/_llm.git
```

Or clone and install for development:

```bash
git clone https://github.com/lasseedfast/_llm.git
cd _llm
pip install -e .
```
## Dependencies

This package requires:

- env_manager: `pip install git+https://github.com/lasseedfast/env_manager.git`
- colorprinter: `pip install git+https://github.com/lasseedfast/colorprinter.git`
- ollama: For local model inference
- tiktoken: For token counting
- requests: For API communication
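For reference, token counting with tiktoken typically looks like the sketch below. The encoding name is an assumption for illustration; it is not necessarily how _llm uses tiktoken internally:

```python
import tiktoken

# Count the tokens in a prompt. "cl100k_base" is a common general-purpose
# encoding; the encoding _llm actually uses is an assumption here.
enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("I want to add 2 and 2")
print(len(tokens))
```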
## Environment Variables

The package requires several environment variables to be set:

- `LLM_API_URL`: URL of the Ollama API
- `LLM_API_USER`: Username for API authentication
- `LLM_API_PWD_LASSE`: Password for API authentication
- `LLM_MODEL`: Standard model name
- `LLM_MODEL_SMALL`: Small model name
- `LLM_MODEL_VISION`: Vision model name
- `LLM_MODEL_LARGE`: Large context model name
- `LLM_MODEL_REASONING`: Reasoning model name
- `LLM_MODEL_TOOLS`: Tools model name

These can be set in a `.env` file in your project directory or in the ArangoDB environment document in the `div` database.
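For example, a `.env` file might look like this. All values below are placeholders rather than defaults shipped with the package; `http://localhost:11434` is Ollama's standard local port:

```
LLM_API_URL=http://localhost:11434
LLM_API_USER=your-username
LLM_API_PWD_LASSE=your-password
LLM_MODEL=llama3.1
LLM_MODEL_SMALL=llama3.2:1b
LLM_MODEL_VISION=llama3.2-vision
LLM_MODEL_LARGE=llama3.1:70b
LLM_MODEL_REASONING=deepseek-r1
LLM_MODEL_TOOLS=llama3.1
```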
## Basic Usage

```python
from _llm import LLM

# Initialize the LLM
llm = LLM()

# Generate a response
result = llm.generate(
    query="I want to add 2 and 2",
)

print(result.content)
```
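The `model` parameter accepts aliases that appear to correspond to the `LLM_MODEL_*` variables above ("vision" and "standard" are used in the examples below). A minimal sketch, assuming "small" follows the same pattern:

```python
from _llm import LLM

llm = LLM()

# Assumption: "small" selects the model named in LLM_MODEL_SMALL,
# mirroring the "standard"/"vision" aliases used in the other examples.
quick = llm.generate(query="In one sentence, what is Ollama?", model="small")
print(quick.content)
```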
## Advanced Usage

### Working with Images

```python
from _llm import LLM

llm = LLM()

response = llm.generate(
    query="What's in this image?",
    images=["path/to/image.jpg"],
    model="vision",
)
```
### Streaming Responses

```python
from _llm import LLM

llm = LLM()

for chunk_type, chunk in llm.generate(
    query="Write a paragraph about AI",
    stream=True,
):
    print(f"{chunk_type}: {chunk}")
```
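With `stream=True`, `generate` yields `(chunk_type, chunk)` tuples as they arrive. To assemble the full response text, a minimal sketch; the set of possible `chunk_type` values is an assumption here, so this simply concatenates every chunk:

```python
from _llm import LLM

llm = LLM()

# Collect streamed chunks into one string. The chunk_type values are not
# documented here, so every chunk is kept regardless of type.
parts = []
for chunk_type, chunk in llm.generate(
    query="Write a paragraph about AI",
    stream=True,
):
    parts.append(str(chunk))

full_text = "".join(parts)
print(full_text)
```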
### Using the Async API

```python
import asyncio
from _llm import LLM

async def main():
    llm = LLM()
    response = await llm.async_generate(
        query="What is machine learning?",
        model="standard",
    )
    print(response)

asyncio.run(main())
```
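Since `async_generate` is awaitable, several prompts can presumably be run concurrently with `asyncio.gather`. A minimal sketch, assuming concurrent awaits are safe on a single `LLM` instance:

```python
import asyncio
from _llm import LLM

async def main():
    llm = LLM()
    questions = ["What is overfitting?", "What is regularization?"]
    # Assumption: one LLM instance can serve several in-flight requests.
    responses = await asyncio.gather(
        *(llm.async_generate(query=q) for q in questions)
    )
    for q, r in zip(questions, responses):
        print(f"{q} -> {r}")

asyncio.run(main())
```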
## License

MIT