Commit c0497a7

Add completion method to Promptic for direct LLM interactions (#38)
- Implement new `completion()` method to provide direct access to LLM responses
- Add warning when using completion with state/memory enabled
- Update README.md with documentation and an example for the new method
- Add comprehensive test coverage for the new completion method
- Bump version to 5.3.0 to reflect the new feature
1 parent 9d0e634 commit c0497a7

3 files changed: 115 additions & 8 deletions

README.md

Lines changed: 25 additions & 4 deletions

@@ -258,14 +258,12 @@ print(greet("John"))
 
 ```
 
-
 ### Observability
 
 Promptic integrates with [Weave](https://wandb.ai) to trace function calls and LLM interactions.
 
 <img width="981" alt="Screenshot 2025-02-07 at 6 02 25 PM" src="https://github.com/user-attachments/assets/3ccf4602-3557-455a-838f-7e4b8c2ec21a" />
 
-
 ```py
 # examples/weave_integration.py
 
@@ -303,8 +301,6 @@ print("".join(write_poem("artificial intelligence")))
 
 ```
 
-
-
 ### Error Handling and Dry Runs
 
 Dry runs allow you to see which tools will be called and their arguments without invoking the decorated tool functions. You can also enable debug mode for more detailed logging.
@@ -614,6 +610,31 @@ Base class for managing conversation memory and state. Can be extended to implem
 - `get_messages(prompt: str = None, limit: int = None) -> List[dict]`: Retrieve conversation history, optionally limited to the most recent messages and filtered by a prompt.
 - `clear()`: Clear all stored messages.
 
+### `Promptic`
+
+The main class for creating LLM-powered functions and managing conversations.
+
+#### Methods
+
+- `message(message: str, **kwargs)`: Send a message directly to the LLM and get a response. Useful for follow-up questions or direct interactions.
+- `completion(messages: list[dict], **kwargs)`: Send a list of messages directly to the LLM and get the raw completion response. Useful for more control over the conversation flow.
+
+```python
+from promptic import Promptic
+
+p = Promptic(model="gpt-4o-mini")
+messages = [
+    {"role": "user", "content": "What is the capital of France?"},
+    {"role": "assistant", "content": "The capital of France is Paris."},
+    {"role": "user", "content": "What is its population?"}
+]
+response = p.completion(messages)
+print(response.choices[0].message.content)
+```
+
+- `tool(fn)`: Register a function as a tool that can be called by the LLM.
+- `clear()`: Clear all stored messages from memory. Raises ValueError if memory/state is not enabled.
+
 #### Example
 
 ```py
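One point worth spelling out from the API docs added above: `message()` wraps a single user turn, while `completion()` hands the caller-assembled message list to the model and returns the raw response. A minimal sketch contrasting the two (the `temperature=0` passthrough is an assumption based on the documented `**kwargs` forwarding, not something this diff demonstrates):

```python
from promptic import Promptic

p = Promptic(model="gpt-4o-mini")

# Single turn: message() builds the one-item message list internally.
reply = p.message("What is the capital of France?")

# Explicit history: completion() leaves conversation state to the caller.
history = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What is its population?"},
]
response = p.completion(history, temperature=0)  # extra kwargs forwarded
print(response.choices[0].message.content)
```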

promptic.py

Lines changed: 30 additions & 4 deletions

@@ -3,21 +3,21 @@
 
 warnings.filterwarnings("ignore", message="Valid config keys have changed in V2:*")
 
+import base64
 import inspect
 import json
 import logging
 import re
-import base64
 from functools import wraps
 from textwrap import dedent
-from typing import Callable, Dict, Any, List, Optional, Union
+from typing import Any, Callable, Dict, List, Optional, Union
 
 import litellm
 from jsonschema import validate as validate_json_schema
-from pydantic import BaseModel
 from litellm import completion as litellm_completion
+from pydantic import BaseModel
 
-__version__ = "5.2.1"
+__version__ = "5.3.0"
 
 SystemPrompt = Optional[Union[str, List[str], List[Dict[str, str]]]]
 
@@ -190,6 +190,32 @@ def _completion(self, messages: list[dict], **kwargs):
 
         return completion_messages, completion
 
+    def completion(self, messages: list[dict], **kwargs):
+        """Return the raw completion response from the LLM for a list of messages.
+
+        This method provides direct access to the underlying LLM completion API, allowing
+        more control over the conversation flow. Unlike the message method, it accepts
+        a list of messages and returns the raw completion response.
+
+        Args:
+            messages (list[dict]): A list of message dictionaries, each with 'role' and 'content' keys.
+                Example: [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi"}]
+            **kwargs: Additional arguments passed to the completion function.
+
+        Returns:
+            The raw completion response from the LLM.
+
+        Warning:
+            If state/memory is enabled, this method will warn that it's being called directly
+            as it may lead to unexpected behavior with conversation history.
+        """
+        if self.state:
+            warnings.warn(
+                "State is enabled, but completion is being called directly. This can cause unexpected behavior.",
+                UserWarning,
+            )
+        return self._completion(messages, **kwargs)[1]
+
     def message(self, message: str, **kwargs):
         messages = [{"content": message, "role": "user"}]
         completion_messages, response = self._completion(messages, **kwargs)
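The guard above only fires when state is enabled, since a direct `completion()` call bypasses the stored conversation history. A small sketch of observing the warning from the caller's side (it assumes, as the test below does, that constructing `Promptic(memory=True)` populates `self.state`):

```python
import warnings

from promptic import Promptic

p = Promptic(model="gpt-4o-mini", memory=True)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # ensure the UserWarning is not filtered out
    p.completion([{"role": "user", "content": "Hello"}])

# The direct call skipped the stored history, so the library flagged it.
assert any("State is enabled" in str(w.message) for w in caught)
```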

tests/test_promptic.py

Lines changed: 60 additions & 0 deletions

@@ -1563,3 +1563,63 @@ def analyze_image_feature(img: ImageBytes, feature: str):
     text_result = analyze_image_feature(image_data, "text or letters")
     assert isinstance(text_result, str)
     assert "ai" in text_result.lower() or "oc" in text_result.lower()
+
+
+@pytest.mark.parametrize("model", CHEAP_MODELS)
+@pytest.mark.parametrize(
+    "create_completion_fn", [openai_completion_fn, litellm_completion]
+)
+def test_completion_method(model, create_completion_fn):
+    """Test the direct completion method of Promptic"""
+    if create_completion_fn == openai_completion_fn and not model.startswith("gpt"):
+        pytest.skip("Non-GPT models are not supported with OpenAI client")
+
+    p = Promptic(
+        model=model,
+        temperature=0,
+        timeout=5,
+        create_completion_fn=create_completion_fn,
+    )
+
+    # Test basic completion with a single message
+    messages = [{"role": "user", "content": "What is the capital of France?"}]
+    response = p.completion(messages)
+    assert "Paris" in response.choices[0].message.content
+
+    # Test completion with multiple messages
+    messages = [
+        {"role": "user", "content": "What is the capital of France?"},
+        {"role": "assistant", "content": "The capital of France is Paris."},
+        {"role": "user", "content": "What is its population?"},
+    ]
+    response = p.completion(messages)
+    assert any(
+        word in response.choices[0].message.content.lower()
+        for word in ["million", "inhabitants", "people"]
+    )
+
+    # Test completion with system message
+    p_with_system = Promptic(
+        model=model,
+        temperature=0,
+        timeout=5,
+        system="You are a geography expert",
+        create_completion_fn=create_completion_fn,
+    )
+    messages = [{"role": "user", "content": "What is the capital of France?"}]
+    response = p_with_system.completion(messages)
+    assert "Paris" in response.choices[0].message.content
+
+    # Test warning when using with state enabled
+    p_with_state = Promptic(
+        model=model,
+        temperature=0,
+        timeout=5,
+        memory=True,
+        create_completion_fn=create_completion_fn,
+    )
+    with pytest.warns(
+        UserWarning, match="State is enabled, but completion is being called directly"
+    ):
+        response = p_with_state.completion(messages)
+        assert "Paris" in response.choices[0].message.content
